
Inquiries continue after discovery of human remains

Forensic work is ongoing and police say they are not aware of any suspicious circumstances.






Should Mia Freedman Apologise?

I went to Australia last month as a guest of the Opera House for the All About Women symposium.  As part of the event, I agreed to do some media appearances on ABC, including the Drum and Q&A.

All About Women was a fantastic day and I feel privileged to have met so many interesting and talented people there, including people I would put in the category of genuine modern heroes.

As for Q&A… this is the Australian equivalent of Question Time, so I went anticipating a varied panel with a wide variety of opinions jostling to be heard. I was told Tony Jones was a strong moderator, so I went expecting him to rein in the conversation if things went off-piste. This was to be Q&A's first all-woman panel and expectations were high. The topics they circulated beforehand indicated I was in for a grilling while everyone else got softball. I went, not to put too fine a point on it, loaded for bear.

I thought it went pretty well. Opinions differed. Points of view were exchanged. Margaret Thatcher died. All in all, a good night. The producers seemed very pleased with the outcome.

So imagine my surprise when, weeks later, fellow guest Mia Freedman was still flogging her commentary about the appearance as content on her site MamaMia. The topic: should she apologise for continually insulting sex workers?

During the show Mia kept falling back on sloppy, ill-thought-out, pat little lines that were easily countered. I found, to my surprise, a lot of common ground with Germaine Greer, hardly known as a fan of sexual entertainment, on the fact that conditions of labour, and not sex per se, are the most pressing issue for sex workers worldwide right now. Then in comes Mia with her assumptions about the people who do sex work (men AND women) and the people who hire them (men AND women). With Tony backing her up. So much for the disinterested moderator, eh? Maybe he felt bad for her. I don't know.

Here's the thing. I agree with Mia on this: I don't think she should apologise.

Why not? Because if she did it would be insincere. My first impression when we met backstage was that she was insincere, and damn it, a successful lady editor like her should have the guts to be true to herself and stand by her opinions no matter what they are.

Because the general public needs to see the kind of uninformed nonsense that sex workers who stick their heads above the parapet get every single day.

Because for every 100 people who visit her site, there is one who is both a parent AND a sex worker, who knows what she is saying is nonsense. Yes, that's right Mia: sex workers raise families too. It's almost as if we're people.

Because she is a magazine editor who cares deeply about hits and attention, and clearly this is delivering on every level.

Because the sort of people who think sex workers should be topics of discussion rather than active participants are fighting a losing battle.

Keep digging, Mia. I ain't gonna stop you. Keep writing off other people simply because they didn't have the privileges you did or didn't make the same choices you did, and you can't accept that. Get it off your chest, lock up your children, whatever you think you need to do. Perhaps you have some issues about sex you want to work out in public, or this wouldn't be the biggest issue on your agenda weeks after the show went to air?

Mia, you have my express permission not to apologise. No, don't thank me… I insist.





‘We have to fight for the commanding heights of American culture’

American Culture Project’s John Tillman on winning through upstream engagement





Search Central Live 2024 in Bucharest, Romania

We're excited to announce a Search Central Live event in Bucharest, Romania on April 4, 2024. Search Central Live is our global Google Search event series specifically for site owners, publishers, and SEOs.





Improving Search Console ownership token management

This post discusses an update to Search Console's user and permission management that improves accuracy and reflects the actual state of unused ownership tokens.





What is a burofax for? Claiming unpaid invoices

One of the first steps usually taken when faced with an unpaid invoice is to ask the client, politely, for the amount owed. This is usually done by telephone or in writing by email. However, when the invoice remains unpaid despite our insistence, there comes a point at which we have […]





API Management: The missing link for SOA success

Nearly 2 years ago I tweeted:



Well, unfortunately, I had it a bit wrong.

APIs and services do have a very direct, one-to-one relationship: an API is the interface of a service. What differs is the perspective: the service is about the implementation and is focused on the provider, while the API is about using the functionality and is focused on the consumer. The service, of course, is what matters to the provider, and the API is what matters to the consumer.

So it's clearly more than just a new name.

Services: If you build it will they come?

One of the most common anti-patterns of SOA is the one-service, one-client pattern. That's when the developer who wrote the service also wrote its only client. In that case there's no sharing, no common data, no common authentication and no reuse of any kind. The number one reason for SOA (improving productivity by reusing functionality as services) is gone. It's simply client-server at the cost of having to use interoperable formats like XML, JSON, XML Schema, WSDL and SOAP.

There are two primary reasons for this pattern being so prevalent. The first is a management failure whereby everyone is required to create services for whatever they do, because that's the new "blessed way". There's no architectural vision driving proper factoring; instead it's each person, or at least each team, for themselves. The resulting services are only really usable for that one scenario, so no wonder no one else uses them!

Writing services that can serve many users requires careful design, careful thinking and a willingness to invest in the common good. That goes against human intuition and will happen only if it's properly guided and incentivized. The cost of writing common services must be paid by someone, and it will not happen by itself.

That is, in effect, the second reason why this anti-pattern exists: the infrastructure in place for SOA does not support or encourage reuse. Even if you have a service that is reusable, how do you find out how well it works? How do you know how many people are using it? Do you know what time of day they use it most? Do you know which operations of your service get hit the hardest? And how do others even find out that you wrote a service that may do what they need?

SOA Governance (for which WSO2 has an excellent product: WSO2 Governance Registry) is focused not on encouraging service reuse but on governing the creation and management of services. The SOA world has lacked a solution that makes it easy for people to discover available services and to manage and monitor their consumption.

API Management

What's an API? It's the interface to a service. Simple. In other words, if you don't have any services, you have no APIs to expose and manage.

API Management is about managing the entire lifecycle of APIs. It involves someone publishing the interface of a service into a store of some kind; developers browsing the store to find APIs they care about and getting access to them (typically by acquiring an access token of some sort); and those developers then using their tokens to access the service via its interface.
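To make that lifecycle concrete, here is a minimal sketch of the developer's side of the flow: discover an API in the store, subscribe to get an access token, then call the service through the gateway with that token. The endpoints, paths and field names (store.example.com, gateway.example.com, /subscriptions, "plan") are invented purely for illustration; any real API Manager exposes its own equivalents.

import requests  # third-party HTTP client library

STORE = "https://store.example.com"      # hypothetical API store
GATEWAY = "https://gateway.example.com"  # hypothetical API gateway

# 1. Discover: browse the store for APIs matching a keyword.
apis = requests.get(f"{STORE}/apis", params={"query": "customer"}).json()
api = apis[0]  # pick the first match for this sketch

# 2. Subscribe: request access under a plan (the SLA), receiving a bearer token.
subscription = requests.post(
    f"{STORE}/subscriptions",
    json={"api": api["id"], "plan": "gold"},
).json()
token = subscription["access_token"]

# 3. Consume: call the backend service via the gateway, presenting the token.
response = requests.get(
    f"{GATEWAY}/customer/v1/accounts/123",
    headers={"Authorization": f"Bearer {token}"},
)
print(response.status_code, response.json())

The point of the sketch is that every step is self-service: nothing in this flow requires a human approval gate, which is exactly the shift discussed below.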

Why is this important? In my opinion, API Management is to SOA what Amazon EC2 is to virtualization. Virtualization had been around for a long time, but EC2 changed the game by making it trivially simple for someone to get a VM. It brought self-service, serendipitous consumption, and elasticity to virtualization. Similarly, API Management brings self-service and serendipitous consumption by allowing developers to discover, try and use services without requiring any kind of "management approval". It also frees consumers from worrying about scaling: they just indicate the desired SLA (typically in the form of a subscription plan) and it's up to the provider to make it work right.

API Management & SOA are married at the hip

If you have an SOA strategy in your organization but don't have an API Management plan, then you are doomed to failure. Notice that I didn't even talk about externally exposing APIs: even internal service consumption should be managed through an API Management system, so that everyone has clear visibility into who is using which service, how much, and when. It's patently obvious why external exposure of services requires API Management.

Chris Haddad, WSO2's VP of Technology Evangelism, recently wrote a superb whitepaper that discusses and explains the connection between SOA and API Management. Check out Promoting service reuse within your enterprise and maximizing SOA success; I can guarantee you will leave enlightened.

In May this year, a post on highscalability.com talked about how "Startups Are Creating A New System Of The World For IT". In it, the author described open source as the foundation of this new system and SOA as the load-bearing walls of the new IT landscape. I will take it to the next level and say that API Management is the roof of the new IT house.

WSO2 API Manager

We recently introduced an API Management product: WSO2 API Manager. It comes with an application for API providers to create and manage APIs, a store application for API developers to discover and consume APIs, and a gateway to route API traffic through. All parts of the product can be scaled horizontally to deal with massive loads. The WSO2 API Manager can be deployed for internal consumption, external consumption, or both. As with any other WSO2 product, it is 100% open source. After you read Chris' whitepaper, download the product, sit it next to your SOA infrastructure (whether it's from us or not) and see what happens!





Understanding ESB Performance & Benchmarking

ESB performance is a hot (and disputed) topic. In this post I don't want to talk about different vendors or different benchmarks. I'm simply trying to help people understand some of the general aspects of benchmarking ESBs and what to look out for in the results.

The general ESB model is that you have some service consumer, an ESB in the middle and a service provider (target service) that the ESB is calling. To benchmark this, you usually have a load driver client, an ESB, and a dummy service.

+-------------+      +---------+      +---------------+
| Load Driver |------|   ESB   |------| Dummy Service |
+-------------+      +---------+      +---------------+

Firstly, we want the Load Driver (LD), the ESB and the Dummy Service (DS) to be on different hardware. Why? Because we want to understand the ESB performance, not the performance of the DS or LD.

The second thing to be aware of is that the performance results are completely dependent on the hardware, memory, network, etc used. So never compare different results from different hardware.

Now there are three things we could look at:
A) Same LD, same DS, different vendors ESBs doing the same thing (e.g. content-based routing)
B) Same LD, same DS, different ESB configs for the same ESB, doing different things (e.g. static routing vs content-based routing)
C) Going via ESB compared to going Direct (e.g. LD--->DS without ESB)

Each of these provides useful data but each also needs to be understood.

Metrics
Before looking at the scenarios, let's look at how to measure the performance. The two metrics that are always a starting point in any ESB benchmark are throughput (requests/second) and latency (how long each request takes). With latency we can consider the overall latency - the time taken for a completed request as observed at the LD - and the ESB latency, which is the time the message spends in the ESB. The ESB latency can be hard to work out. A well-designed ESB will already be sending bytes to the DS before it has finished reading the bytes the LD has sent it. This is called pipelining. Some ESBs attempt to measure the ESB latency inside the ESB using clever calculations. Alternatively, scenario C (comparing via-ESB vs Direct) can give an idea of the ESB latency.
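As a rough illustration (my own sketch, not taken from any particular benchmark harness), this is how the two metrics fall out of the raw per-request timings a load driver records, and how scenario C gives a crude estimate of ESB latency; the 99th-percentile figure is just one common example of a tail statistic.

# Summarize a benchmark run from the per-request latencies collected at the LD.
def summarize(latencies_s, window_s):
    """latencies_s: per-request latencies in seconds, collected over window_s seconds."""
    throughput = len(latencies_s) / window_s                  # requests per second
    avg_latency = sum(latencies_s) / len(latencies_s)         # mean end-to-end latency
    p99 = sorted(latencies_s)[int(0.99 * len(latencies_s))]   # tail latency
    return throughput, avg_latency, p99

# Scenario C estimate of ESB latency: same load, measured with and without the ESB.
def esb_latency(avg_via_esb_s, avg_direct_s):
    return avg_via_esb_s - avg_direct_s    # e.g. 0.019 - 0.018 = 0.001 s of overhead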

But before we look at the metrics we need to understand the load driver.

There are two different models to doing Load Driving:
1) Do a realistic load test based on your requirements. For example if you know you want to support up to 50 concurrent clients each making a call every 5 seconds on average, you can simulate this.
2) Saturation! Have a large number of clients, each making a call as soon as the last one finishes.

The first one is aimed at testing what the ESB does before it is fully CPU-loaded. In other words, if you are looking to see the effect of adding an ESB, or to compare one ESB to another under realistic load, then #1 is the right approach. In this approach, looking at throughput may not be useful, because all the different approaches will have similar results: if I'm only putting in 300 requests a second on a modern system, I'm likely to see 300 requests a second. Nothing exciting. But the latency is revealing here. If one ESB responds in less time than another ESB, that's a very good sign, because with the same DS the average time per request is very telling.

On the other hand the saturation test is where the throughput is interesting. Before you look at the throughput though, check three things:
1) Is the LD CPU running close to 100%?
2) Is the DS CPU running close to 100%?
3) Is the network bandwidth running close to 100%?

If any of these are true, you aren't doing a good test of the ESB throughput, because for a throughput test you want the ESB to be the bottleneck. If something else is the bottleneck, the ESB is not providing its maximum throughput and you aren't giving it a fair chance. For this reason, most benchmarks use a very lightweight LD (or a clustered LD), and similarly use a DS that is super-fast rather than realistic. Sometimes the DS is coded to do some real work, or to sleep the thread while it executes, to provide a more realistic load test; in that case you probably want to look at latency more than throughput.
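For what it's worth, here is a bare-bones sketch of the two driving models (my own illustration; real load drivers do far more). The name call_esb simply stands in for whatever client call sends one request through the ESB and waits for the response.

import threading, time

def paced_client(call, interval_s, deadline, latencies):
    # Realistic model: this client issues one request every interval_s seconds.
    while time.time() < deadline:
        start = time.time()
        call()                                   # one request through the ESB
        latencies.append(time.time() - start)
        time.sleep(max(0.0, interval_s - (time.time() - start)))

def saturating_client(call, deadline, latencies):
    # Saturation model: issue the next request as soon as the previous one returns.
    while time.time() < deadline:
        start = time.time()
        call()
        latencies.append(time.time() - start)

def drive(client_fn, n_clients, duration_s, call, *extra):
    # Run n_clients concurrent clients and collect every observed latency.
    deadline = time.time() + duration_s
    latencies = []
    threads = [threading.Thread(target=client_fn,
                                args=(call, *extra, deadline, latencies))
               for _ in range(n_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

# 50 clients making a call every 5 seconds (realistic) vs 100 clients flat out (saturation):
# realistic = drive(paced_client, 50, 60, call_esb, 5.0)
# saturated = drive(saturating_client, 100, 60, call_esb)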

Finally you are looking to see a particular behaviour for throughput testing as you increase load.
[Graph: Throughput vs Load]
The shape of this graph shows the ideal scenario. As the LD puts more work through the ESB, the throughput responds linearly. At some point the CPU of the ESB hits its maximum, and the throughput then stabilizes. What we don't want to see is the line drooping at the far right: that would mean the ESB is crumpling under the extra load and failing to manage it effectively. It's like the office worker whose efficiency increases as you give them more work, until eventually they spend all their time re-organizing their to-do lists and less work overall gets done.

Under the saturation test you really want to see the CPU of the ESB close to 100% utilised. Why? It's a sign that the ESB is doing as much as possible. Why might it not reach 100%? Three reasons: I/O, poor use of multiple cores, and thread locks. Either the network card, disk or other I/O is holding it up, the code is not efficiently using the available cores, or there are thread contention issues.

Finally, it's worth noting that you should expect the latency to increase a lot under the saturation test. A classic result looks like this: I do static routing for different message sizes with a 100-client LD. For message sizes up to 100k I might see a constant 2ms overhead for using the ESB. Then, as the message size grows from 100k to 200k, I see the overhead growing in proportion to the message size.


Is this such a bad thing? No; in fact it is what you would expect. Below a 100k message size, the ESB is underloaded. The straight line up to this point is a great sign that the ESB is pipelining properly. Once the CPU becomes loaded, each request takes longer because it is being made to wait its turn at the ESB while the ESB deals with the increased load.

A big hint here: when you look at this graph, the most interesting latency numbers occur before the CPU is fully loaded. The latency after the CPU is fully loaded is not that interesting, because it is simply a function of the number of queued requests.

Now that we understand the metrics, let's look at the actual scenarios.

A. Different Vendors, Same Workload
For the first comparison (different vendors), the first thing to be careful of is that the scenario is implemented in the best way possible in each ESB. There are usually several ways of implementing the same scenario. For example, the same ESB may offer two or more different HTTP transports: blocking vs non-blocking, servlet vs library, and so on. There may be an optimum approach, and it's worth reading the docs and talking to the vendor to understand the performance tradeoffs of each one.

Another thing to be careful of in this scenario is the tuning parameters. Each ESB has various tuning aspects that may affect the performance depending on the available hardware. For example, setting the number of threads and memory based on the number of cores and physical memory may make a big difference.

Once you have your results, assuming everything we've already looked at is tickety-boo, then both latency and throughput are interesting and valid comparisons here. 

B. Different Workloads, Same Vendor
What this is measuring is what it costs you to do different activities with the same ESB. For example, doing a static routing is likely to be faster than a content-based routing, which in turn is faster than a transformation. The data from this tells you the cost of doing different functions with the ESB. For example you might want to do a security authentication/authorization check. You should see a constant bump in latency for the security check, irrespective of message size. But if you were doing complex transformation, you would expect to see higher latency for larger messages, because they take more time to transform. 

C. Direct vs ESB
This is an interesting one. Usually this is done for a simple static routing/passthrough scenario. In other words, we are testing the ESB doing its minimum possible. Why bother? Well there are two different reasons. Firstly ESB vendors usually do this for their own benefit as a baseline test. In other words, once you understand the passthrough performance you can then see the cost of doing more work (e.g. logging a header, validating security, transforming the message). 

Remember the two testing methodologies (realistic load vs saturation)? You will see very different results for each of them here, and the data may seem surprising. For the realistic test, remember that we want to look at latency. This is a good comparison for the ESB: how much extra time is spent going through the ESB per request under normal conditions? For example, if the average request direct to the backend takes 18ms and the average request via the ESB takes 19ms, we have an average ESB latency of 1ms. This is a good result: the client is not going to notice much difference, as it is less than 5% extra.

The saturation test here is a good test for comparing different ESBs. For example, suppose I can get 5000 reqs/sec direct; via ESB_A the number is 3000 reqs/sec and via ESB_B it is 2000 reqs/sec. Then I can say that ESB_A provides better throughput than ESB_B.

What is not a good metric here is comparing the saturation-mode throughput of the direct case against the via-ESB case.


Why not? The reason is a little complex to explain. Remember how we coded the DS to be as fast as possible so as not to be a bottleneck? So what is the DS doing? It's really just reading bytes and sending bytes as fast as it can. Assuming the DS code is written efficiently using something really fast (e.g. just a servlet), what this is testing is how fast the hardware (CPU plus network card) can read and write through user space in the operating system. On a modern server box you might get a very high number of transactions per second: maybe 5000 req/s, with each message in and out being 1k in size.

So we have 1k in and 1k out = 2k of IO per request.
2k IO x 5000 reqs/sec x 8 bits gives us a total network bandwidth of 80 Mbits/sec (excluding Ethernet headers and overhead).

Now let's look at the ESB. Imagine it can handle 100% of the direct load, so there is no slowdown in throughput through the ESB. For each request it has to read the message in from the LD and send it out to the DS. Even if it is doing this in pipelining mode, there is still a CPU cost and an IO cost. So the ESB latency may only be 1ms, but the CPU and IO cost is much higher. For each response it also has to read the message in from the DS and write it out to the LD. So if the DS is doing 80Mbits/second, the ESB must be doing 160Mbits/second.
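The arithmetic behind those figures is worth writing down explicitly (a quick sanity-check sketch using the illustrative 1k / 5000 req/s numbers from above, nothing more):

# Back-of-the-envelope check of the bandwidth figures used above.
msg_in_kb, msg_out_kb = 1, 1       # request and response sizes in KB
reqs_per_sec = 5000                # saturation throughput of the direct case

io_per_req_kb = msg_in_kb + msg_out_kb                 # 2 KB of IO per request at the DS
ds_mbits = io_per_req_kb * reqs_per_sec * 8 / 1000.0   # 80 Mbit/s at the dummy service

# The ESB sees every byte twice: once on the LD side and once on the DS side.
esb_mbits = 2 * ds_mbits                               # 160 Mbit/s at the same request rate
print(ds_mbits, esb_mbits)                             # 80.0 160.0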


Now, if the LD is good enough, it will have loaded the DS to the max: CPU or IO capacity, or both, will be maxed out. Suppose the ESB is running on the same hardware platform as the DS. If the DS machine can do 80Mbit/s flat out, there is no way that the same hardware running as an ESB can do 160Mbit/s! In fact, if the ESB and DS code are both as efficient as possible, then the throughput via the ESB will always be 50% of the throughput direct to the DS. There is one possible way for the ESB to do better: it can be better coded than the DS. For example, if the ESB did its transfers in kernel space instead of user space, that might make a difference. The real answer here is to look at the latency: what is the overhead of adding the ESB to each request? If the ESB latency is small, then we can solve the throughput problem by clustering the ESB - put two ESBs in and we get back to full throughput.

The real point of this discussion is that this is not a useful comparison. In reality, backend target services are usually pretty slow. If the same dual-core server is actually doing some real work - e.g. database lookups, calculations, business logic - then it is much more likely to be doing 500 requests a second or even fewer.

The following chart shows real data to demonstrate this. The X-axis shows increasing complexity of work at the backend (DS). As the effort required from the backend becomes more realistic, the loss in throughput from having an ESB in the way shrinks. With a blindingly fast backend, the ESB struggles to provide just 55% of the direct throughput; but as the backend becomes more realistic, we see much better numbers. At 2000 requests a second there is barely a difference (around a 10% reduction in throughput).


In real life, what we actually see is that often you have many fewer ESBs than backend servers. For example, if we took the scenario of a backend server that can handle 500 reqs/sec, then we might end up with a cluster of two ESBs handling a cluster of 8 backends. 

Conclusion
I hope this post has given a good overview of ESB performance and benchmarking - in particular, when it is a good idea to look at latency and when to look at throughput.






Radical left: ask for the pogrom!

COLUMN. Accusations of "genocide" against Israel are multiplying, revealing the anti-Zionist drift of part of the radical left.





Omar Youssef Souleimane: "What I heard in classrooms in the banlieues"

The Syrian-born writer led workshops in middle schools in the Paris region, organized to combat radicalization and promote secularism. He gives his account.





Integrating Personal Web Data through Semantically Enhanced Web Portal

Currently, the World Wide Web is mostly composed of isolated and loosely connected "data islands". Connecting them together and retrieving only the information that is of interest to the user is the common Web usage process. Creating an infrastructure that supports automation of that process, by aggregating and integrating Web data in accordance with the user's personal preferences, would greatly improve today's Web usage. A significant part of Web data is available only through login- and password-protected applications. As that data is very important for the usefulness of the described process, the proposed infrastructure needs to support authorized access to the user's personal data. In this paper we propose a semantically enhanced Web portal that presents a unique personalized entry point to domain-specific Web information. We also propose an identity management system that supports authorized access to protected Web data. To verify the proposed solution, we have built Sweb, a semantically enhanced Web portal that uses the proposed identity management system.





The Use of Latent Semantic Indexing to Mitigate OCR Effects of Related Document Images

Due to both the widespread, multipurpose use of document images and the current availability of a large number of document image repositories, robust information retrieval mechanisms and systems have been increasingly demanded. This paper presents an approach to support the automatic generation of relationships among document images by exploiting Latent Semantic Indexing (LSI) and Optical Character Recognition (OCR). We developed the LinkDI (Linking of Document Images) service, which extracts and indexes document image content, computes its latent semantics, and defines relationships among images as hyperlinks. LinkDI was evaluated on document image repositories, and its performance was assessed by comparing the quality of the relationships created among textual documents with that of the relationships created among their respective document images. Considering the same document images, we ran further experiments to compare the performance of LinkDI with and without the LSI technique. Experimental results showed that LSI can mitigate the effects of common OCR misrecognition, which reinforces the feasibility of LinkDI relating OCR output even with high degradation.





Developing a Mobile Collaborative Tool for Business Continuity Management

We describe the design of a mobile collaborative tool that helps teams manage critical computing infrastructures in organizations, a task usually designated Business Continuity Management. The design process started with a requirements definition phase based on interviews with professional teams. The elicited requirements highlight four main concerns: collaboration support, knowledge management, team performance, and situation awareness. Based on these concerns, we developed a data model and tool supporting the collaborative update of Situation Matrixes. The matrixes aim to provide an integrated view of the operational and contextual conditions that frame critical events and inform the operators' responses to them. The paper presents results from our preliminary experiments with Situation Matrixes.





An Empirical Study on Human and Information Technology Aspects in Collaborative Enterprise Networks

Small and Medium Enterprises (SMEs) face new challenges in the global market as customers require more complete and flexible solutions and continue to drastically reduce the number of their suppliers. SMEs are trying to address these challenges through cooperation within collaborative enterprise networks (CENs). Human aspects constitute a fundamental issue in these networks, as it is people, rather than organizations or Information Technology (IT) systems, who cooperate. Since there is a lack of empirical studies on the role of human factors in IT-supported collaborative enterprise networks, this paper addresses the major human aspects encountered in this type of organization. These human aspects include trust issues, knowledge and know-how sharing, coordination and planning activities, and communication and mutual understanding, as well as their influence on the business processes of CENs supported by IT tools. This paper empirically demonstrates that these aspects constitute key factors for the success or failure of CENs. Two case studies performed on two different CENs in Switzerland are presented, and the roles of human factors are identified with respect to the IT support systems. Results show that specific human factors, namely trust as well as communication and mutual understanding, have to be well addressed in order to design and develop adequate software solutions for CENs.





Managing Mechanisms for Collaborative New-Product Development in the Ceramic Tile Design Chain

This paper focuses on improving the management of New-Product Development (NPD) processes within the particular context of a cluster of enterprises that cooperate through a network of intra- and inter-firm relations. Ceramic tile design chains have certain singularities that condition the NPD process, such as the lack of a strong hierarchy, fashion pressure or the existence of different origins for NPD projects. We have studied these particular circumstances in order to tailor Product Life-cycle Management (PLM) tools and some other management mechanisms to fit suitable sectoral reference models. Special emphasis will be placed on PLM templates for structuring and standardizing projects, and also on the roles involved in the process.





Improving Security Levels of IEEE802.16e Authentication by Involving Diffie-Hellman PKDS

Recently, IEEE 802.16 Worldwide Interoperability for Microwave Access (WiMAX for short) has provided us with low-cost, high-efficiency and high-bandwidth network services. However, as with WiFi, radio transmission also exposes WiMAX to wireless transmission security problems. To address this, the IEEE 802.16 standard defines the Privacy Key Management (PKM for short) authentication process, which offers only one-way authentication. With one-way authentication, however, an SS may connect to a fake BS. Mutual authentication, like that developed for PKMv2, can avoid this problem. Therefore, in this paper, we propose an authentication key management approach, called the Diffie-Hellman-PKDS-based authentication method (DiHam for short), which employs a secret-door asymmetric one-way function, the Public Key Distribution System (PKDS for short), to improve the current security level of facility authentication between a WiMAX BS and SS. We further integrate PKMv1 and DiHam into a system, called PKM-DiHam (P-DiHam for short), in which PKMv1 acts as the authentication process, and DiHam is responsible for key management and delivery. By transmitting securely protected and well-defined parameters between the SS and BS, the two stations can mutually authenticate each other. Messages, including those conveying user data and authentication parameters, can then be delivered more securely.





Semantic Web: Theory and Applications





Towards Classification of Web Ontologies for the Emerging Semantic Web

The massive growth in ontology development has opened new research challenges, such as ontology management, search and retrieval, for the entire Semantic Web community. This has resulted in many recent developments, like OntoKhoj, Swoogle and OntoSearch2, that facilitate the tasks users have to perform. These Semantic Web portals mainly treat ontologies as plain text and use traditional text classification algorithms to classify ontologies into directories and assign predefined labels, rather than using the semantic knowledge hidden within the ontologies. These approaches suffer from several classification problems and a lack of accuracy, especially in the case of overlapping ontologies that share common vocabularies. In this paper, we define the ontology classification problem and categorize it into several sub-problems. We present a new ontological methodology for the classification of web ontologies, which has been guided by the requirements of emerging Semantic Web applications and by the lessons learnt from previous systems. The proposed framework, OntClassifire, is tested on 34 ontologies with a certain degree of domain overlap, and the effectiveness of the ontological mechanism is verified. It benefits the construction, maintenance and expansion of ontology directories on the Semantic Web, helping to focus crawling and to improve the quality of search for software agents and people. We conclude that the use of context-specific knowledge hidden in the structure of ontologies gives more accurate results for ontology classification.





A Semantic Wiki Based on Spatial Hypertext

Spatial Hypertext Wiki (ShyWiki) is a wiki which represents knowledge using notes that are spatially distributed in wiki pages and have visual characteristics such as colour, size, or font type. The use of spatial and visual characteristics in wikis is important to improve human comprehension, creation and organization of knowledge. Another important capability in wikis is to allow machines to process knowledge. Wikis that formally structure knowledge for this purpose are called semantic wikis. This paper describes how ShyWiki can make use of spatial hypertext in order to be a semantic wiki. ShyWiki can represent knowledge at different levels of formality. Users of ShyWiki can annotate the content and represent semantic relations without being experts of semantic web data description languages. The spatial hypertext features make it suitable for users to represent unstructured knowledge and implicit graphic relations among concepts. In addition, semantic web and spatial hypertext features are combined to represent structured knowledge. The semantic web features of ShyWiki improve navigation and publish the wiki knowledge as RDF resources, including the implicit relations that are analyzed using a spatial parser.





A Ranking Tool Exploiting Semantic Descriptions for the Comparison of EQF-based Qualifications

Nowadays, one of the main issues discussed at the Community level is the mobility of students and workers across Europe. In recent years, several initiatives have been carried out to address this issue: one of them is the definition of the European Qualification Framework (EQF), a common architecture for the description of qualifications. At the same time, several research activities have been established with the aim of finding how semantic technologies could be exploited for comparing qualifications in the field of human resources acquisition. In this paper, the EQF specifications are taken into account and applied in a practical scenario to develop a ranking algorithm for the comparison of qualifications expressed in terms of knowledge, skill and competence concepts, potentially aimed at supporting European employers during the recruiting phase.





Ontology-based Competency Management: the Case Study of the Mihajlo Pupin Institute

Semantic-based technologies have been steadily increasing their relevance in recent years in both the research world and the business world. Considering this, the present article discusses the process of designing and implementing a competency management system for the information and communication technologies domain, utilizing the latest Semantic Web tools and technologies, including the D2RQ server, TopBraid Composer, OWL 2, SPARQL, SPARQL Rules and common human-resources-related public vocabularies. In particular, the paper discusses the process of building individual and enterprise competence models in the form of an ontology database, as well as different ways of meaningfully searching and retrieving expertise data on the Semantic Web. The ontological knowledge base aims at storing the extracted and integrated competences from structured as well as unstructured sources. Using the illustrative case study of the deployment of such a system in the Human Resources sector at the Mihajlo Pupin Institute, this paper shows an example of new approaches to data integration and information management. The proposed approach extends the functionalities of existing enterprise information systems and offers possibilities for the development of future Internet services. This allows organizations to express their core competences and talents in a standardized, machine-processable and understandable format, and hence facilitates their integration in the European Research Area and beyond.





Legislative elections: Jospin's gift to Emmanuel Macron...

We seem to be heading toward a high-magnitude En Marche wave in the upcoming legislative elections. Les Républicains hope to limit the damage, that is, to a loss of around a hundred seats. The real stake for LR is what comes afterwards, with an eye on a...





Emmanuel Macron and Social Media

For Emmanuel Macron, it is settled: social media is supposedly the cause of the violence. I quote: "there is an anthropological change in our societies that comes from social media." I admit that this kind of simplistic reasoning leaves me speechless...





Police arrest dozens of people in Amsterdam after protest ban





Trump hands immigration to Tom Homan, the "border czar"





No Comment: poppy wreaths for Remembrance Sunday





No Comment: demonstration for peace at COP 29





Serbia: protesters demand the government's resignation after the Novi Sad tragedy





No Comment: Tyrannosaurus Rex, a model unlike any other










British writer Samantha Harvey wins Booker Prize for space novel Orbital - Al Jazeera English

  1. British writer Samantha Harvey wins Booker Prize for space novel Orbital  Al Jazeera English
  2. Samantha Harvey’s ‘beautiful and ambitious’ Orbital wins Booker prize  The Guardian
  3. Samantha Harvey wins the Booker prize for “Orbital”  The Economist
  4. British writer Samantha Harvey’s space-station novel ‘Orbital’ wins 2024 Booker Prize  CNN
  5. Booker Prize Is Awarded to Samantha Harvey’s ‘Orbital’  The New York Times






UN chief warns COP29 summit to pay up or face climate-led disaster for humanity - The Globe and Mail

  1. UN chief warns COP29 summit to pay up or face climate-led disaster for humanity  The Globe and Mail
  2. Climate Summit, in Early Days, Is Already on a ‘Knife Edge’  The New York Times
  3. At COP29 summit, nations big and small get chance to bear witness to climate change  The Globe and Mail
  4. Terence Corcoran: COP29 hit by political ‘dunkelflaute’  Financial Post
  5. COP29: Albania PM goes off script to ask 'What on Earth are we doing?'  Euronews






Creation of 3,000 "green gendarme" posts: Darmanin's false promise

Every day, a new announcement. This summer, the Interior Minister, Gérald Darmanin, multiplied his field visits and his announcements. To fight against the arsonists...





An effectiveness analysis of enterprise financial risk management for cost control

This paper aims to analyse the effectiveness of cost-control-oriented enterprise financial risk management. Firstly, it analyses the importance of enterprise financial risk management. Secondly, it analyses the position of cost control in enterprise financial risk management: cost control can be used to reduce the operating costs of enterprises and improve their profitability, and thus reduce the financial risks they face. Finally, a corporate financial risk management strategy is constructed from several aspects: establishing a sound risk management system, predicting and responding to various risks, optimising fund operation management, strengthening internal control, and enhancing employee risk awareness. The results show that after applying the proposed management strategy, the enterprise performs well in cost-control-oriented financial risk management, with a cost accounting accuracy of 95% and an audit system completeness of 90%. The strategy also helps the enterprise develop emergency plans and provides comprehensive risk management coverage.





Springs of digital disruption: mediation of blockchain technology adoption in retail supply chain management

Supply chain management practices are vital for success and survival in today's competitive Indian retail market. The advent of the COVID-19 pandemic necessitated a digital disruption in retail supply chain management centred on efficient technologies like blockchain in order to enhance supply chain performance. The present research aims to decipher the nature of the associations between supply chain management practices, blockchain technology adoption and supply chain performance in retail firms. The research is based on a primary survey of food and grocery retailers operating supermarket-format stores in two Indian cities. The findings point towards significant and positive associations of all the constructs with each other. Moreover, the mediating role of blockchain technology adoption was also revealed: it partially mediates the effects of supply chain management practices on supply chain performance.





Enhanced TCP BBR performance in wireless mesh networks (WMNs) and next-generation high-speed 5G networks

TCP BBR is one of the most powerful congestion control algorithms. In this article, we provide a comprehensive review of BBR analysis, expanding on existing knowledge across various fronts. Using ns-3 simulations, we evaluate BBR's performance under diverse conditions and generate graphical representations. Our findings reveal flaws in the estimation of the ProbeRTT phase duration and unequal bandwidth sharing between the BBR and CUBIC protocols. Specifically, we demonstrate that the ProbeRTT phase duration estimation algorithm is flawed and that BBR and CUBIC generally do not share bandwidth equally. Towards the end of the article, we propose a new, improved version of TCP BBR which minimises these problems of inequity in bandwidth sharing and corrects the inaccuracies of the two key parameters, RTprop and cwnd. Consequently, the BBR' protocol maintains very good fairness with the CUBIC protocol, with a fairness index of almost 0.98 and an equity index over 0.95.





International Journal of Agile Systems and Management





Business intelligence in human management strategies during COVID-19

The spread of COVID-19 resulted in disruption, uncertainty, complexity, and ambiguity in all businesses. Employees help companies achieve their aims, so managing human resources sustainably requires analysing organisational strategy. This research study attempts to identify previously unidentified challenges, cutting-edge techniques, and surprising decisions in human resource management outside healthcare organisations during the COVID-19 pandemic. The narrative review examined corporate human resource management measures taken to mitigate COVID-19. Fifteen publications were selected for the study after removing duplicates and applying the inclusion and exclusion criteria. This article examines HR's COVID-19 response. Human resource management's response to economic and financial crises has been extensively studied, but its response to the COVID-19 pandemic has not. This paper reviewed the literature to reach its goal. The results followed the AMO framework for human resource policies and procedures and the HR management system. The paper suggests COVID-19-related changes to human resource management system architecture, policies, and practices. The study created a COVID-19 pandemic human resource management framework based on the literature. The COVID-19 pandemic had several negative effects, including social and behavioural changes, economic shock, and organisational disruption.





Access controllable multi-blockchain platform for enterprise R&D data management

In the era of big data, enterprises have accumulated a large amount of research and development data. Effective management of this accumulated data and safe sharing of it can improve the collaboration efficiency of research and development personnel, which has become a top priority for enterprise development. This paper proposes to use blockchain technology to improve the collaboration efficiency of enterprise R&D personnel. Firstly, a multi-chain blockchain platform is used to realise data sharing among the internal data of the enterprise R&D department, internal project data and the enterprise data centre; the construction of the multi-chain structure and the data sharing process are then analysed. Finally, searchable encryption is introduced to achieve data retrieval and secure sharing, improving the collaboration efficiency of enterprise research and development personnel and maximising the value of data assets. Experimental verification shows that the multi-chain structure improves the collaboration efficiency of researchers and the secure sharing of data.





Human resource management and organisation decision optimisation based on data mining

The utilisation of big data presents significant opportunities for businesses to create value and gain a competitive edge. This capability enables firms to anticipate and uncover information quickly and intelligently. The author introduces a human resource scheduling optimisation strategy using a parallel network fusion structure model. The author's approach involves designing a set of network structures based on parallel networks and streaming media, enabling the macro implementation of the enterprise parallel network fusion structure. Furthermore, the author proposes a human resource scheduling optimisation method based on a parallel deep learning network fusion structure. It combines convolutional neural networks and transformer networks to fuse streaming media features, thereby achieving comprehensive identification of the effectiveness of the current human resource scheduling in enterprises. The result shows that the macro and deep learning methods achieve a recognition rate of 87.53%, making it feasible to assess the current state of human resource scheduling in enterprises.





An empirical study on construction emergency disaster management and risk assessment in shield tunnel construction project with big data analysis

Emergency disaster management presents substantial risks and obstacles for shield tunnel construction projects, particularly in the event of water leakage accidents. Contemporary water leak detection is critical for guaranteeing safety by reducing the likelihood of disasters and the severity of any resulting damage; however, it can be difficult. Deep learning models can analyse images taken inside the tunnel to look for signs of water damage. This study introduces a strategy that employs deep learning techniques, combining generative adversarial networks (GAN) with long short-term memory (LSTM), for water leakage detection in shield tunnel construction (WLD-STC), to conduct classification and prediction tasks on a massive image dataset. The results demonstrate that, for identifying and analysing water leakage episodes during shield tunnel construction, the WLD-STC strategy using LSTM-based GAN networks outperformed other methods, particularly on large datasets.





Application of AI intelligent technology in natural resource planning and management

This article studies the application of artificial intelligence technology in natural resource planning and management. It first introduces the background of natural resources (NR) and AI intelligent technology, then reviews and summarises academic work on NR planning and management and on AI intelligent technology. Next, an algorithm model based on a multi-objective intelligent planning algorithm is established. Finally, simulation experiments are conducted, and an experimental summary and discussion are provided. The experimental results show that the average efficiency value of the four stages of NR planning and management before adoption is 5.25, and the average efficiency value of the four stages after adoption is 7, a difference of 1.75. It can be seen that the use of AI intelligent technology can effectively improve the efficiency of natural resource planning and management.





Applying a multiplex network perspective to understand performance in software development

A number of studies have applied social network analysis (SNA) to show that the patterns of social interaction between software developers explain important organisational outcomes. However, these insights are based on a single network relation (i.e., uniplex social ties) between software developers and do not consider the multiple network relations (i.e., multiplex social ties) that truly exist among project members. This study reassesses the understanding of software developer networks and what it means for performance in software development settings. A systematic review of SNA studies between 1990 and 2020 across six digital libraries within the IS and management science domain was conducted. The central contributions of this paper are an in-depth overview of SNA studies to date and the establishment of a research agenda to advance our knowledge of the concept of multiplexity on how a multiplex perspective can contribute to a software developer's coordination of tasks and performance advantages.