
The times McLaren came closest to breaking 25-year constructors’ title drought | Formula 1

McLaren could be set to win their first constructors' title for 25 years this season. Here is how close they've come over that time.





Every way Verstappen can clinch the championship at the Las Vegas Grand Prix | Formula 1

Max Verstappen is poised to clinch the 2024 drivers' championship if he finishes ahead of Lando Norris one more time. Here's how he can seal a fourth title at the next race.





Why Mercedes put ‘a reminder of joy and pain’ on display in their factory lobby | Formula 1

Mercedes have put the car from Lewis Hamilton's controversial 2021 championship defeat on display in the lobby at their factory.





Bortoleto pushed for 2025 F1 debut to avoid missing a year of racing | Formula 1

Gabriel Bortoleto said he was determined not to sit out a year of racing in 2025 after Sauber confirmed he will make his debut for them in Formula 1 next year.





Alpine confirm switch to Mercedes power when Renault ends F1 engine project | Formula 1

Alpine have officially announced they will use Mercedes power units when Formula 1 introduces its new engine regulations in 2026.





Fallows steps down as Aston Martin’s technical director | Formula 1

Dan Fallows is stepping down as Aston Martin's technical director, two-and-a-half years after taking over the role.





Wittich replaced as F1 race director, Marques to take over from Las Vegas | Formula 1

Niels Wittich has unexpectedly stood down from his role as Formula 1's race director. The FIA named Rui Marques as his replacement.





F1 teams to reveal 2025 liveries together at first season launch event in London | Formula 1

All 10 Formula 1 teams will participate in a new "season launch event" in February next year to reveal their liveries together.





Alpine must make up for 0.3-second deficit with 2025 chassis – Briatore | RaceFans Round-up

In the round-up: Alpine must make up for 0.3-second deficit with 2025 chassis - Briatore • Stolen Lauda helmet goes on display • Wittich 'has not resigned'





Don’t underestimate how tough a job F1’s new race director faces | Formula 1

Niels Wittich's unexpected departure as Formula 1's race director recreates the circumstances in which Michael Masi was thrown in at the deep end.






‘We have to fight for the commanding heights of American culture’

American Culture Project’s John Tillman on winning through upstream engagement





Michigan needs new ideas for high absenteeism and falling student scores

Education choice is succeeding in other states





Licensing reforms would ease Michigan’s pain

Let anesthesiology assistants work for themselves





Adding structured data support for Product Variants

In 2022, Google expanded support for Product structured data, enabling enhanced product experiences in Google Search. Then, in 2023 we added support for shipping and returns structured data. Today, we are adding structured data support for Product variants, allowing merchants to easily show more variations of the products they sell, and show shoppers more relevant, helpful results. Providing variant structured data will also complement and enhance merchant center feeds, including automated feeds.
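As a rough illustration of what such markup can look like, here is a minimal sketch of a product group with two variants, expressed as schema.org-style JSON-LD built from a plain Python dictionary. The product name, identifiers, prices and the exact choice of properties are hypothetical placeholders, not values taken from the announcement.

# Illustrative sketch only: a product group with two variants, expressed as
# schema.org-style JSON-LD via a Python dict. Names, SKUs and prices are
# hypothetical placeholders.
import json

product_group = {
    "@context": "https://schema.org/",
    "@type": "ProductGroup",
    "name": "Example trail running shoe",          # hypothetical product
    "productGroupID": "EX-SHOE-001",               # hypothetical identifier
    "variesBy": ["https://schema.org/size", "https://schema.org/color"],
    "hasVariant": [
        {
            "@type": "Product",
            "sku": "EX-SHOE-001-BLK-42",
            "color": "Black",
            "size": "42",
            "offers": {
                "@type": "Offer",
                "price": "89.99",
                "priceCurrency": "EUR",
                "availability": "https://schema.org/InStock",
            },
        },
        {
            "@type": "Product",
            "sku": "EX-SHOE-001-RED-43",
            "color": "Red",
            "size": "43",
            "offers": {
                "@type": "Offer",
                "price": "89.99",
                "priceCurrency": "EUR",
                "availability": "https://schema.org/InStock",
            },
        },
    ],
}

# The resulting JSON would typically be embedded in the page inside a
# script element of type "application/ld+json".
print(json.dumps(product_group, indent=2))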





Adding markup support for organization-level return policies

We're adding support for return policies at the organization level, which means you'll be able to specify a general return policy for your business instead of having to define one for each individual product you sell.
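As a hedged illustration of the idea, the sketch below builds schema.org-style JSON-LD for an organization-level return policy from a Python dictionary. The merchant name, URL and policy values are hypothetical placeholders and do not come from the announcement.

# Illustrative sketch only: an organization-level return policy expressed as
# schema.org-style JSON-LD via a Python dict. All values are hypothetical.
import json

organization = {
    "@context": "https://schema.org/",
    "@type": "Organization",
    "name": "Example Outdoor Gear Ltd",            # hypothetical merchant
    "url": "https://www.example.com",              # hypothetical URL
    "hasMerchantReturnPolicy": {
        "@type": "MerchantReturnPolicy",
        "applicableCountry": "DE",
        "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
        "merchantReturnDays": 30,
        "returnMethod": "https://schema.org/ReturnByMail",
        "returnFees": "https://schema.org/FreeReturn",
    },
}

print(json.dumps(organization, indent=2))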





SDL Trados Studio – Corrupt file: Missing locked content for Oasis.Xliff 12.x.

I recently accepted a large proofreading job to be completed in SDL Trados Studio 2014. All seemed to be fine until I tried to open some of the project files. This article describes how to deal with “Corrupt file: Missing …





Invoicing system for freelancers – beta testers needed

I used to keep records on my clients, projects, invoices, etc. in Excel sheets and to generate my monthly invoices manually. However, with a growing client base, invoicing became a time-consuming and annoying task that had to be performed at the …





New entrepreneurship training courses for translators

I recently worked with CI3M to create a range of entrepreneurship training courses for professional translators who want to set up or grow their business.

Offered as three programmes, the training is:

  • taught remotely;
  • eligible for reimbursement by the FIF PL and other funding bodies; and
  • built around personalised weekly follow-up, so the content can be adapted to your needs.

Business creation and development course

This comprehensive eight-week programme is designed to help translators prepare, launch, manage and grow their professional activity. Through practical exercises and personalised coaching (8 hours of one-to-one tutoring by telephone), you will learn how to:

1. Take stock of your knowledge of the translation profession and assess how ready you are to practise it as a freelancer, so you can better prepare for setting up your business.

2. Promote and sell your services by defining your positioning, your pricing policy and your communication strategy, so you can prospect, sell and follow up with clients effectively.

3. Run a translation business, including complying with a code of professional conduct and the many accounting, tax and legal obligations that come with freelance practice. To succeed and grow your income, you will also learn how to steer your business and anticipate change so you can seize opportunities.

4. Step back regularly to keep a healthy balance between your personal and professional life.

Marketing course

This targeted module is aimed at translators who are already established, or at those who want to get straight to the essentials of identifying and retaining clients.

After an initial assessment, which defines the scope of the market research needed to build a solid marketing strategy, three weeks of tutoring will give you the methods and tools you need to sell more, and to sell better.

Management and business-steering course

Another targeted module, this course covers all the aspects freelance translators tend to neglect: administrative formalities, tax, accounting, social protection, business management and growth strategy. In short, all the knowledge and skills needed to succeed as an entrepreneur.

The tutoring, spread over three weeks, is designed to answer your questions, support your administrative steps and give you tools that simplify the day-to-day running of your business and secure its future.

Registration and funding

Drawing on a teaching team of expert translators, CI3M offers distance professional training courses, some of them leading to qualifications, covering the full range of skills required to deliver translation or technical writing services. To find out more about course fees and how to register, contact CI3M on +33 (0)2 30 96 04 42.

Depending on your situation, all or part of the fees may be reimbursed or covered by the Fonds interprofessionnel de formation des professionnels libéraux (FIF PL), Pôle Emploi, your Compte personnel de formation (CPF), and so on.


About the author

An accredited international trade professional who spent several years advising SMEs, Gaële Gagné has been a freelance translator since 2005. At the helm of Trëma Translations, she translates from English into French and shares her marketing and business-management know-how with fellow translators through a blog called Mes petites affaires and training courses delivered via CI3M.







API Management: The missing link for SOA success

Nearly 2 years ago I tweeted:



Well, unfortunately, I had it a bit wrong.

APIs and services do have a very direct, one-to-one relationship: an API is the interface of a service. What is different is that one is about the implementation and is focused on the provider, while the other is about using the functionality and is focused on the consumer. The service, of course, is what matters to the provider, and the API is what matters to the consumer.

So it's clearly more than just a new name.

Services: If you build it, will they come?

One of the most common anti-patterns of SOA is the one-service, one-client pattern: the developer who wrote the service also wrote its only client. In that case there is no sharing, no common data, no common authentication and no reuse of any kind. The number one reason for SOA (improving productivity by reusing functionality as services) is gone. It's simply client-server at the cost of having to use interoperable formats like XML, JSON, XML Schema, WSDL and SOAP.

There are two primary reasons for this pattern being so prevalent. The first is a management failure whereby everyone is required to create services for whatever they do because that's the new "blessed way". There's no architectural vision driving proper factoring. Instead it's each person, or at least each team, for themselves. The resulting services are only really usable for that one scenario - so no wonder no one else uses them!

Writing services that can serve many consumers requires careful design and thinking, and a willingness to invest in the common good. That runs against human intuition and will happen only if it's properly guided and incentivized. The cost of writing common services must be paid by someone; it will not happen by itself.

That is, in effect, the second reason why this anti-pattern exists: the infrastructure in place for SOA does not support or encourage reuse. Even if you had a reusable service, how do you find out how well it works? How do you know how many people are using it? Do you know what time of day they use it most? Do you know which operations of your service get hit the hardest? And how do others even find out that you wrote a service that may do what they need?

SOA Governance (for which WSO2 has an excellent product: WSO2 Governance Registry) is not focused on encouraging service reuse but rather on governing the creation and management of services. The SOA world has lacked a solution for making it easy to help people discover available services and to manage and monitor their consumption. 

API Management

What's an API? It's the interface to a service. Simple. In other words, if you don't have any services, you have no APIs to expose and manage.

API Management is about managing the entire lifecycle of APIs. It involves someone publishing the interface of a service into a store of some kind. Next, it involves developers who browse the store to find APIs they care about, get access to them (typically by acquiring an access token of some sort) and then use those tokens to access the service programmatically via its interface.
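To make the consumer side of that lifecycle concrete, here is a minimal, generic sketch of a developer calling a managed API with an access token obtained from an API store. The endpoint URL, token placeholder and payload are hypothetical and do not refer to any specific product.

# Generic sketch of consuming a managed API with an access token.
# The endpoint URL, token and payload below are hypothetical placeholders.
import json
import urllib.request

API_ENDPOINT = "https://api.example.com/orders/v1/orders"        # hypothetical endpoint
ACCESS_TOKEN = "token-issued-when-subscribing-via-the-api-store"  # hypothetical token

request = urllib.request.Request(
    API_ENDPOINT,
    data=json.dumps({"item": "widget", "quantity": 2}).encode("utf-8"),
    headers={
        "Authorization": "Bearer " + ACCESS_TOKEN,   # the token identifies the subscription
        "Content-Type": "application/json",
    },
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode("utf-8"))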

Why is this important? In my opinion, API Management is to SOA what Amazon EC2 is to virtualization. Of course virtualization has been around for a long time, but EC2 changed the game by making it trivially simple for someone to get a VM. It brought self-service, serendipitous consumption, and elasticity to virtualization. Similarly, API Management brings self-service and serendipitous consumption by allowing developers to discover, try and use services without requiring any type of "management approval". It frees consumers from having to worry about scaling - they just indicate the desired SLA (typically in the form of a subscription plan) and it's up to the provider to make it work right.

API Management & SOA are married at the hip

If you have an SOA strategy in your organization but don't have an API Management plan, then you are doomed to failure. Notice that I didn't even talk about externally exposing APIs - even internal service consumption should be managed through an API Management system so that everyone has clear visibility into who's using which service, and how much is used when. It's patently obvious why external exposure of services requires API Management.

Chris Haddad, WSO2's VP of Technology Evangelism, recently wrote a superb whitepaper that discusses and explains the connection between SOA and API Management. Check out Promoting service reuse within your enterprise and maximizing SOA success, and I can guarantee you will leave enlightened.

In May this year, a blog on highscalability.com talked about how "Startups Are Creating A New System Of The World For IT". In it, the author described open source as the foundation of this new system and SOA as the load-bearing walls of the new IT landscape. I will take it to the next level and say that API Management is the roof of the new IT house.

WSO2 API Manager

We recently introduced an API Management product: WSO2 API Manager. This product comes with an application for API providers to create and manage APIs, a store application for API developers to discover and consume APIs, and a gateway to route API traffic through. Of course all parts of the product can be scaled horizontally to deal with massive loads. The WSO2 API Manager can be deployed for internal consumption, external consumption or both. As with any other WSO2 product, this too is 100% open source. After you read Chris' whitepaper, download this product, sit it next to your SOA infrastructure (whether it's from us or not) and see what happens!





Time for me to stop commenting about politics and other sensitive topics

I've been cautioned and advised by several good friends that I should take a chill pill on commenting about various political things. Some of the topics I've been quite vocal about are high-profile matters involving high-powered people ... and I might be beginning to get noticed by them, which of course is not a good thing!

I get frustrated by political actions that I find stupid, and I don't hesitate to say straight out what I think about them. Obviously every such statement bothers someone. It's one thing when it's irrelevant noise, but if it gets noisy then I'm a troublemaker.

I'm not keen to get to that state.

It's not because I have anything to hide or protect - not in the least. Further, I'm not scared off by the PM telling private-sector people like me to "go home" or "be exposed", but publicly naming private individuals in parliament is rather over the top IMO. The last thing I want is to get there.

I have an immediate family and an extended family of 500+ in WSO2 that I'm responsible for. I'm taping up my big mouth for their sake.

Instead I will try to blog constructively & informatively whenever time permits.

Similarly, I will try to keep my big mouth under control about US politics too. It's really not my problem to worry about issues there!

I should really kill off my FB account. However, I do enjoy getting updates about friends' and family members' life events, and FB is great for that. So instead I'll stop following everyone except close friends and family.

It's been fun, and I like intense intellectual debate. However, maybe another day - just not now.

(P.S.: No, no one threatened me or forced me to do this. I just don't want to come close to that possibility!)





Platform-as-a-Service freedom or lock-in

There has been a set of discussions about lock-in around Platform-as-a-Service (PaaS): Joe McKendrick and Lori MacVittie in particular bring out some of the real challenges here.

Lori brings out the difference between portability and mobility. While I'm not in 100% agreement with Lori's definitions, there is a key point here: it's not just the code, it's the services that the code relies on that lock you into a cloud.

So, for example, if you use Amazon SQS, Force.com Chatter Collaboration or Google App Engine's Bigtable data store, all of these tie you into the cloud you are deployed onto. Amazon isn't really a PaaS yet, so the tie-in is minimal, but Google App Engine (GAE) is built on Authentication, Logging, Data, Cache and other core services. It's almost impossible to imagine building an app without these, and they all tie you into GAE. Similarly, VMforce relies on a set of services from Force.com.

But it's not just about mobility between Force.com and Google - between two PaaSes. The typical enterprise needs a private cloud as much as a public cloud. So there is a bigger question:

Can you move your application from a private PaaS to a public PaaS and back again?
In other words, even if Google and Force got together and defined a mobility layer, can I then take an app I built and run it internally? Neither Google nor Force is offering a private PaaS.

The second key question is this:
How can I leverage standard Enterprise Architecture in a PaaS?
What I'm getting at here is that as the world starts to implement PaaS, does this fit with existing models? Force.com and Google App Engine have effectively designed their own world view. VMforce and the recent Spring/Google App Engine announcement address one aspect of that - what Lori calls portability. By using Spring as an application model, there is at least a passing similarity to current programming models in Enterprises. But Enterprise Architectures are not just about Java code: what about an ESB? What about a Business Process engine (BPMS)? What about a standard XACML-based entitlement engine? So far PaaS has generally only addressed the most basic requirements of Enterprise core services: databases and an identity model.

So my contention is this: you need a PaaS that supports the same core services that a modern Enterprise architecture has: ESB, BPMS, Authentication/Authorization, Portal, Data, Cache, etc. And you need a PaaS that works inside your organization as well as in a public Cloud. And if you really don't want any lock-in.... hadn't that PaaS better be Open Source as well? And yes, this is a hint of things coming very soon!





WSO2 Stratos - Platform-as-a-Service for private and public cloud

Yesterday we announced something I believe is a game-changer: WSO2 Stratos. What is Stratos?

WSO2 Stratos is a complete SOA and developer platform offered as a self-service, multi-tenant, elastic runtime for private and public cloud infrastructures.
What that means is that our complete SOA platform - now enhanced with Tomcat and webapp support - is available as a "cloud native" runtime that you can use on the Web (yes - you can try it out right now), on Amazon VPC, or on your own internal private cloud based on Ubuntu Enterprise Cloud, Eucalyptus and (coming soon) VMware vSphere. It is a complete Platform-as-a-Service for private and public clouds.

I'll be writing more about Stratos over the coming weeks and months, and I'll also provide links and tweets to other Stratos blogs, but in this blog I want to simply answer three questions:

  1. I'm already talking to {VMware, Eucalyptus, Ubuntu, Savvis, Joyent} about private cloud - what does WSO2 add that they don't have?
  2. What is the difference between Stratos and the Cloud Images that WSO2 already ships?
  3. Why would I choose WSO2 over the other vendors offering Platform-as-a-Service?
In order to answer the first question, let's look at the cloud computing space, which is most easily divided up into:
  • Infrastructure-as-a-Service (IaaS): this is where Amazon, Eucalyptus, VMware, Savvis and Joyent play
  • Platform-as-a-Service (PaaS): Google App Engine, VMforce, Tibco Silver and now WSO2 Stratos play in this space.
  • Software-as-a-Service (SaaS): Google Apps, Google Mail, Microsoft Office Live, Salesforce, SugarOnDemand - these and many more make up the SaaS category.
To generalize wildly, most people talking about public cloud today are talking about SaaS. And most people talking about private cloud today are talking about IaaS.

SaaS is fantastic for quick productivity and low cost. WSO2 uses Google Apps, Sugar on Demand and several other SaaS apps. But SaaS doesn't create competitive advantage. Mule also uses Google Apps. They may well use Salesforce. SaaS cannot produce competitive advantage because your competitors get access to exactly the same low-cost services you do. In order to create competitive advantage you need to build as well as buy. For example, we use our Mashup Server together with our Sugar Business Messaging Adapter to provide insight and management of our pipeline that goes beyond what Sugar offers.

IaaS is of course a great basis to build apps. But it's just infrastructure. Yes - you get your VM hosted quicker. But someone has to create a useful VM. And that is where PaaS comes in. PaaS is how to speed up cloud development.

What does Stratos give you on top of an IaaS? It gives you an Application Server, Registry, Identity Server, Portal, ESB, Business Activity Monitor and Mashup Server. And it gives you these as-a-Service: completely self-service, elastically scalable, and granularly metered and monitored. If someone in your team needs an ESB, they can provision one for themselves instantly. And because it's multi-tenant, it costs nothing to run until it gets used. How do you know how it's used? The metering and monitoring tells you exactly how much each tenant uses.

2. What is the difference between Stratos and the existing WSO2 Cloud Images?

The cloud images we started shipping in December are not Cloud Native. Stratos is Cloud Native. In practice, this means that when you log into Stratos (go on, try it now) you can instantly provision your own domain, together with a set of Stratos services. This saves memory - instead of allocating a new VM and a minimum of half a gigabyte of memory to each new server, you get a new ESB at zero extra memory cost. And it's much easier. The new ESB is automatically governed and monitored. It's automatically elastically clustered.

3. Why would I choose WSO2 over other PaaS vendors?

Firstly, if you look at PaaS as a whole there is a huge divide between Public PaaS and Private PaaS. The public PaaS vendors simply don't offer private options. You can't run force.com or Google App Engine applications internally, even if you want to. WSO2 bridges that gap with a PaaS you can use in the public Web, on a virtual private cloud, or on premises.

The second big differentiator between WSO2 and the existing PaaS offerings is the architecture. Mostly PaaS is a way of building webapps. WSO2 offers a complete enterprise architecture - governance, business process, integration, portal, identity and mashups. And we support the common Enterprise Programming Model (not just Java, WebApp, JAX-WS, but also BPEL, XSLT, XPath, Google Gadgets, WSDL, etc). The only other PaaS that I know of that offers a full Enterprise architecture is Tibco Silver.

The third and most important differentiator is about lock-in. Software vendors love lock-in - and cloud vendors love it even more. So if you code to Google App Engine, you are tied into Google's identity model, Google's Bigtable, etc. If you code to Force.com or VMforce, you are tied to Force.com's infrastructure services. If you code to Tibco Silver, you are tied to Tibco. WSO2 fights this in three ways:
  • No code lock-in: we use standards-based coding (WAR, JAX-WS, POJO) and Stratos is 100% Apache License Open Source.
  • No model lock-in: we use standards-based services: 
    • Identity is based on OpenID, OAuth, XACML, WS-Trust
    • Registry is based on AtomPub and REST
    • Business Process is based on BPEL, etc
  • No hosting lock-in: you can take your apps and data from our public PaaS and re-deploy internally or on your own virtual private cloud anytime you like.
I hope you found this a useful introduction to Stratos. If you want more information, contact me at paul@wso2.com, or check out the Stratos website or code.





Using OSGi as the core of a middleware platform

Ross Mason of MuleSoft recently blogged: "OSGi - no thanks". Ross is a smart guy and he usually has something interesting to say. In this case, I think Ross has made a lot of good points:

1. Ross is right - OSGi is a great technology for middleware vendors.
2. Ross is right - Developers shouldn't be forced to mess with OSGi.
3. Ross is wrong - You can make both of these work together.

At WSO2 we went through exactly the same issues. We simply came to a different conclusion - that we can provide the benefits of OSGi (modularity, pluggability, dynamic loading) without giving pain to end-users. In WSO2 Carbon, customers can deploy their systems in exactly the same way that worked pre-OSGi.

Why did we choose OSGi? We also looked at building our own dynamic loading schemes. In fact, we've had dynamic classloading capabilities in our platform from day one. The reasons we went with OSGi are:

  • A structured and versioned approach to dynamic classloading
  • An industry standard approach - hence better understood, better skills, better resources
  • It solves more than just dynamic loading: as well as providing versions and dynamic loading, it also really gives proper modularity - which means hiding classes as much as exposing classes.
  • It provides (through Equinox p2) a proper provisioning model.
It wasn't easy. We struggled with OSGi to start with, but in the end we have a much stronger solution than if we had built our own. And we have made some great improvements. Our new Carbon Studio tooling gives a simple model for building complete end-to-end applications and hides OSGi completely from the end-user. The web admin consoles and deployment models allow complete deployment with zero OSGi. Drop a JAR in and we take care of the OSGi bundling for you.

The result - the best of both worlds - ease of use for developers and great middleware.





Understanding ESB Performance & Benchmarking

ESB performance is a hot (and disputed) topic. In this post I don't want to talk about different vendors or different benchmarks. I'm simply trying to help people understand some of the general aspects of benchmarking ESBs and what to look out for in the results.

The general ESB model is that you have some service consumer, an ESB in the middle and a service provider (target service) that the ESB is calling. To benchmark this, you usually have a load driver client, an ESB, and a dummy service.

+-------------+      +---------+      +---------------+
| Load Driver |------|   ESB   |------| Dummy Service |
+-------------+      +---------+      +---------------+

Firstly, we want the Load Driver (LD), the ESB and the Dummy Service (DS) to be on different hardware. Why? Because we want to understand the ESB performance, not the performance of the DS or LD.

The second thing to be aware of is that the performance results are completely dependent on the hardware, memory, network, etc used. So never compare different results from different hardware.

Now there are three things we could look at:
A) Same LD, same DS, different vendors' ESBs doing the same thing (e.g. content-based routing)
B) Same LD, same DS, different ESB configs for the same ESB, doing different things (e.g. static routing vs content-based routing)
C) Going via ESB compared to going Direct (e.g. LD--->DS without ESB)

Each of these provides useful data but each also needs to be understood.

Metrics
Before looking at the scenarios, let's look at how to measure the performance. The two metrics that are always a starting point in any ESB benchmark are throughput (requests/second) and latency (how long each request takes). With latency we can consider the overall latency - the time taken for a completed request as observed at the LD - and the ESB latency, which is the time the message spends in the ESB. The ESB latency can be hard to work out. A well-designed ESB will already be sending bytes to the DS before it's finished reading the bytes the LD has sent it. This is called pipelining. Some ESBs attempt to measure the ESB latency inside the ESB using clever calculations. Alternatively, scenario C (comparing via ESB vs direct) can give an idea of the ESB latency.

But before we look at the metrics we need to understand the load driver.

There are two different models to doing Load Driving:
1) Do a realistic load test based on your requirements. For example if you know you want to support up to 50 concurrent clients each making a call every 5 seconds on average, you can simulate this.
2) Saturation! Have a large number of clients, each making a call as soon as the last one finishes.

The first one is aimed at testing what the ESB does before it's fully CPU-loaded. In other words, if you are looking to see the effect of adding an ESB, or to compare one ESB to another under realistic load, then #1 is the right approach. In this approach, looking at throughput may not be useful, because all the different approaches have similar results. If I'm only putting in 300 requests a second on a modern system, I'm likely to see 300 requests a second. Nothing exciting. But the latency is revealing here. If one ESB responds in less time than another ESB, that's a very good sign, because with the same DS the average time per request is very telling.
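To make the two load-driving models concrete, here is a minimal sketch of a load driver (not a real benchmarking tool) that can run either in realistic mode, where each client sleeps for a think time between calls, or in saturation mode, where each client fires the next request as soon as the previous one returns. The target URL is a hypothetical placeholder, and the script reports the two metrics discussed above: throughput and mean latency.

# Minimal load-driver sketch (not a real benchmarking tool) showing the two
# models: a realistic closed loop with think time, and a saturation loop with
# no think time. The target URL is a hypothetical placeholder.
import statistics
import threading
import time
import urllib.request

TARGET = "http://esb.example.local:8280/service"   # hypothetical ESB endpoint

def client(n_requests, think_time, latencies):
    for _ in range(n_requests):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET, timeout=10) as response:
                response.read()
        except OSError:
            pass                                    # a real test would count errors
        latencies.append(time.perf_counter() - start)
        if think_time:
            time.sleep(think_time)                  # realistic mode; 0 means saturation

def run(clients, n_requests, think_time):
    latencies, threads = [], []
    wall_start = time.perf_counter()
    for _ in range(clients):
        t = threading.Thread(target=client, args=(n_requests, think_time, latencies))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    wall = time.perf_counter() - wall_start
    print(f"throughput: {len(latencies) / wall:.1f} req/s, "
          f"mean latency: {statistics.mean(latencies) * 1000:.1f} ms")

# run(clients=50, n_requests=20, think_time=5.0)   # ~10 req/s offered load (50 clients / 5 s)
# run(clients=100, n_requests=20, think_time=0)    # saturation: back-to-back requests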

On the other hand the saturation test is where the throughput is interesting. Before you look at the throughput though, check three things:
1) Is the LD CPU running close to 100%?
2) Is the DS CPU running close to 100%?
3) Is the network bandwidth running close to 100%?

If any of these are true, you aren't doing a good test of the ESB throughput, because if you are looking at throughput then you want the ESB to be the bottleneck. If something else is the bottleneck, then the ESB is not providing its maximum throughput and you aren't giving it a fair chance. For this reason, most benchmarks use a very, very lightweight LD or a clustered LD, and similarly use a DS that is superfast rather than realistic. Sometimes the DS is coded to do some real work, or to sleep the thread while it's executing, to provide a more realistic load test. In this case you probably want to look at latency more than throughput.

Finally, you are looking for a particular behaviour in throughput testing as you increase load.
Throughput vs Load
The shape of this graph shows an ideal scenario. As the LD puts more work through the ESB, it responds linearly. At some point the CPU of the ESB hits maximum, and then the throughput stabilizes. What we don't want to see is the line drooping at the far right. That would mean that the ESB is crumpling under the extra load, and it's failing to manage the extra load effectively. This is like the office worker whose efficiency increases as you give them more work, but eventually they start spending all their time re-organizing their todo lists and less work overall gets done.

Under the saturation test you really want to see the CPU of the ESB close to 100% utilised. Why? This is a sign that it's doing as much as possible. Why would it not be 100%? Three reasons: I/O, multi-processing and thread locks. Either the network card, disk or other I/O is holding it up, the code is not efficiently using the available cores, or there are thread contention issues.

Finally, it's worth noting that you expect the latency to increase a lot under the saturation test. A classic result is this: I do static routing for different message sizes with a 100-client LD. For message sizes up to 100k, maybe I see a constant 2 ms overhead for using the ESB. Suddenly, as the message size grows from 100k to 200k, I see the overhead growing in proportion to the message size.


Is this such a bad thing? No, in fact this is what you would expect. Before 100k message size, the ESB is underloaded. The straight line up to this point is a great sign that the ESB is pipelining properly. Once the CPU becomes loaded, each request takes longer because it's being made to wait its turn at the ESB while the ESB deals with the increased load.

A big hint here: When you look at this graph, the most interesting latency numbers occur before the CPU is fully loaded. The latency after the CPU is fully loaded is not that interesting, because it's simply a function of the number of queued requests.

Now that we understand the metrics, let's look at the actual scenarios.

A. Different Vendors, Same Workload
For the first comparison (different vendors), the first thing to be careful of is that the scenario is implemented in the best way possible in each ESB. There are usually a number of ways of implementing the same scenario. For example, the same ESB may offer two (or more!) different HTTP transports: blocking vs non-blocking, servlet vs library, etc. There may be an optimum approach, and it's worth reading the docs and talking to the vendor to understand the performance tradeoffs of each approach.

Another thing to be careful of in this scenario is the tuning parameters. Each ESB has various tuning aspects that may affect the performance depending on the available hardware. For example, setting the number of threads and memory based on the number of cores and physical memory may make a big difference.

Once you have your results, assuming everything we've already looked at is tickety-boo, then both latency and throughput are interesting and valid comparisons here. 

B. Different Workloads, Same Vendor
What this is measuring is what it costs you to do different activities with the same ESB. For example, static routing is likely to be faster than content-based routing, which in turn is faster than a transformation. The data from this tells you the cost of doing different functions with the ESB. For example, you might want to do a security authentication/authorization check: you should see a constant bump in latency for the security check, irrespective of message size. But if you were doing a complex transformation, you would expect to see higher latency for larger messages, because they take more time to transform.

C. Direct vs ESB
This is an interesting one. Usually this is done for a simple static routing/passthrough scenario. In other words, we are testing the ESB doing the minimum it possibly can. Why bother? Well, there are two different reasons. Firstly, ESB vendors usually do this for their own benefit as a baseline test. In other words, once you understand the passthrough performance you can then see the cost of doing more work (e.g. logging a header, validating security, transforming the message).

Remember the two testing methodologies (realistic load vs saturation)? You will see very, very different results in each for this, and the data may seem surprising. For the realistic test, remember we want to look at latency. This is a good comparison for the ESB: how much extra time is spent going through the ESB per request under normal conditions? For example, if the average request to the backend takes 18 ms and the average request via the ESB takes 19 ms, we have an average ESB latency of 1 ms. This is a good result - the client is not going to notice much difference - less than 5% extra.

The saturation test here is a good test to compare different ESBs. For example, suppose I can get 5000 reqs/sec direct. Via ESB_A the number is 3000 reqs/sec and via ESB_B the number is 2000 reqs/sec, I can say that ESB_A is providing better throughput than ESB_B. 

What is not a good metric here is comparing throughput in saturation mode for direct vs ESB.


Why not? The reason here is a little complex to explain. Remember how we coded the DS to be as fast as possible so as not to be a bottleneck? So what is the DS doing? It's really just reading bytes and sending bytes as fast as it can. Assuming the DS code is written efficiently using something really fast (e.g. just a servlet), what this is testing is how fast the hardware (CPU plus network card) can read and write through user space in the operating system. On a modern server box you might get a very high number of transactions per second - maybe 5,000 req/s with each message in and out being 1k in size.

So we have 1k in and 1k out = 2k of I/O per request.
2k of I/O x 5,000 reqs/sec x 8 bits/byte gives us a total network bandwidth of 80 Mbit/s (excluding Ethernet headers and overhead).

Now let's look at the ESB. Imagine it can handle 100% of the direct load, with no slowdown in throughput. For each request it has to read the message in from the LD and send it out to the DS. Even if it's doing this in pipelining mode, there is still a CPU cost and an I/O cost. So the ESB latency may be 1 ms, but the CPU and I/O cost is much higher. Now, for each response it also has to read the message in from the DS and write it out to the LD. So if the DS is doing 80 Mbit/s, the ESB must be doing 160 Mbit/s.
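A quick back-of-the-envelope check of those figures (a sketch using the example numbers above, not measured data):

# Back-of-the-envelope check of the figures above: 1k in + 1k out per request
# at 5,000 req/s, treating 1k as 1,000 bytes as in the text.
msg_in_bytes = 1_000
msg_out_bytes = 1_000
reqs_per_sec = 5_000

direct_bits_per_sec = (msg_in_bytes + msg_out_bytes) * reqs_per_sec * 8
print(f"DS traffic (direct): {direct_bits_per_sec / 1e6:.0f} Mbit/s")   # ~80 Mbit/s

# The ESB has to read and write every request *and* every response, so it
# moves twice as many bytes for the same request rate.
esb_bits_per_sec = 2 * direct_bits_per_sec
print(f"ESB traffic        : {esb_bits_per_sec / 1e6:.0f} Mbit/s")      # ~160 Mbit/s

# On identical hardware that tops out at ~80 Mbit/s, a single colocated ESB
# can therefore sustain at most about half the direct request rate.
print(f"max ESB rate       : {reqs_per_sec // 2} req/s")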


Now, if the LD is good enough, it will have loaded the DS to the max: CPU or I/O capacity, or both, will be maxed out. Suppose the ESB is running on the same hardware platform as the DS. If the DS machine can do 80 Mbit/s flat out, there is no way that the same hardware running as an ESB can do 160 Mbit/s! In fact, if the ESB and DS code are both as efficient as possible, then the throughput via the ESB will always be 50% of the throughput direct to the DS. There is one possible way for the ESB to do better: it can be better coded than the DS. For example, if the ESB did transfers in kernel space instead of user space, that might make a difference. The real answer here is to look at the latency: what is the overhead of adding the ESB to each request? If the ESB latency is small, then we can solve this problem by clustering the ESB. In this case we would put two ESBs in and then get back to full throughput.

The real point of this discussion is that this is not a useful comparison. In reality, backend target services are usually pretty slow. If the same dual-core server is actually doing some real work - e.g. database lookups, calculations, business logic - then it's much more likely to be doing 500 requests a second or even less.

The following chart shows real data to demonstrate this. The X-Axis shows increasing complexity of work at the backend (DS). As the effort taken by the backend becomes more realistic, the loss in throughput of having an ESB in the way reduces. So with a blindingly fast backend, we see the ESB struggling to provide just 55% of the throughput of the direct case. But as the backend becomes more realistic, we see much better numbers. So at 2000 requests a second there is barely a difference (around 10% reduction in throughput). 


In real life, what we actually see is that often you have many fewer ESBs than backend servers. For example, if we took the scenario of a backend server that can handle 500 reqs/sec, then we might end up with a cluster of two ESBs handling a cluster of 8 backends. 

Conclusion
I hope this blog has given a good overview of ESB performance and benchmarking - in particular, when it is a good idea to look at latency and when to use throughput.






Treat For Legal Interpreters and an Archive for Translators As Well

OpenCourt is an experimental project run by WBUR, Boston’s NPR news station, that uses digital technology to make Quincy District Court more accessible to the public.



Berkeley, UCLA, Harvard, MIT, Princeton, Stanford and Yale on Academic Earth

If you use the Google Chrome browser, you can find some very useful tools. I have just come across the Academic Earth online extension, which offers free access to videos of courses and lectures from the most prestigious universities in the United States, across the most diverse subjects. ENJOY KNOWLEDGE!

You can watch lectures such as, for example,
Language in the Brain, Mouth and the Hands
By Paul Bloom - Yale



Watch it on Academic Earth






Wow! A search engine for scientific literature. An endless source of information

SciVerse



Training for legal translators. Part IV. Make at least one big study commitment.

This is the last part of a series on training for legal translators. See the first post here. To put yourself on the path to becoming a good legal translator, you need to make one big study commitment. A big study commitment is anything that takes at least a year, challenges you, and costs a lot of […]





Translating notary terms 2: What are public-form and private-form notarial acts?

A public-form notarial act is a document drafted by a notary that contains the entire notarial act. It is narrated from the notary’s perspective and includes all the details and circumstances of the act. All Spanish notarial acts are in public form (documents elevados a público). In England and Wales, notarial acts are usually in […]





Translating notary terms 3: How to translate the names of Spanish public-form notarial acts into English

This post looks at how to translate the names of the two* main types of public-form Spanish notarial acts, escrituras públicas and actas notariales. It also identifies handy language to use in translations of them. Escritura pública An escritura pública records an act executed before a notary. How you translate the name of an escritura […]





Translating notary terms 4: Is “deed” a good translation for escritura pública?

“Deed” is sometimes used as a translation for escritura pública. Is it a good translation? What is a deed? A deed is a formal legal document. In England and Wales, transfers of land, mortgages, powers of attorney, some business agreements and wills must be executed as deeds. In the US, deeds are only required for […]





Funded pensions: Fondapol's proposals for reforming the retirement system

According to the economist Bertrand Martinot, a dose of funded pension provision should be introduced for reasons of fairness between generations and of economic efficiency.





Impact of CPU-bound Processes on IP Forwarding of Linux and Windows XP

These days, commodity off-the-shelf (COTS) hardware and software are used to build high-end and powerful workstations and servers to be deployed in today's local area networks of private homes and small- to medium-sized businesses. Typically, these servers are multipurpose and shared, running networking functionalities involving IP packet forwarding in addition to other CPU-intensive applications. In this paper we study and investigate the impact of running CPU-bound applications on the performance of IP packet forwarding. We measure and compare the impact and performance for the two operating systems of choice for home and small-business users, namely Linux and Windows XP. The performance is studied in terms of key performance metrics which include throughput, packet loss, round-trip delay, and CPU availability. For our measurements, we consider today's typical home-network hosts with modern processors and Gigabit network cards. We also consider different configuration setups and utilize open-source tools to generate relatively high traffic rates. Our empirical results show that Linux exhibits superior IP forwarding performance compared to Windows XP. Results also show that, unlike Windows XP, the IP forwarding performance of Linux is not significantly impacted by running CPU-bound applications.





Information Consolidation in Large Bodies of Information

Due to information technologies the problem we are facing today is not a lack of information but too much information. This phenomenon becomes very clear when we consider two figures that are often quoted: Knowledge is doubling in many fields (biology, medicine, computer science, ...) within some 6 years; yet information is doubling every 8 months! This implies that the same piece of information/knowledge is published a large number of times with small variations.

Just look at an arbitrary news item. If considered of some general interest reports of it will appear in all major newspapers, journals, electronic media, etc. This is also the problem with information portals that tie together a number of large databases.

It is our contention that we need methods to reduce the huge set of information concerning a particular topic to a number of pieces of information (let us call each such piece an "essay" in what follows) that present a good cross-section of potential points of view. We will explain why one essay is usually not enough, yet the problem of reducing a huge number of contributions to a digestible number of essays is formidable; indeed, it is science fiction at the moment. We will argue in this paper that it is one of the important tasks of computer science to start tackling this problem, and we will show that in some special cases partial solutions are possible.





A New Approach to Water Flow Algorithm for Text Line Segmentation

This paper proposes a new approach to the water flow algorithm for text line segmentation. The original method assumes hypothetical water flows under a few specified angles to the document image frame, from left to right and vice versa. As a result, unwetted image frames are extracted; these areas are of major importance for text line segmentation. The modifications to the method consist of extending the water flow angle values and enlarging the function of the unwetted image frames. The results are encouraging, as they improve text line segmentation, which is the most challenging stage of document image processing.





An OCR Free Method for Word Spotting in Printed Documents: the Evaluation of Different Feature Sets

An OCR free word spotting method is developed and evaluated under a strong experimental protocol. Different feature sets are evaluated under the same experimental conditions. In addition, a tuning process in the document segmentation step is proposed which provides a significant reduction in terms of processing time. For this purpose, a complete OCR-free method for word spotting in printed documents was implemented, and a document database containing document images and their corresponding ground truth text files was created. A strong experimental protocol based on 800 document images allows us to compare the results of the three feature sets used to represent the word image.





Fusion of Complementary Online and Offline Strategies for Recognition of Handwritten Kannada Characters

This work describes an online handwritten character recognition system working in combination with an offline recognition system. The online input data is also converted into an offline image and recognized in parallel by both online and offline strategies. Features are proposed for offline recognition, and a disambiguation step is employed in the offline system for the samples for which the confidence level of the classifier is low. The outputs are then combined probabilistically, resulting in a classifier that outperforms both individual systems. Experiments are performed for Kannada, a South Indian language, over a database of 295 classes. The accuracy of the online recognizer improves by 11% when the combination with the offline system is used.





Developing a Mobile Collaborative Tool for Business Continuity Management

We describe the design of a mobile collaborative tool that helps teams manage critical computing infrastructures in organizations, a task usually designated Business Continuity Management. The design process started with a requirements definition phase based on interviews with professional teams. The elicited requirements highlight four main concerns: collaboration support, knowledge management, team performance, and situation awareness. Based on these concerns, we developed a data model and tool supporting the collaborative update of Situation Matrixes. The matrixes aim to provide an integrated view of the operational and contextual conditions that frame critical events and inform the operators' responses to events. The paper provides results from our preliminary experiments with Situation Matrixes.





An Empirical Study on Human and Information Technology Aspects in Collaborative Enterprise Networks

Small and Medium Enterprises (SMEs) face new challenges in the global market as customers require more complete and flexible solutions and continue to drastically reduce the number of suppliers. SMEs are trying to address these challenges through cooperation within collaborative enterprise networks (CENs). Human aspects constitute a fundamental issue in these networks, as it is people, as opposed to organizations or Information Technology (IT) systems, who cooperate. Since there is a lack of empirical studies on the role of human factors in IT-supported collaborative enterprise networks, this paper addresses the major human aspects encountered in this type of organization. These human aspects include trust issues, knowledge and know-how sharing, coordination and planning activities, and communication and mutual understanding, as well as their influence on the business processes of CENs supported by IT tools. This paper empirically proves that these aspects constitute key factors for the success or failure of CENs. Two case studies performed on two different CENs in Switzerland are presented and the roles of human factors are identified with respect to the IT support systems. Results show that specific human factors, namely trust, communication and mutual understanding, have to be well addressed in order to design and develop adequate software solutions for CENs.





Managing Mechanisms for Collaborative New-Product Development in the Ceramic Tile Design Chain

This paper focuses on improving the management of New-Product Development (NPD) processes within the particular context of a cluster of enterprises that cooperate through a network of intra- and inter-firm relations. Ceramic tile design chains have certain singularities that condition the NPD process, such as the lack of a strong hierarchy, fashion pressure or the existence of different origins for NPD projects. We have studied these particular circumstances in order to tailor Product Life-cycle Management (PLM) tools and some other management mechanisms to fit suitable sectoral reference models. Special emphasis will be placed on PLM templates for structuring and standardizing projects, and also on the roles involved in the process.





Security and Privacy Preservation for Mobile E-Learning via Digital Identity Attributes

This paper systematically discusses the security and privacy concerns of e-learning systems. A five-layer architecture for e-learning systems is proposed, and the security and privacy concerns are addressed for each of the five layers. The paper further examines the relationship among the security and privacy policy, the available security and privacy technology, and the degree of e-learning privacy and security. Digital identity attributes are introduced to e-learning portable devices to enhance the security and privacy of e-learning systems. This will provide significant contributions to the knowledge of the e-learning security and privacy research communities and will generate more research interest.





Realising the Potential of Web 2.0 for Collaborative Learning Using Affordances

With the emergence of the Web 2.0 phenomenon, technology-assisted social networking has become the norm. The potential of social software for collaborative learning purposes is clear, but as yet there is little evidence that the benefits are being realised. In this paper we consider Information and Communication Technology students' attitudes to collaboration and, via two case studies, the extent to which they exploit the use of wikis for group collaboration. Even when directed to use a particular wiki designed for the type of project they are involved with, we found that groups utilized the wiki in different ways according to the affordances ascribed to the wiki. We propose that the integration of activity theory with an affordances perspective may lead to improved technology-assisted collaboration, specifically with Web 2.0.





Coordinated System for Real Time Muscle Deformation during Locomotion

This paper presents a system that simulates, in real time, the volumetric deformation of muscles during human locomotion. We propose a two-layered motion model. The requirements of realism and real-time computation lead to a hybrid locomotion system that uses a skeleton as the first layer. The muscles, represented by an anatomical surface model, constitute the second layer, whose deformations are simulated with a finite element method (FEM). The FEM subsystem is fed by the torques and forces obtained from the locomotion system through a line-of-action model, and takes into account the geometry and material properties of the muscles. High-level parameters (like height, weight, physical constitution, step frequency, step length or speed) allow the individuals and the locomotion, and therefore the deformation of each person's muscles, to be customized.





The Architectural Design of a System for Interpreting Multilingual Web Documents in E-speranto

E-speranto is a formal language for generating multilingual texts on the World Wide Web. It is currently still under development. The vocabulary and grammar rules of E-speranto are based on Esperanto; the syntax of E-speranto, however, is based on XML (eXtensible Markup Language). The latter enables the integration of documents generated in E-speranto into web pages. When a user accesses a web page generated in E-speranto, the interpreter interprets the document into a chosen natural language, which enables the user to read the document in any arbitrary language supported by the interpreter.

The basic parts of the E-speranto interpreting system are the interpreters and information resources, which complies with the principle of separating the interpretation process from the data itself. The architecture of the E-speranto interpreter takes advantage of the resemblance between the languages belonging to the same linguistic group, which consequently results in a lower production cost of the interpreters for the same linguistic group.

We designed a proof-of-concept implementation for interpreting E-speranto in three Slavic languages: Slovenian, Serbian and Russian. These languages share many common features in addition to having a similar syntax and vocabulary. The content of the information resources (vocabulary, lexicon) was limited to the extent that was needed to interpret the test documents. The testing confirmed the applicability of our concept and also indicated the guidelines for future development of both the interpreters and E-speranto itself.





On Compound Purposes and Compound Reasons for Enabling Privacy

This paper puts forward a verification method for compound purposes and compound reasons to be used during purpose limitation.

When it is absolutely necessary to collect privacy related information, it is essential that privacy enhancing technologies (PETs) protect access to data - in general accomplished by using the concept of purposes bound to data. Compound purposes and reasons are an enhancement of purposes used during purpose limitation and binding and are more expressive than purposes in their general form. Data users specify their access needs by making use of compound reasons which are defined in terms of (compound) purposes. Purposes are organised in a lattice with purposes near the greatest lower bound (GLB) considered weak (less specific) and purposes near the least upper bound (LUB) considered strong (most specific).

Access is granted based on the verification of the statement of intent (from the data user) against the compound purpose bound to the data; however, because purposes are in a lattice, the data user is not limited to a statement of intent that matches the purposes bound to the data exactly - the statement can be a true reflection of their intent with the data. Hence, the verification of compound reasons against compound purposes cannot be accomplished by current published verification algorithms.

Before the verification method is presented, compound purposes and reasons, the structures used to represent them, and the operators used to define compounds are described. Finally, some thoughts on implementation are provided.





IDEA: A Framework for a Knowledge-based Enterprise 2.0

This paper looks at the convergence of knowledge management and Enterprise 2.0 and describes the possibilities for an over-arching exchange and transfer of knowledge in Enterprise 2.0. This will be underlined by the concrete example of T-Systems Multimedia Solutions (MMS), which describes the establishment of a new enterprise division, "IG eHealth", typified by the decentralised development of common ideas, collaboration, and the support that Enterprise 2.0 tools provide for carrying out responsibilities. Taking this archetypal example, and the abstraction of the knowledge-worker collaboration problem derived from it, as the basis, a regulatory framework will be developed for knowledge management to serve as a template for the systemisation and definition of specific Enterprise 2.0 activities. The paper will conclude by stating success factors and supporting Enterprise 2.0 activities, which will facilitate the establishment of a practical knowledge management system for the optimisation of knowledge transfer.





Enterprise Microblogging for Advanced Knowledge Sharing: The References@BT Case Study

Siemens is well known for ambitious efforts in knowledge management, providing a series of innovative tools and applications within the intranet. References@BT is one such web-based application, currently with more than 7,300 registered users from more than 70 countries. Its goal is to support the sharing of knowledge, experiences and best practices globally within the Building Technologies division. Launched in 2005, References@BT features structured knowledge references, discussion forums, and a basic social networking service. In response to user demand, a new microblogging service, tightly integrated into References@BT, was implemented in March 2009. More than 500 authors have created around 2,600 microblog postings since then. Following a brief introduction to the community platform References@BT, we comprehensively describe the motivation, experiences and advantages for an organization in providing internal microblogging services. We provide detailed microblog usage statistics, analyzing the top ten users by postings and followers as well as the top ten topics. In doing so, we aim to shed light on microblogging usage and adoption within a globally distributed organization.





A Clustering Approach for Collaborative Filtering Recommendation Using Social Network Analysis

Collaborative Filtering (CF) is a well-known technique in recommender systems. CF exploits relationships between users and recommends items to the active user according to the ratings of his/her neighbors. CF suffers from the data sparsity problem, where users rate only a small set of items, which makes the computation of similarity between users imprecise and consequently reduces the accuracy of CF algorithms. In this article, we propose a clustering approach based on the social information of users to derive the recommendations. We study this approach in two application scenarios: academic venue recommendation based on collaboration information, and trust-based recommendation. Using data from the DBLP digital library and Epinions, the evaluation shows that our clustering-based CF technique performs better than traditional CF algorithms.
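As a generic illustration of the idea only (this is not the authors' algorithm, data or evaluation), the sketch below clusters users on a toy social-connection matrix and then predicts a missing rating from the active user's cluster neighbours.

# Generic illustration of clustering-based CF on toy data; this is not the
# paper's algorithm or its datasets (DBLP, Epinions).
import numpy as np
from sklearn.cluster import KMeans

# Toy social-connection matrix: users x users (1 = social tie).
social = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])

# Toy ratings matrix: users x items (0 = unrated), deliberately sparse.
ratings = np.array([
    [5, 0, 3, 0],
    [4, 0, 0, 2],
    [0, 5, 4, 0],
    [1, 0, 0, 5],
    [0, 2, 1, 4],
    [2, 0, 0, 5],
])

# 1) Cluster users by their social connections rather than by their ratings.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(social)

# 2) Predict a missing rating as the mean of the known ratings given by the
#    active user's cluster neighbours.
def predict(user, item):
    neighbours = [u for u in range(len(ratings))
                  if clusters[u] == clusters[user] and u != user and ratings[u, item] > 0]
    if not neighbours:
        return None
    return float(np.mean([ratings[u, item] for u in neighbours]))

print(predict(0, 1))   # estimate user 0's rating of item 1 from their cluster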





Bio-Inspired Mechanisms for Coordinating Multiple Instances of a Service Feature in Dynamic Software Product Lines

One of the challenges in Dynamic Software Product Line (DSPL) is how to support the coordination of multiple instances of a service feature. In particular, there is a need for a decentralized decision-making capability that will be able to seamlessly integrate new instances of a service feature without an omniscient central controller. Because of the need for decentralization, we are investigating principles from self-organization in biological organisms. As an initial proof of concept, we have applied three bio-inspired techniques to a simple smart home scenario: quorum sensing based service activation, a firefly algorithm for synchronization, and a gossiping (epidemic) protocol for information dissemination. In this paper, we first explain why we selected those techniques using a set of motivating scenarios of a smart home and then describe our experiences in adopting them.