form

The Cross - Hope, Transformation, Warning




form

Transfiguration and Transformation

What can the Transfiguration teach us about being the bee?




form

Faith Informing Politics

Amir Azarvan, an economically progressive political science lecturer at Kennesaw State University, and William Hinkle (Subdeacon Basil), Minority Whip in the Washington state house, speak about how their shared Orthodox faith has shaped some very different political thinking.




form

Early Lutheran/Orthodox Dialog After The Reformation

Most Christians are not aware that in the latter part of the 16th century, early Lutheran Reformers - close colleagues and followers of Martin Luther - initiated eight years of contact and correspondence with the then Ecumenical Patriarch, Jeremias II of Constantinople. The outcome might have changed the course of Christian history. Kevin Allen speaks with scholar Dr Paraskeve (Eve) Tibbs about this fascinating and largely unknown chapter in post-Reformation history.




form

Rock and Sand - Orthodoxy and Reformed Theology

We welcome Kevin Allen back to AFR with his first podcast since returning, part 1 of an interview with Fr. Josiah Trenham. They discuss Fr. Josiah's new book Rock and Sand - An Orthodox Appraisal of the Protestant Reformers and their Teaching, published by New Rome Press. This interview is also available in video format from Patristic Nectar Films.




form

Rock and Sand - Orthodoxy and Reformed Theology - Part 2

In part 2 of his interview with Fr. Josiah Trenham, Kevin Allen gets specific about some of the modern-day expressions of Reformed teaching and how they differ from the teaching of the Orthodox Church. Fr. Josiah authored the book Rock and Sand: An Orthodox Appraisal of the Protestant Reformers and Their Theology. A video version of this interview is also available.




form

Steps to Transformation

In this first episode of 2016, host Kevin Allen speaks with Archimandrite and Abbot Sergius of the Monastery of Saint Tikhon of Zadonsk, the oldest Orthodox monastery in the U.S., about practical ways Christians can cooperate with the grace of God to "...be transformed ..." (Rom. 12:2) into the Likeness of Christ. Abbot Sergius is the author of the book "Acquiring the Mind of Christ: Embracing the Vision of the Orthodox Church" (St Tikhon Seminary Press). Father Sergius is the 16th Abbot of Saint Tikhon’s Monastery and Lecturer of Orthodox Spirituality at Saint Tikhon’s Orthodox Theological Seminary.




form

Story: Xooglers, Google's former Marketing Director tells his story

Some great stories about Google's early days, with more to come.




form

Former Wales lock Ball to retire at end of season

Former Wales lock Jake Ball announces he will retire from professional rugby at the end of the season.




form

Council plans 118 new homes on former golf course

The proposals for Inverness include 30 flats and almost 40 houses.




form

Education reforms to be introduced in phases

Deputy Andrea Dudley-Owen says changes to the law will be introduced in "bite-sized chunks".




form

Pulling fastest in historic all-female Formula E test

Britain's Abbi Pulling makes history by finishing fastest in the first all-female Formula E testing session in Spain.




form

Former pub site to become 68-bedroom care home

Plans to demolish the former Wordsworth Tavern in Parson Cross are approved by the council.




form

Bath’s Jamie Chadwick signs as Formula E Jaguar test driver

She’s one of 22 female drivers taking part in a Formula E test event in Valencia.




form

Robbie Williams to perform at 'iconic' venue

Pop star Robbie Williams will perform at Bath's Royal Crescent in June next year.




form

Singer 'couldn't leave the house to perform'

Singer Catherine Lawless features in a film about agoraphobia, showing at Headfest in Bedford.




form

Hayden has 'no doubt' Carlisle will turn form around

Carlisle United defender Aaron Hayden is convinced the team is on the right track after they ended a seven-game winless streak.




form

Council to sell four-storey block of former flats

Douglas Council's 1930s development on Lord Street remains boarded up after tenants were relocated in 2022.




form

Reward of £1,000 for information on wanted man

Ryan Ward, 31, is wanted in connection with incidents of assault and criminal damage.




form

Why is Cromer closing its information centre?

The tourist information centre in Cromer has been vital for some, but is set to close.




form

How AI is Transforming User Experience (UX) 

Artificial Intelligence (AI) is changing how user experience design is handled across various industries by playing a vital role in developing tailored and seamless experiences for users. Starting from app […]





form

“That’s how we silence them”: Verstappen’s stunning Brazil win from start to finish | Formula 1

From pre-race confusion to post-race joy, from 17th on the grid to a stunning win, here's how Max Verstappen's Brazilian Grand Prix unfolded on his radio.




form

Interlagos must improve “very bad” new track surface for 2025, say F1 drivers | Formula 1

Formula 1 drivers urged the operators of the Interlagos circuit to improve the new surface they laid ahead of this year's event.




form

Horner controversy “for sure had a negative impact” on Red Bull staff | Formula 1

The controversy which surrounded Red Bull team principal Christian Horner earlier this year "had an impact" on their staff, according to a long-serving ex-F1 engineer.




form

The times McLaren came closest to breaking 25-year constructors’ title drought | Formula 1

McLaren could be set to win their first constructors' title for 25 years this season. Here is how close they've come over that time.




form

Every way Verstappen can clinch the championship at the Las Vegas Grand Prix | Formula 1

Max Verstappen is poised to clinch the 2024 drivers' championship if he finishes ahead of Lando Norris one more time. Here's how he can seal a fourth title at the next race.




form

Why Mercedes put ‘a reminder of joy and pain’ on display in their factory lobby | Formula 1

Mercedes have put the car from Lewis Hamilton's controversial 2021 championship defeat on display in the lobby at their factory.




form

Bortoleto pushed for 2025 F1 debut to avoid missing a year of racing | Formula 1

Gabriel Bortoleto said he was determined not to sit out a year of racing in 2025 after Sauber confirmed he will make his debut for them in Formula 1 next year.




form

Alpine confirm switch to Mercedes power when Renault ends F1 engine project | Formula 1

Alpine have officially announced they will use Mercedes power units when Formula 1 introduces its new engine regulations in 2026.




form

Fallows steps down as Aston Martin’s technical director | Formula 1

Dan Fallows is stepping down as Aston Martin's technical director, two-and-a-half years after taking over the role.




form

Wittich replaced as F1 race director, Marques to take over from Las Vegas | Formula 1

Niels Wittich has unexpectedly stood down from his role as Formula 1's race director. The FIA named Rui Marques as his replacement.




form

F1 teams to reveal 2025 liveries together at first season launch event in London | Formula 1

All 10 Formula 1 teams will participate in a new "season launch event" in February next year to reveal their liveries together.




form

Don’t underestimate how tough a job F1’s new race director faces | Formula 1

Niels Wittich's unexpected departure as Formula 1's race director recreates the circumstances in which Michael Masi was thrown in at the deep end.




form

Licensing reforms would ease Michigan’s pain

Let anesthesiology assistants work for themselves




form

New entrepreneurship training courses for translators

 

I recently worked with CI3M to create a range of entrepreneurship training courses for professional translators who want to set up or grow their business.

 

Offered as three programmes, the training is:

  • taught remotely;
  • reimbursable by the FIF PL or other funding bodies; and
  • built around personalised weekly follow-up, so the content can be adapted to your needs.

Business creation and development course

This comprehensive 8-week programme is designed to help translators prepare, launch, manage and grow their professional activity. Through practical exercises and personalised coaching (8 hours of one-to-one tutoring by telephone), you will learn how to:

 

1. Take stock of your knowledge of the translation profession and assess how ready you are to practise it as a freelancer, so you can better prepare the creation of your business.

 

2. Promote and sell your services by defining your positioning, your pricing policy and your communication strategy, so you can prospect, sell and follow up with your clients effectively.

 

3. Run a translation business, in particular complying with a code of ethics and with the many accounting, tax and legal obligations that come with freelance practice. To succeed and increase your income, you will also learn how to steer your business and anticipate change so you can seize opportunities.


4. Step back and maintain a healthy balance between your personal and professional life.

 

Marketing course

This specialised module is aimed at translators who are already established, or at those who want to cut straight to the essentials of identifying clients and building their loyalty.

 

After an initial assessment, which scopes the market research needed to build a solid marketing strategy, three weeks of tutoring will give you the methods and tools you need to sell more, and sell better.

 

Management and business administration course

Another specialised module, this course covers all the aspects that freelance translators tend to neglect: administrative formalities, tax, accounting, social protection, business management and growth strategy. In short, all the knowledge and skills needed to succeed as an entrepreneur.

 

Three weeks of tutoring are devoted to answering your questions, supporting your administrative steps and providing tools to simplify the day-to-day running of your business and secure its future.

 

REGISTRATION AND FUNDING

Drawing on a teaching team of expert translators, CI3M offers distance-learning professional training, some of it leading to a qualification, covering the full range of skills required to deliver translation and technical writing services. To find out more about course fees and registration, contact CI3M on +33 (0)2 30 96 04 42.

 

Depending on your situation, all or part of the cost may be reimbursed or covered by the Fonds interprofessionnel de formation des professionnels libéraux (FIF PL), Pôle Emploi, your Compte personnel de formation (CPF), and so on.

 


About the author

An accredited international trade professional who spent several years advising SMEs, Gaële Gagné has been a freelance translator since 2005. At the helm of Trëma Translations, she translates from English into French and shares her marketing and business-management know-how with fellow translators through a blog called Mes petites affaires and training courses delivered via CI3M.


What next?


Read more articles:

A translation degree: essential or superfluous?
Invoicing properly to get paid
Writing an effective translation quote
Translators: 3 ways to train without breaking the bank
New translators: 10 tips for getting off to a good start




form

Platform-as-a-Service freedom or lock-in

There has been a set of discussions about lock-in around Platform-as-a-Service (PaaS): Joe McKendrick and Lori MacVittie in particular bring out some of the real challenges here.

Lori brings out the difference between portability and mobility. While I'm not in 100% agreement with Lori's definitions, there is a key point here: it's not just the code, it's the services that the code relies on that lock you into a cloud.

So for example, if you use Amazon SQS, Force.com Chatter Collaboration, or Google App Engine's Bigtable data store, all of these tie you into the cloud you are deployed onto. Amazon isn't really a PaaS yet, so the tie-in is minimal, but Google App Engine (GAE) is based on Authentication, Logging, Data, Cache and other core services. It's almost impossible to imagine building an app without these, and they all tie you into GAE. Similarly, VMforce relies on a set of services from Force.com.
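To make that concrete, here is a minimal, hypothetical sketch (illustrative names only, not any vendor's actual SDK) of the seam many teams end up introducing: the application codes against its own narrow interface, and the provider-specific queue, datastore or identity calls are confined to one adapter per cloud. It limits the blast radius of the lock-in, but it doesn't remove it - the semantics of the underlying services still differ from cloud to cloud, which is exactly the point above.

// Illustrative only: a narrow, application-owned interface.
public interface MessageQueue {
    void send(String queueName, String payload);
    String receive(String queueName);
}

// One adapter per cloud; only this class would import the vendor SDK
// (an SQS client, a GAE task queue, etc.). Swapping clouds means
// rewriting this class, not the application code that uses MessageQueue.
class VendorQueueAdapter implements MessageQueue {
    public void send(String queueName, String payload) {
        // the vendor-specific API call would go here
        throw new UnsupportedOperationException("wire in the vendor SDK here");
    }
    public String receive(String queueName) {
        throw new UnsupportedOperationException("wire in the vendor SDK here");
    }
}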

But it's not just about mobility between force.com and Google - between two PaaSes. The typical enterprise needs a private cloud as much as a public cloud. So there is a bigger question:

Can you move your application from a private PaaS to a public PaaS and back again?
In other words, even if Google and Force got together and defined a mobility layer, can I then take an app I built and run it internally? Neither Google nor Force is offering a private PaaS.

The second key question is this:
How can I leverage standard Enterprise Architecture in a PaaS?
What I'm getting at here is this: as the world starts to implement PaaS, does it fit with existing models? Force.com and Google App Engine have effectively designed their own world view. VMforce and the recent Spring/Google App Engine announcement address one aspect of that - what Lori calls portability. By using Spring as an application model, there is at least a passing similarity to current programming models in Enterprises. But Enterprise Architectures are not just about Java code: what about an ESB? What about a Business Process engine (BPMS)? What about a standard XACML-based entitlement engine? So far PaaS has generally only addressed the most basic requirements of Enterprise core services: databases and an identity model.

So my contention is this: you need a PaaS that supports the same core services that a modern Enterprise architecture has: ESB, BPMS, Authentication/Authorization, Portal, Data, Cache, etc. And you need a PaaS that works inside your organization as well as in a public Cloud. And if you really don't want any lock-in.... hadn't that PaaS better be Open Source as well? And yes, this is a hint of things coming very soon!




form

WSO2 Stratos - Platform-as-a-Service for private and public cloud

Yesterday we announced something I believe is a game-changer: WSO2 Stratos. What is Stratos?

WSO2 Stratos is a complete SOA and developer platform offered as a self-service, multi-tenant, elastic runtime for private and public cloud infrastructures.
What that means is that our complete SOA platform - now enhanced with Tomcat and Webapp support - is available as a "cloud native" runtime that you can either use on the Web (yes - you can try it out right now), on Amazon VPC, or on your own internal private cloud based on Ubuntu Enterprise Cloud, Eucalyptus and (coming soon) VMware vSphere. It is a complete Platform-as-a-Service for private and public clouds.

I'll be writing more about Stratos over the coming weeks and months, and I'll also provide links and tweets to other Stratos blogs, but in this blog I want to simply answer three questions:

  1. I'm already talking to {VMware, Eucalyptus, Ubuntu, Savvis, Joyent} about private cloud - what does WSO2 add that they don't have?
  2. What is the difference between Stratos and the Cloud Images that WSO2 already ships?
  3. Why would I choose WSO2 over the other vendors offering Platform-as-a-Service?
In order to answer the first question, let's look at the cloud computing space, which is most easily divided up into:
  • Infrastructure-as-a-Service (IaaS): this is where Amazon, Eucalyptus, VMware, Savvis and Joyent play
  • Platform-as-a-Service (PaaS): Google App Engine, VMforce, Tibco Silver and now WSO2 Stratos play in this space.
  • Software-as-a-Service (SaaS): Google Apps, Google Mail, Microsoft Office Live, Salesforce, SugarOnDemand - these and many more make up the SaaS category.
To generalize wildly, most people talking about public cloud today are talking about SaaS. And most people talking about private cloud today are talking about IaaS.

SaaS is fantastic for quick productivity and low cost. WSO2 uses Google Apps, Sugar on Demand and several other SaaS apps. But SaaS doesn't create competitive advantage. Mule also uses Google Apps. They may well use Salesforce. SaaS cannot produce competitive advantage because your competitors get access to exactly the same low-cost services you do. In order to create competitive advantage you need to build as well as buy. For example, we use our Mashup Server together with our Sugar Business Messaging Adapter to provide insight and management of our pipeline that goes beyond what Sugar offers.

IaaS is of course a great basis to build apps. But it's just infrastructure. Yes - you get your VM hosted quicker. But someone has to create a useful VM. And that is where PaaS comes in. PaaS is how to speed up cloud development.

What does Stratos give you on top of an IaaS? It gives you an Application Server, Registry, Identity Server, Portal, ESB, Business Activity Monitor and Mashup Server. And it gives you these as-a-Service: completely self-service, elastically scalable, and granularly metered and monitored. Someone in your team needs an ESB - they can provision one for themselves instantly. And because it's multi-tenant, it costs nothing to run until it gets used. How do you know how it's used? The metering and monitoring tells you exactly how much each tenant uses.

2. What is the difference between Stratos and the existing WSO2 Cloud Images?

The cloud images we started shipping in December are not Cloud Native. Stratos is Cloud Native. In practice, this means that when you log into Stratos (go on, try it now) you can instantly provision your own domain, together with a set of Stratos services. This saves memory: instead of allocating a new VM and a minimum of half a gigabyte of memory to each new server, you get a new ESB at zero extra memory cost. And it's much easier. The new ESB will automatically be governed and monitored. It's automatically elastically clustered.

3. Why would I choose WSO2 over other PaaS vendors?

Firstly, if you look at PaaS as a whole there is a huge divide between Public PaaS and Private PaaS. The public PaaS vendors simply don't offer private options. You can't run force.com or Google App Engine applications internally, even if you want to. WSO2 bridges that gap with a PaaS you can use in the public Web, on a virtual private cloud, or on premises.

The second big differentiator between WSO2 and the existing PaaS offerings is the architecture. Mostly PaaS is a way of building webapps. WSO2 offers a complete enterprise architecture - governance, business process, integration, portal, identity and mashups. And we support the common Enterprise Programming Model (not just Java, WebApp, JAX-WS, but also BPEL, XSLT, XPath, Google Gadgets, WSDL, etc). The only other PaaS that I know of that offers a full Enterprise architecture is Tibco Silver.

The third and most important differentiator is about lock-in. Software vendors love lock-in - and Cloud vendors love it even more. So if you code to Google App Engine, you are tied into Google's identity model, Google's Bigtable, etc. If you code to force.com or VMforce, you are tied to Force.com's infrastructure services. If you code to Tibco Silver, you are tied to Tibco. WSO2 fights this in three ways:
  • No code lock-in: we use standards-based coding (WAR, JAX-WS, POJO) and Stratos is 100% Apache License Open Source. (A minimal sketch of what this looks like follows this list.)
  • No model lock-in: we use standards-based services: 
    • Identity is based on OpenID, OAuth, XACML, WS-Trust
    • Registry is based on AtomPub and REST
    • Business Process is based on BPEL, etc
  • No hosting lock-in: you can take your apps and data from our public PaaS and re-deploy internally or on your own virtual private cloud anytime you like.
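To illustrate the first point, here is a minimal, hypothetical JAX-WS service (illustrative names, not Stratos code): a plain POJO with standard annotations, packaged in an ordinary WAR. Because nothing in it imports a vendor-specific API, the same artifact can be deployed to Stratos, to another JAX-WS-capable application server, or inside your own data centre.

import javax.jws.WebMethod;
import javax.jws.WebService;

// Standard JAX-WS annotations on a POJO - no proprietary imports,
// so the WAR that contains it is not tied to any one host.
@WebService
public class GreetingService {

    @WebMethod
    public String greet(String name) {
        return "Hello, " + name;
    }
}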
I hope you found this a useful introduction to Stratos. If you want more information, contact me at paul@wso2.com, or check out the Stratos website or code.




form

Using OSGi as the core of a middleware platform

Ross Mason of Mulesoft recently blogged: "OSGi - no thanks". Ross is a smart guy and he usually has something interesting to say. In this case, I think Ross has made a lot of good points:

1. Ross is right - OSGi is a great technology for middleware vendors.
2. Ross is right - Developers shouldn't be forced to mess with OSGi.
3. Ross is wrong - You can make both of these work together.

At WSO2 we went through exactly the same issues. We simply came to a different conclusion - that we can provide the benefits of OSGi (modularity, pluggability, dynamic loading) without giving pain to end-users. In WSO2 Carbon, customers can deploy their systems in exactly the same way that worked pre-OSGi.

Why did we choose OSGi? We also looked at building our own dynamic loading schemes. In fact, we've had dynamic classloading capabilities in our platform from day one. The reasons we went with OSGi are:

  • A structured and versioned approach to dynamic classloading
  • An industry standard approach - hence better understood, better skills, better resources
  • It solves more than just dynamic loading: as well as providing versions and dynamic loading, it also really gives proper modularity - which means hiding classes as much as exposing classes (see the sketch after this list).
  • It provides (through Equinox p2) a proper provisioning model.
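As a rough illustration of those points, here is a minimal, hypothetical OSGi bundle (illustrative names, not WSO2 Carbon code): the service interface is the only thing the bundle would export, the implementation stays internal, and the activator registers and unregisters the service as the bundle is started and stopped dynamically. The manifest headers at the end show where the explicit versioning lives.

package example.greeter; // hypothetical names throughout

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// The contract other bundles would import; in a real bundle it would sit
// in an exported package, while the implementation below stays hidden.
interface Greeter {
    String greet(String name);
}

class SimpleGreeter implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

public class GreeterActivator implements BundleActivator {
    private ServiceRegistration<Greeter> registration;

    public void start(BundleContext context) {
        // Publish the service into the OSGi registry when the bundle starts.
        registration = context.registerService(Greeter.class, new SimpleGreeter(), null);
    }

    public void stop(BundleContext context) {
        // Withdraw it cleanly when the bundle is stopped or unloaded.
        registration.unregister();
    }
}

// Corresponding MANIFEST.MF headers (illustrative):
//   Bundle-SymbolicName: example.greeter
//   Bundle-Version: 1.0.0
//   Bundle-Activator: example.greeter.GreeterActivator
//   Export-Package: example.greeter;version="1.0.0"
//   Import-Package: org.osgi.framework;version="[1.6,2)"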
It wasn't easy. We struggled with OSGi to start with, but in the end we have a much stronger solution than if we had built our own, and we have made some great improvements. Our new Carbon Studio tooling gives a simple model for building complete end-to-end applications and hides OSGi completely from the end-user. The web admin consoles and deployment models allow complete deployment with zero OSGi. Drop a JAR in and we take care of the OSGi bundling for you.

The result - the best of both worlds - ease of use for developers and great middleware.




form

Understanding ESB Performance & Benchmarking

ESB performance is a hot (and disputed) topic. In this post I don't want to talk about different vendors or different benchmarks. I'm simply trying to help people understand some of the general aspects of benchmarking ESBs and what to look out for in the results.

The general ESB model is that you have some service consumer, an ESB in the middle and a service provider (target service) that the ESB is calling. To benchmark this, you usually have a load driver client, an ESB, and a dummy service.

+-------------+      +---------+      +---------------+
| Load Driver |------|   ESB   |------| Dummy Service |
+-------------+      +---------+      +---------------+

Firstly, we want the Load Driver (LD), the ESB and the Dummy Service (DS) to be on different hardware. Why? Because we want to understand the ESB performance, not the performance of the DS or LD.

The second thing to be aware of is that the performance results are completely dependent on the hardware, memory, network, etc used. So never compare different results from different hardware.

Now there are three things we could look at:
A) Same LD, same DS, different vendors' ESBs doing the same thing (e.g. content-based routing)
B) Same LD, same DS, different ESB configs for the same ESB, doing different things (e.g. static routing vs content-based routing)
C) Going via ESB compared to going Direct (e.g. LD--->DS without ESB)

Each of these provides useful data but each also needs to be understood.

Metrics
Before looking at the scenarios, let's look at how to measure the performance. The two metrics that are always a starting point in any benchmark of an ESB are throughput (requests/second) and latency (how long each request takes). With latency we can consider overall latency - the time taken for a completed request as observed at the LD - and the ESB latency, which is the time the message spends in the ESB. The ESB latency can be hard to work out. A well-designed ESB will already be sending bytes to the DS before it has finished reading the bytes the LD has sent it. This is called pipelining. Some ESBs attempt to measure the ESB latency inside the ESB using clever calculations. Alternatively, scenario C (comparing via ESB vs direct) can give an idea of the ESB latency.

But before we look at the metrics we need to understand the load driver.

There are two different models to doing Load Driving:
1) Do a realistic load test based on your requirements. For example if you know you want to support up to 50 concurrent clients each making a call every 5 seconds on average, you can simulate this.
2) Saturation! Have a large number of clients, each making a call as soon as the last one finishes.

The first one is aimed at testing what the ESB does before it is fully CPU-loaded. In other words, if you are looking to see the effect of adding an ESB, or comparing one ESB to another under realistic load, then #1 is the right approach. In this approach, looking at throughput may not be useful, because all the different approaches have similar results. If I'm only putting in 300 requests a second on a modern system, I'm likely to see 300 requests a second. Nothing exciting. But the latency is revealing here. If one ESB responds in less time than another ESB, that's a very good sign, because with the same DS the average time per request is very telling.

On the other hand the saturation test is where the throughput is interesting. Before you look at the throughput though, check three things:
1) Is the LD CPU running close to 100%?
2) Is the DS CPU running close to 100%?
3) Is the network bandwidth running close to 100%?

If any of these are true, you aren't doing a good test of the ESB throughput, because if you are looking at throughput then you want the ESB to be the bottleneck. If something else is the bottleneck then the ESB is not providing its maximum throughput and you aren't giving it a fair chance. For this reason, most benchmarks use a very lightweight LD or a clustered LD, and similarly use a DS that is super-fast rather than realistic. Sometimes the DS is coded to do some real work, or to sleep the thread while it executes, to provide a more realistic load test. In this case you probably want to look at latency more than throughput.
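To make the two load-driving models concrete, here is a small, hypothetical load-driver sketch (plain Java using the JDK's built-in HTTP client; the target URL and all parameters are placeholders, not part of any benchmark kit). With THINK_TIME_MILLIS set to 5000 and 50 client threads it approximates the realistic model; with THINK_TIME_MILLIS set to 0 every thread becomes a saturation client. It records per-request latency so you can report percentiles as well as requests per second.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SimpleLoadDriver {

    // Hypothetical parameters - tune them to your own scenario.
    static final int CLIENTS = 50;
    static final int REQUESTS_PER_CLIENT = 200;
    static final long THINK_TIME_MILLIS = 5000;                   // 0 = saturation mode
    static final String TARGET = "http://esb-host:8280/service";  // placeholder URL

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(TARGET)).GET().build();
        List<Long> latencies = Collections.synchronizedList(new ArrayList<>());
        ExecutorService pool = Executors.newFixedThreadPool(CLIENTS);
        long start = System.nanoTime();

        for (int c = 0; c < CLIENTS; c++) {
            pool.submit(() -> {
                for (int i = 0; i < REQUESTS_PER_CLIENT; i++) {
                    try {
                        long t0 = System.nanoTime();
                        http.send(request, HttpResponse.BodyHandlers.ofString());
                        latencies.add((System.nanoTime() - t0) / 1_000_000); // milliseconds
                        if (THINK_TIME_MILLIS > 0) Thread.sleep(THINK_TIME_MILLIS);
                    } catch (Exception e) {
                        // a real driver would count errors separately
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);

        double elapsedSec = (System.nanoTime() - start) / 1_000_000_000.0;
        List<Long> sorted = new ArrayList<>(latencies);
        Collections.sort(sorted);
        System.out.printf("throughput: %.1f req/s%n", latencies.size() / elapsedSec);
        System.out.printf("median latency: %d ms, 99th percentile: %d ms%n",
                sorted.get(sorted.size() / 2),
                sorted.get((int) (sorted.size() * 0.99)));
    }
}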

Finally you are looking to see a particular behaviour for throughput testing as you increase load.
Throughput vs Load
The shape of this graph shows an ideal scenario. As the LD puts more work through the ESB, it responds linearly. At some point the CPU of the ESB hits maximum, and then the throughput stabilizes. What we don't want to see is the line drooping at the far right. That would mean the ESB is crumpling under the extra load and failing to manage it effectively. This is like the office worker whose efficiency increases as you give them more work, until eventually they start spending all their time re-organizing their to-do lists and less work overall gets done.

Under the saturation test you really want to see the CPU of the ESB close to 100% utilised. Why? This is a sign that it is doing as much as possible. Why would it not be 100%? Three reasons: I/O, multi-processing and thread locks. Either the network card, disk or other I/O is holding it up, the code is not efficiently using the available cores, or there are thread contention issues.

Finally, it's worth noting that you expect the latency to increase a lot under the saturation test. A classic result is this: I do static routing for different message sizes with a 100-client LD. For message sizes up to 100k, maybe I see a constant 2ms overhead for using the ESB. Then, as the message size grows from 100k to 200k, I see the overhead growing in proportion to the message size.


Is this such a bad thing? No, in fact this is what you would expect. Before 100k message size, the ESB is underloaded. The straight line up to this point is a great sign that the ESB is pipelining properly. Once the CPU becomes loaded, each request takes longer because it is being made to wait its turn while the ESB deals with the increased load.

A big hint here: when you look at this graph, the most interesting latency numbers occur before the CPU is fully loaded. The latency after the CPU is fully loaded is not that interesting, because it's simply a function of the number of queued requests.

Now that we understand the metrics, let's look at the actual scenarios.

A. Different Vendors, Same Workload
For the first comparison (different vendors), the first thing to be careful of is that the scenario is implemented in the best way possible in each ESB. There are usually a number of ways of implementing the same scenario. For example, the same ESB may offer two (or more!) different HTTP transports: blocking vs non-blocking, servlet vs library, etc. There may be an optimum approach, and it's worth reading the docs and talking to the vendor to understand the performance trade-offs of each approach.

Another thing to be careful of in this scenario is the tuning parameters. Each ESB has various tuning aspects that may affect the performance depending on the available hardware. For example, setting the number of threads and memory based on the number of cores and physical memory may make a big difference.

Once you have your results, assuming everything we've already looked at is tickety-boo, then both latency and throughput are interesting and valid comparisons here. 

B. Different Workloads, Same Vendor
What this is measuring is what it costs you to do different activities with the same ESB. For example, static routing is likely to be faster than content-based routing, which in turn is faster than transformation. The data from this tells you the cost of doing different functions with the ESB. For example, you might want to do a security authentication/authorization check: you should see a constant bump in latency for the security check, irrespective of message size. But if you were doing a complex transformation, you would expect to see higher latency for larger messages, because they take more time to transform.

C. Direct vs ESB
This is an interesting one. Usually this is done for a simple static routing/passthrough scenario. In other words, we are testing the ESB doing its minimum possible. Why bother? Well, there are a couple of reasons. Firstly, ESB vendors usually do this for their own benefit as a baseline test: once you understand the passthrough performance you can then see the cost of doing more work (e.g. logging a header, validating security, transforming the message).

Remember the two testing methodologies (realistic load vs saturation)? You will see very different results in each for this, and the data may seem surprising. For the realistic test, remember we want to look at latency. This is a good comparison for the ESB: how much extra time is spent going through the ESB per request under normal conditions? For example, if the average request to the backend takes 18ms and the average request via the ESB takes 19ms, we have an average ESB latency of 1ms. This is a good result - the client is not going to notice much difference - only around 5% extra.

The saturation test here is a good test to compare different ESBs. For example, suppose I can get 5000 reqs/sec direct. If via ESB_A the number is 3000 reqs/sec and via ESB_B the number is 2000 reqs/sec, I can say that ESB_A is providing better throughput than ESB_B.

What is not a good metric here is comparing throughput in saturation mode for direct vs ESB.


Why not? The reason here is a little complex to explain. Remember how we coded the DS to be as fast as possible so as not to be a bottleneck? So what is the DS doing? It's really just reading bytes and sending bytes as fast as it can. Assuming the DS code is written efficiently using something really fast (e.g. just a servlet), what this is testing is how fast the hardware (CPU plus network card) can read and write through user space in the operating system. On a modern server box you might get a very high number of transactions per second. Maybe 5,000 req/s, with each message in and out being 1k in size.

So we have 1k in and 1k out = 2k of IO per request.
2k of IO x 5,000 reqs/sec x 8 bits per byte gives us a total network bandwidth of 80Mbit/s (excluding Ethernet headers and overhead).

Now let's look at the ESB. Imagine it can handle 100% of the direct load, so there is no slowdown in throughput for the ESB. For each request it has to read the message in from the LD and send it out to the DS. Even if it's doing this in pipelining mode, there is still a CPU cost and an IO cost. So the ESB latency may be only 1ms, but the CPU and IO cost is much higher. And for each response it also has to read it in from the DS and write it out to the LD. So if the DS is doing 80Mbit/s, the ESB must be doing 160Mbit/s.
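The arithmetic is easy to sanity-check. This little sketch (illustrative only, using the same figures as above and treating 1k as 1,000 bytes) shows why the pass-through ESB has to move twice the bits the dummy service does:

// Why a pass-through ESB does double the IO of the dummy service.
public class BandwidthCheck {
    public static void main(String[] args) {
        double reqsPerSec   = 5000;
        double messageBytes = 1000;                           // 1k request, 1k response
        double dsBits  = reqsPerSec * 2 * messageBytes * 8;   // DS: read request + write response
        double esbBits = reqsPerSec * 4 * messageBytes * 8;   // ESB: both sides of both messages
        System.out.printf("Dummy service: %.0f Mbit/s%n", dsBits / 1_000_000);   // 80 Mbit/s
        System.out.printf("ESB:           %.0f Mbit/s%n", esbBits / 1_000_000);  // 160 Mbit/s
    }
}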


Now if the LD is good enough, it will have loaded the DS to the max. CPU or IO capacity or both will be maxed out. Suppose the ESB is running on the same hardware platform as the DS. If the DS machine can do 80Mbit/s flat out, there is no way that the same hardware running as an ESB can do 160Mbit/s! In fact, if the ESB and DS code are both as efficient as possible, then the throughput via the ESB will always be 50% of the throughput direct to the DS. Now there is a possible way for the ESB to do better: it can be better coded than the DS. For example, if the ESB did transfers in kernel space instead of user space then it might make a difference. The real answer here is to look at the latency. What is the overhead of adding the ESB to each request? If the ESB latency is small, then we can solve this problem by clustering the ESB. In this case we would put two ESBs in and then get back to full throughput.

The real point of this discussion is that this is not a useful comparison. In reality, backend target services are usually pretty slow. If the same dual-core server is actually doing some real work - e.g. database lookups, calculations, business logic - then it's much more likely to be doing 500 requests a second or even less.

The following chart shows real data to demonstrate this. The X-Axis shows increasing complexity of work at the backend (DS). As the effort taken by the backend becomes more realistic, the loss in throughput of having an ESB in the way reduces. So with a blindingly fast backend, we see the ESB struggling to provide just 55% of the throughput of the direct case. But as the backend becomes more realistic, we see much better numbers. So at 2000 requests a second there is barely a difference (around 10% reduction in throughput). 


In real life, what we actually see is that often you have many fewer ESBs than backend servers. For example, if we took the scenario of a backend server that can handle 500 reqs/sec, then we might end up with a cluster of two ESBs handling a cluster of 8 backends. 

Conclusion
I hope this blog has given a good overview of ESB performance and benchmarking - in particular, when it is a good idea to look at latency and when to use throughput.





form

Wow! A scientific literature search engine. An endless source of information.

SciVerse


form

Translating notary terms 2: What are public-form and private-form notarial acts?

A public-form notarial act is a document drafted by a notary that contains the entire notarial act. It is narrated from the notary’s perspective and includes all the details and circumstances of the act. All Spanish notarial acts are in public form (documentos elevados a público). In England and Wales, notarial acts are usually in […]




form

Translating notary terms 3: How to translate the names of Spanish public-form notarial acts into English

This post looks at how to translate the names of the two* main types of public-form Spanish notarial acts, escrituras públicas and actas notariales. It also identifies handy language to use in translations of them. Escritura pública An escritura pública records an act executed before a notary. How you translate the name of an escritura […]




form

Funded pensions: Fondapol's proposals for reforming the retirement system

According to economist Bertrand Martinot, a dose of capital funding should be introduced, for reasons of both fairness between generations and economic efficiency.




form

Information Consolidation in Large Bodies of Information

Thanks to information technology, the problem we face today is not a lack of information but too much information. This becomes very clear when we consider two figures that are often quoted: knowledge is doubling in many fields (biology, medicine, computer science, ...) within some 6 years, yet information is doubling every 8 months! This implies that the same piece of information/knowledge is published a large number of times with small variations.

Just look at an arbitrary news item: if it is considered to be of some general interest, reports of it will appear in all major newspapers, journals, electronic media, and so on. The same problem arises with information portals that tie together a number of large databases.

It is our contention that we need methods to reduce the huge set of information concerning a particular topic to a number of pieces of information (let us call each such piece an "essay" in what follows) that present a good cross-section of potential points of view. We will explain why one essay is usually not enough, yet the problem of reducing a huge number of contributions to a digestible number of essays is formidable, indeed science fiction at the moment. We will argue in this paper that it is one of the important tasks of computer science to start tackling this problem, and we will show that in some special cases partial solutions are possible.




form

An Empirical Study on Human and Information Technology Aspects in Collaborative Enterprise Networks

Small and Medium Enterprises (SMEs) face new challenges in the global market as customers require more complete and flexible solutions and continue to drastically reduce the number of suppliers. SMEs are trying to address these challenges through cooperation within collaborative enterprise networks (CENs). Human aspects constitute a fundamental issue in these networks as people, as opposed to organizations or Information Technology (IT) systems, cooperate. Since there is a lack of empirical studies on the role of human factors in IT-supported collaborative enterprise networks, this paper addresses the major human aspects encountered in this type of organization. These human aspects include trust issues, knowledge and know-how sharing, coordination and planning activities, and communication and mutual understanding, as well as their influence on the business processes of CENs supported by IT tools. This paper empirically proves that these aspects constitute key factors for the success or the failure of CENs. Two case studies performed on two different CENs in Switzerland are presented and the roles of human factors are identified with respect to the IT support systems. Results show that specific human factors, namely trust as well as communication and mutual understanding, have to be well addressed in order to design and develop adequate software solutions for CENs.




form

Coordinated System for Real Time Muscle Deformation during Locomotion

This paper presents a system that simulates, in real time, the volumetric deformation of muscles during human locomotion. We propose a two-layered motion model. The requirements of realism and real-time computation lead to a hybrid locomotion system that uses a skeleton as the first layer. The muscles, represented by an anatomical surface model, constitute the second layer, whose deformations are simulated with a finite element method (FEM). The FEM subsystem is fed with the torques and forces obtained from the locomotion system, through a line-of-action model, and takes into account the geometry and material properties of the muscles. High-level parameters (such as height, weight, physical constitution, step frequency, step length or speed) make it possible to customize the individual and the locomotion, and therefore the deformation of that person's muscles.




form

Context-Aware Composition and Adaptation based on Model Transformation

Using pre-existing software components (COTS) to develop software systems requires the composition and adaptation of the component interfaces to solve mismatch problems. These mismatches may appear at different interoperability levels (signature, behavioural, quality of service and semantic). In this article, we define an approach which supports composition and adaptation of software components based on model transformation, taking all four levels into account. The signature and behavioural levels are addressed by means of transition systems. Context-awareness and semantic-based techniques are used to tackle the quality-of-service and semantic levels, respectively, though both also consider the signature level. We have implemented and validated our proposal for the design and application of realistic and complex systems. Here, we illustrate the need to support the variability of the adaptation process in a context-aware pervasive system through a real-world case study, where software components are implemented using Windows Workflow Foundation (WF). We apply our model transformation process to extract transition systems (CA-STS specifications) from WF components. These CA-STSs are used to tackle the composition and adaptation. Then, we generate a CA-STS adaptor specification, which is transformed into its corresponding WF adaptor component with the purpose of interacting with all the WF components of the system, thereby avoiding mismatch problems.




form

A Framework to Evaluate Interface Suitability for a Given Scenario of Textual Information Retrieval

Visualization of search results is an essential step in the textual Information Retrieval (IR) process. Indeed, Information Retrieval Interfaces (IRIs) are used as a link between users and IR systems, a simple example being the ranked list proposed by common search engines. Given the importance of visualizing search results, many interfaces (textual, 2D or 3D IRIs) have been proposed in the last decade. Two kinds of evaluation methods have been developed: (1) various evaluation methods for these interfaces were proposed, aiming at validating ergonomic and cognitive aspects; (2) various evaluation methods were applied to information retrieval systems (IRSs), aiming at measuring their effectiveness. However, as far as we know, these two kinds of evaluation methods are disjoint. Indeed, considering a given IRI associated with a given IRS, what happens if we associate this IRI with another IRS that does not have the same effectiveness? In this context, we propose an IRI evaluation framework aimed at evaluating the suitability of any IRI for different IR scenarios. First of all, we define the notion of an IR scenario as a combination of features related to users, IR tasks and IR systems. We have implemented the framework through a specific evaluation platform that enables IRI evaluations to be performed and helps end-users (e.g. IRS developers or IRI designers) choose the most suitable IRI for a specific IR scenario.




form

An Ontology based Agent Generation for Information Retrieval on Cloud Environment

Retrieving information or discovering knowledge from a well-organized data center generally requires familiarity with its schema, structure, and architecture, which runs against the inherent concept and characteristics of a cloud environment. An effective approach for retrieving desired information or extracting useful knowledge is therefore an important issue in the emerging information/knowledge cloud. In this paper, we propose an ontology-based agent generation framework for retrieving information in a flexible, transparent, and easy way on a cloud environment. When a user submits a flat-text request for retrieving information on a cloud environment, the request is automatically deduced by a Reasoning Agent (RA) based on a predefined ontology and reasoning rules, and then translated into a Mobile Information Retrieving Agent Description File (MIRADF) that is formatted in a proposed Mobile Agent Description Language (MADF). A generating agent, named MIRA-GA, is also implemented to generate a MIRA according to the MIRADF. We also design and implement a prototype that integrates these agents and present an interesting example to demonstrate the feasibility of the architecture.




form

New forms of fraud on the Net

Having been on the Net for a good two decades, I have watched the scams aimed at ordinary web users like us evolve. In the early days there was the infamous phishing, carried out through crudely written emails bearing the letterhead of one provider or another...