
Understanding the (Sri Lankan) IT Industry

In the last 3+ weeks there's been war raging in the IT Crowd in Sri Lanka about the proposed CEPA/ETCA thing: Basically the part of a free trade agreement with India which might allow Indians in the IT and ship building industries to work freely in Sri Lanka. I know nothing about building ships so I don't have any opinion about whether the proposal addresses a real problem or not. I do know a thing or two about "IT" and am most certainly opinionated about it :-).

I also know little real info about CEPA/ETCA because the government has chosen to keep the draft agreement secret. Never a good thing.

There have been various statements made by various pundits, politicians, random Joes (Jagath's I guess in Sinhalese ;-)) and all sorts of people about how the Sri Lankan IT crowd is
  • Scared to their wits that their jobs will be taken by Indians
  • Looking for the state to give them protection from global competition
  • Unable to compete with the world's IT industry without help from Indians
  • Unpatriotic because a lot of them leave the country after getting quality free education
  • Living in a bubble because some of them get paid Rs. 150k/month straight out of university
  • Etc. etc..
I will address a lot of these in subsequent blogs (hopefully .. every time I plan to blog a lot that plan gets bogged down).

The purpose of this blog is to try to educate the wider community about the mythical thing called the (Sri Lankan) "IT industry". For each area I will also briefly touch upon the possible Indian relationship. Of course this is all my opinion and others in the industry (especially in the specific areas that I touch upon) may vehemently disagree with my opinion. Caveat emptor. YMMV.

So here goes an attempt at a simple taxonomy:
  • Hardware Resellers/Vendors
  • Hardware Manufacturers
  • Software Resellers/Vendors
  • Software Manufacturers
  • System Integrators - Local Market Focused
  • System Integrators - Outsourcers
  • Enterprise Internal IT Teams
  • IT Enabled Services (ITES) and Business Process Outsourcers (BPO)
  • Universities
  • IT Training Institutes
This became way more of a treatise than I intended. I'm sure it's full of things that people will disagree with. I'll try to update it based on feedback and note changes here.

Hardware Resellers/Vendors

IBM Sri Lanka has been in Sri Lanka for more than 40 years I think. I imagine they came when Central Bank or some big organization bought an IBM mainframe. I remember seeing Data General, WANG, and a host of other now-dead names growing up (70s and 80s). 

These guys basically import equipment from wherever, sell it to local customers and provide on-going support and maintenance. 

Some of these players don't sell entire computers or systems but rather parts - visit Unity Plaza to see a plethora of them.

Not too many Indian hardware brands being sold in Sri Lanka AFAIK but probably MicroMax (the phone) is an exception. So having the Indian IT Crowd here really has no impact on this segment.

Hardware Manufacturers

These are people who make some kind of "IT thing" and sell it locally or export it. When it comes to technology no one makes all of anything any more - even an iPhone consists of parts from several countries and is finally assembled in China. Same with any computer you buy or any phone you buy.

There are a few people here who "make" (aka put together / assemble) computers and sell under their own brand. There are also a few who export them (I believe).

There are also some others who make specific hardware devices that target specific solutions - the best example is the company that makes various PoS-type systems that are sold under the Motorola brand.

Fundamentally not many hardware manufacturers in Sri Lanka yet AFAIK. In any case, they're not likely to be affected by Indians being in Sri Lanka as this is a very specialized market and it's unlikely the specialized skill will migrate to Sri Lanka given that skill base has excellent opportunities anywhere. If at all, electronics-related graduates in Sri Lanka do not have enough good career opportunities yet as we don't have many companies building things yet.

Software Resellers/Vendors

Take Microsoft Sri Lanka or the 100s of other agents of global software brands that sell their wares in Sri Lanka. These guys get a cut out of the sale in some fashion.

Yes of course some of them sell (very good) Indian software. For example, a bunch of banks use Infosys' Finacle core banking system.

Software, used well, can increase any organization's productivity (after all, software is eating the world and all that). If there are Indian companies which have technology that can be used to improve LK orgs' productivity - by all means do come and sell it here! That may even require Indian engineers to come and install / customize them - no problem at all.

So, this segment will simply welcome more Indian presence in terms of companies. In terms of the Indian IT Crowd coming here for this segment - I guess experienced sales people and solutions engineers to help sell and deploy the Indian products are always welcome. To be successful the company will need to send good people (good luck selling software if the sales engineer sucks) - and good people are welcome anywhere.

I should mention the global SaaS software products (e.g. Salesforce, Netsuite, Google Apps, Office 365 etc.). Most of those don't have regional sales teams etc. - you just go to the website and sign up and use it. However, they will often have local system integrators who know how to help deploy, tune, customize and integrate those systems to whatever enterprise systems are already in place.

Software Manufacturers

These guys make some kind of software product and sell it to whoever will buy it. More and more are selling them online as SaaS offerings only.

Competing in the software product market means you just need to build a better product or at least have a good enough product that's cheap. To create great products you need great people who think and innovate faster and better than anyone else out in the world. More and more, pretty much every product competes globally as even the smallest customer can simply use globally available SaaS offerings (some made in Sri Lanka even).

Every idea someone has for a product in Sri Lanka is guaranteed to have also been conceived by at least multiple Indians. And multiple Americans. And multiple Europeans. Etc. etc..

"Ideas are cheap. Execution is not." - Mano Sekaram at a talk he gave at the WSO2 Hackathon a few years ago.

To make products and get them to market is not easy. Will having some Indian employees help? SURE - if they're awesome people. The 2m people who applied for a clerical job really wouldn't help. Will marketing experience help? Of course - but again high quality product marketing experience is hard to come by in Sri Lanka, in India and even in California (speaking from personal experience). 

Despite idiotic politician statements about how advanced the Indian IT industry is, they are much more a global outsourcer and BPO operator than a product development country. That's changing rapidly but the numbers in the product side of the equation are much lower than the other side. In fact, I'd venture to say that as a %ge there are more product companies in Sri Lanka's IT ecosystem than in India's. In any case, the word "advanced" is very hard to quantify in the software world.

So sure, let anyone come - but good luck getting too many jobs in product companies that have no patience or interest with mediocre people. You need a few superb people to build a great product and fewer great people to market and sell it. If you're a super engineer or a marketer in India, there are tons of opportunities for you in India already, so the only way you'll come is if we offer a better total package: Check out WorkInSriLanka. I hope you come and stay and never leave! 

For WSO2, we're a BoI company. If we find a high quality person from ANYWHERE who wants to work in Sri Lanka we can bring them over. Piece of cake really - visa wise. We will NOT pay higher salaries for foreign people though - something that I know many do and something I soooooo detest. Sri Lanka seems to love reverse discrimination.

System Integrators - Local Market Focused

These companies take software and hardware from whoever and produce solutions for customers. These are systems that solve a particular business problem for some organization. For example, the vehicle registration system at the Department of Motor Vehicles.

The work these guys do involves working with the customer to understand the problem domain, figure out a good solution architecture, figure out which technology to apply and then build the full solution. All very important stuff!

Who works in these places? Typically a combination of business analysts, architects, engineers of all kinds (software, QA, UI etc. etc.), project managers and so on.

Sri Lankan enterprises are quite slow to adopt software technology. This (IMO) is primarily because labor costs are low and because customer expectations are still not high, meaning competition is not as intense as it is in, say, the US. That will change and we will need a LOT more people to integrate and build solutions for local companies. Can we meet the demand with local skill - my guess is yes. If we need a few more, the integrator companies can easily import people too.

There is one segment of this market that is special however. Small enterprises are also picking up low end solutions. These are often implemented by the owner's daughter/son or niece/nephew type person. Basically some trusted computer-geeky relative who "automates" the place in some form. That used to be with an Access database + VB type thing .. not sure what is in play today in that space.

That market is critical to help develop the local IT Crowd as it gives business (aka employment) to many many relatively low skilled yet value-adding people. The people working in these places don't need 4 year CS degrees. They're simply people with a bit of knowledge (acquired from a tutory type place) and a good knack for computing. It's critical to support and protect this community because they deliver technology to the wider mom&pop / small kade business community.

I think a bunch of lower cost people from India working in Sri Lanka in this market could be a negative thing as it could threaten employment for low end IT workers. However, many of these deals are struck based on trust and relationships so it'll be really hard for anyone to break in.

System Integrators - Outsourcers

These guys take work from a foreign country (typically a more wealthy country but could be one that simply has a dearth of technical capacity) and bring it here to do the work. Virtusa is of course the largest (~3000 or so people AFAIK) but there are TONS of smaller players employing a few 10s of people and a few dozen or so in the 100s range I think.

The smaller ones always start with a single contract the owner managed to get from his/her work in the foreign country or thru a friend/relative outside. Do one task well at 1/5th to 1/3rd the price in the US and you can clearly keep getting more business. Capitalism at work.

The bigger of these companies are great places to work for the best of the best. They may give opportunities to learn a ton of stuff, travel, develop soft skills etc. etc..  Lots of passionate employees who will not move easily.

The middle sized ones (> 25, < too many 100s) are usually great companies. They pay people well, they provide a quality work environment, they have passionate employees and often specialize in one or few areas (e.g. Alfresco or Mobile apps or whatever) and therefore command a higher charge out rate. 

The small companies (<= 25) tend to be more sweat-shop like from what I've seen - pay the people as little as possible and use crazy micro project management to deliver. No passionate employees typically. It's just a job that gives a paycheck for people who are relatively low skilled (and low on initiative too).

Virtusa has offices in India too with like 7000 people I think. If they want to hire Indians they can hire them there. If they want to bring people down here they can do it and undoubtedly do it already. (You need to go thru the Board of Investment but it's trivially easy. FAR FAR FAR easier than hiring a foreigner in the US .. or I imagine India.)

Does this part of the IT Crowd get affected by possible mass migration of the Indian IT Crowd to Sri Lanka? Not for the Virtusa's of the world IMO. However, for the smaller players, the small company CEOs who are milking money off the small outsourcing contracts, yes getting cheaper invisible people will be better for them. That could indeed mean a reduction in employment opportunities for the lower end of the technical community who work in these places as there indeed will be Indians willing to work for less (see Two million apply for 300 clerical jobs and 80% of Indian Engineering Graduates are Unemployable as recent examples).

It would be great to have multiple Virtusas in Sri Lanka. In 2009, Mphasis (apparently India's 7th largest service provider then) started operations in Sri Lanka with intent to hire 2000 but AFAIK have packed up and gone or are nowhere as big. I'm sure someone who knows will reply and I'll add a note.

Would Infosys or TCS or whatever open up here if they have to bring people from India to Sri Lanka? I can't see why .. then why not just execute that in India itself. What am I missing in that equation?

So I cannot see the larger players affected by this. The smaller players (and by that I mean the really small ones .. < 25 people) will probably benefit by getting cheaper workers. Will we see tons of iOS developers in LK with this? No, because they're a scarce commodity anywhere. Period. For the middle sized guys (> 25, < too many 100s) certainly getting more senior, experienced people from India will be a good thing. However, I see that as no different from attracting any national to come to Sri Lanka to work. I ABSOLUTELY want that - that's why I helped form WorkInSriLanka and am still part of it. 

High end people (of ANY origin) moving to Sri Lanka is critical for our future .. we need to become a net brain importer and not an exporter. However, they will come only if (a) you pay them properly and (b) if the quality of life is really good. These are things that WorkInSriLanka is addressing / informing about.

Enterprise Internal IT Teams

This is literally the IT Crowd in the companies. (Haven't seen the awesomely funny British comedy? Check it out.)

Well actually often they do much much more than that crowd. The IT Crowd guys are only IT operations - they keep computers running, keep networks running etc.. That's absolutely critical. But now more and more companies are using information as a key business strategy. What that means is that internal IT is becoming more and more important. Companies cannot afford to buy prepackaged solutions nor simply outsource to others - they need to innovate inside the company to create real business value for themselves in a way that differentiates them from their competitors.

Not easy stuff.

You need really good people. Not 100s, but a good number of really really good people and a bigger number of good people. You also need a visionary to be the CIO/CTO to drive that effort. Not at all easy.

Sri Lanka is still in transition to that. Some big companies are doing it really well, but there's a massive dearth of really innovative CIOs in Sri Lanka yet. We're developing them as they move up the ranks but IT was kept away from the business and that needs to change for this to work. 

Is it possible to import talent for this from India? Of course! However, they are not cheap as those people have 1000x more work in India than here! What will happen to less skilled people who might come to this space? Good luck getting a job.

For smaller companies, they don't have enterprise IT. Instead they have the IT guy - the jack-of-all-trades who does everything from helping with Powerpoint to debugging why he can't get to FB to cleaning up after he stupidly clicked on yet another get-rich-quick email. Those guys don't have (and don't need) CS degrees or IT/IS degrees. They need some training and a lot of experience. They also get paid very little (think 25-50k/month).

Those guys could get crunched if we allow hundreds of such people to come from India. That would be just stupid.

IT Enabled Services (ITES) and Business Process Outsourcers (BPO)

This is where the numbers are. Order a pizza in Texas? An Indian will answer. Call Delta airlines with an issue? A Filipino will answer. Call HSBC about an issue? A Sri Lankan will answer.

These started off as call centers but more and more they take an entire process (e.g. claims processing for medical claims) and run the entire process in a lower cost location. All you need is a good network connection and a lot of (young) people who will work for a little amount and work odd hours and be happy with it. Sri Lanka also claims to be the largest producer of UK-qualified accountants after the UK .. and so a lot of financial process outsourcing happens here too.

There are also high end parts of this market - research outsourcing, analytics outsourcing etc.. Great. Do more.

Sri Lanka produces 300-400 THOUSAND 18-year-olds each year. Only like 25,000 get to a university of some kind (they are the ones who have a chance at a higher value job). The rest need work.

This low end kind of ITES/BPO work is great .. it gets them a salary and if the country keeps devaluing the LKR they even get salary raises every year! Keeping people employed prevents them from wanting to join revolutions.

Some BPOs claim that they couldn't scale enough in LK because they can't find the large number of passionate, English capable young people. Probably true. 

MAYBE it's possible to import them from India, but presumably only those that couldn't get jobs in the myriad of Indian BPOs. However, how that helps provide employment to the droves of young people who need work in Sri Lanka I do not know.

Universities

These guys of course produce the IT guys. We have state universities, private universities that grant their own degrees and a plethora of private ones that provide a learning environment to get a foreign university degree.

As with anything the quality varies. The top govt engineering / science universities and the top private ones produce AWESOME graduates who are absolutely as good as the best in any country (India, US included). WSO2 is lucky that a bunch of these guys join us :-). 

But my focus here is on the teachers. We need more PhDs to teach in our universities - ask the Jaffna Univ CS dept for example. Will Indian PhDs (good ones) come and teach there? Great if they want to! The salary is pretty poor but it's what it is. Even private universities will happily hire teachers.

We also need top research focused scientists to come here so we can improve our research capacity. I don't think opening employment to Indians will make a single IIT professor come :(. Even right now, they can come (the visa is easy) - so please, if you want to come and teach in Sri Lanka reach out thru WorkInSriLanka and we'll help you! And don't ever leave.

India has absolutely fantastic universities. If they want to come and set up shop in LK and offer education to our people - great! India also has a LOT of crappy universities (see the article about unemployable graduates) - we certainly don't need them here.

IT Training Institutes

These are the literally hundreds (and maybe even thousands) of places that offer this course or that course on this or that. 90% of them in my opinion are crap. There's too little quality control. People are getting swindled daily by these jackasses who teach their children next to nothing and yet charge a ton of money. Even some local governments are in on it - I know in Dehiwala (my area) they run a program where literally 100s of people come for IT education. Each pays like Rs. 3000/month. Poor parents can't say no so they do it somehow.

Do we need more of these? Yes, IF THEY ARE GOOD. We need to get our house in order, put regulations in to quality control these places and then of course it's great if more teachers come and teach more.

India has absolutely fantastic training institutes. Would be great to get them to open shop here.

India also UNDOUBTEDLY has at least 10x more crappy places than we do. Most certainly we don't need them here - we already have enough people robbing money from poor parents who desperately want to educate their children in "IT".

(p.s.: Blogger.com has the world's WORST editor. I'm bailing to medium.com soon.)





Understanding ESB Performance & Benchmarking

ESB performance is a hot (and disputed) topic. In this post I don't want to talk about different vendors or different benchmarks. I'm simply trying to help people understand some of the general aspects of benchmarking ESBs and what to look out for in the results.

The general ESB model is that you have some service consumer, an ESB in the middle and a service provider (target service) that the ESB is calling. To benchmark this, you usually have a load driver client, an ESB, and a dummy service.

+-------------+      +---------+      +---------------+
| Load Driver |------|   ESB   |------| Dummy Service |
+-------------+      +---------+      +---------------+

Firstly, we want the Load Driver (LD), the ESB and the Dummy Service (DS) to be on different hardware. Why? Because we want to understand the ESB performance, not the performance of the DS or LD.

The second thing to be aware of is that the performance results are completely dependent on the hardware, memory, network, etc. used. So never compare results obtained on different hardware.

Now there are three things we could look at:
A) Same LD, same DS, different vendors ESBs doing the same thing (e.g. content-based routing)
B) Same LD, same DS, different ESB configs for the same ESB, doing different things (e.g. static routing vs content-based routing)
C) Going via ESB compared to going Direct (e.g. LD--->DS without ESB)

Each of these provides useful data but each also needs to be understood.

Metrics
Before looking at the scenarios, let's look at how to measure the performance. The two metrics that are always a starting point in any benchmark of an ESB are the throughput (requests/second) and the latency (how long each request takes). With latency we can consider the overall latency - the time taken for a completed request observed at the LD - and the ESB latency, which is the time taken by the message in the ESB. The ESB latency can be hard to work out. A well designed ESB will already be sending bytes to the DS before it's finished reading the bytes the LD has sent it. This is called pipelining. Some ESBs attempt to measure the ESB latency inside the ESB using clever calculations. Alternatively scenario C (comparing via-ESB vs Direct) can give an idea of the ESB latency.
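As a rough sketch, the scenario C approach boils down to subtracting the direct-path latency from the via-ESB latency. The numbers below are made up purely for illustration:

```python
# Rough ESB-latency estimate from scenario C (via-ESB minus direct).
# All sample numbers are invented for illustration.

def mean(samples):
    return sum(samples) / len(samples)

direct_ms = [18.0, 17.9, 18.1]    # request latencies hitting the DS directly
via_esb_ms = [19.0, 18.9, 19.1]   # same requests routed through the ESB

esb_latency_ms = mean(via_esb_ms) - mean(direct_ms)
print(round(esb_latency_ms, 2))   # ~1.0 ms of overhead attributable to the ESB
```

In practice you would use many thousands of samples and look at percentiles as well as the mean, but the principle is the same.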

But before we look at the metrics we need to understand the load driver.

There are two different models to doing Load Driving:
1) Do a realistic load test based on your requirements. For example if you know you want to support up to 50 concurrent clients each making a call every 5 seconds on average, you can simulate this.
2) Saturation! Have a large number of clients, each making a call as soon as the last one finishes.

The first one is aimed at testing what the ESB does before it's fully CPU loaded. In other words, if you are looking to see the effect of adding an ESB, or the comparison of one ESB to another under realistic load, then #1 is the right approach. In this approach, looking at throughput may not be useful, because all the different approaches have similar results. If I'm only putting in 300 requests a sec on a modern system, I'm likely to see 300 requests a sec. Nothing exciting. But the latency is revealing here. If one ESB responds in less time than another ESB that's a very good sign, because with the same DS the average time per request is very telling.

On the other hand the saturation test is where the throughput is interesting. Before you look at the throughput though, check three things:
1) Is the LD CPU running close to 100%?
2) Is the DS CPU running close to 100%?
3) Is the network bandwidth running close to 100%?

If any of these are true, you aren't doing a good test of the ESB throughput, because if you are looking at throughput you want the ESB to be the bottleneck. If something else is the bottleneck then the ESB is not providing its max throughput and you aren't giving it a fair chance. For this reason, most benchmarks use a very very lightweight LD or a clustered LD, and similarly use a DS that is superfast and not a realistic DS. Sometimes the DS is coded to do some real work or sleep the thread while it's executing to provide a more realistic load test. In this case you probably want to look at latency more than throughput.
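The three checks above can be folded into a tiny helper. The 0.9 cutoff is my own arbitrary choice for illustration, not a standard threshold:

```python
# Sanity check before trusting a saturation-test throughput number.
# The 0.9 threshold is an arbitrary illustrative cutoff.
def esb_is_the_bottleneck(ld_cpu, ds_cpu, net_util):
    """Each argument is a utilisation fraction in [0, 1].

    Returns True only if neither the LD, the DS, nor the network
    is near saturation - i.e. the ESB is the limiting factor."""
    return all(u < 0.9 for u in (ld_cpu, ds_cpu, net_util))

print(esb_is_the_bottleneck(0.45, 0.60, 0.30))  # True: throughput is meaningful
print(esb_is_the_bottleneck(0.98, 0.60, 0.30))  # False: the LD is saturated
```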

Finally you are looking to see a particular behaviour for throughput testing as you increase load.
Throughput vs Load
The shape of this graph shows an ideal scenario. As the LD puts more work through the ESB it responds linearly. At some point the CPU of the ESB hits maximum, and then the throughput stabilizes. What we don't want to see is the line drooping at the far right. That would mean that the ESB is crumpling under the extra load, and it's failing to manage the extra load effectively. This is like the office worker whose efficiency increases as you give them more work but eventually they start spending all their time re-organizing their todo lists and less work overall gets done.

Under the saturation test you really want to see the CPU of the ESB close to 100% utilised. Why? This is a sign that it's doing as much as possible. Why would it not be 100%? Three reasons: I/O, multi-processing and thread locks: either the network card or disk or other I/O is holding it up, the code is not efficiently using the available cores, or there are thread contention issues.

Finally it's worth noting that you expect the latency to increase a lot under the saturation test. A classic result is this: I do static routing for different size messages with a 100-client LD. For message sizes up to 100k maybe I see a constant 2ms overhead for using the ESB. Suddenly as the message size grows from 100k to 200k I see the overhead growing in proportion to the message size.


Is this such a bad thing? No, in fact this is what you would expect. Before 100k message size, the ESB is underloaded. The straight line up to this point is a great sign that the ESB is pipelining properly. Once the CPU becomes loaded, each request takes longer because it's being made to wait its turn at the ESB while the ESB deals with the increased load.

A big hint here: When you look at this graph, the most interesting latency numbers occur before the CPU is fully loaded. The latency after the CPU is fully loaded is not that interesting, because it's simply a function of the number of queued requests.
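One way to see this is Little's law: concurrency = throughput x latency. Once the ESB is saturated, the latency seen by a saturation client is pinned by the queue of waiting requests, not by the ESB's per-request cost. A back-of-envelope check with made-up numbers:

```python
# Little's law: concurrency = throughput x latency.
# At saturation, latency is dictated by the queue, not by per-request cost.
clients = 100          # concurrent saturation clients
throughput = 5000      # reqs/sec the ESB sustains at 100% CPU

latency_ms = clients / throughput * 1000
print(latency_ms)      # 20.0 ms average, however fast each request actually is
```

Halve the per-request cost and, at saturation with the same client count, the average latency barely tells you anything new - which is exactly why the pre-saturation region of the graph is the interesting one.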

Now that we understand the metrics, let's look at the actual scenarios.

A. Different Vendors, Same Workload
For the first comparison (different vendors) the first thing to be careful of is that the scenario is implemented in the best way possible in each ESB. There are usually a number of ways of implementing the same scenario. For example the same ESB may offer two different HTTP transports (or more!): blocking vs non-blocking, servlet vs library, etc. There may be an optimum approach and it's worth reading the docs and talking to the vendor to understand the performance tradeoffs of each approach.

Another thing to be careful of in this scenario is the tuning parameters. Each ESB has various tuning aspects that may affect the performance depending on the available hardware. For example, setting the number of threads and memory based on the number of cores and physical memory may make a big difference.

Once you have your results, assuming everything we've already looked at is tickety-boo, then both latency and throughput are interesting and valid comparisons here. 

B. Different Workloads, Same Vendor
What this is measuring is what it costs you to do different activities with the same ESB. For example, doing a static routing is likely to be faster than a content-based routing, which in turn is faster than a transformation. The data from this tells you the cost of doing different functions with the ESB. For example you might want to do a security authentication/authorization check. You should see a constant bump in latency for the security check, irrespective of message size. But if you were doing complex transformation, you would expect to see higher latency for larger messages, because they take more time to transform. 

C. Direct vs ESB
This is an interesting one. Usually this is done for a simple static routing/passthrough scenario. In other words, we are testing the ESB doing its minimum possible. Why bother? Well there are two different reasons. Firstly ESB vendors usually do this for their own benefit as a baseline test. In other words, once you understand the passthrough performance you can then see the cost of doing more work (e.g. logging a header, validating security, transforming the message). 

Remember the two testing methodologies (realistic load vs saturation)? You will see very very different results in each for this, and the data may seem surprising. For the realistic test, remember we want to look at latency. This is a good comparison for the ESB. How much extra time is spent going through the ESB per request under normal conditions? For example, if the average request to the backend takes 18ms and the average request via the ESB takes 19ms, we have an average ESB latency of 1ms. This is a good result - the client is not going to notice much difference - only around 5% extra.

The saturation test here is a good test to compare different ESBs. For example, suppose I can get 5000 reqs/sec direct. Via ESB_A the number is 3000 reqs/sec and via ESB_B the number is 2000 reqs/sec, I can say that ESB_A is providing better throughput than ESB_B. 

What is not a good metric here is comparing throughput in saturation mode for direct vs ESB.


Why not? The reason here is a little complex to explain. Remember how we coded the DS to be as fast as possible so as not to be a bottleneck? So what is the DS doing? It's really just reading bytes and sending bytes as fast as it can. Assuming the DS code is written efficiently using something really fast (e.g. just a servlet), what this is testing is how fast the hardware (CPU plus network card) can read and write through user space in the operating system. On a modern server box you might get a very high number of transactions/sec. Maybe 5000 req/s with each message in and out being 1k in size.

So we have 1k in and 1k out = 2k IO.
2k IO x 5000 reqs/sec x 8bits gives us the total network bandwidth of 80Mbits/sec (excluding ethernet headers and overhead).

Now let's look at the ESB. Imagine it can handle 100% of the direct load. There is no slowdown in throughput for the ESB. For each request it has to read the message in from the LD and send it out to the DS. Even if it's doing this in pipelining mode, there is still a CPU cost and an IO cost for this. So the ESB latency may be 1ms, but the CPU and IO cost is much higher. Now, for each response it also has to read it in from the DS and write it out to the LD. So if the DS is doing 80Mbits/second, the ESB must be doing 160Mbits/second.
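The arithmetic, spelled out (using the 1k-in/1k-out, 5000 reqs/sec figures from above, and taking 1k as 1000 bytes for simplicity):

```python
# Bandwidth seen by the DS vs the ESB for the same workload.
msg_in_bytes = 1000       # 1k request
msg_out_bytes = 1000      # 1k response
reqs_per_sec = 5000

ds_mbits = (msg_in_bytes + msg_out_bytes) * 8 * reqs_per_sec / 1_000_000
esb_mbits = 2 * ds_mbits  # the ESB relays every request AND every response

print(ds_mbits)   # 80.0 Mbit/s at the dummy service
print(esb_mbits)  # 160.0 Mbit/s through the ESB - double the traffic
```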


Now if the LD is good enough, it will have loaded the DS to the max. CPU or IO capacity or both will be maxed out. Suppose the ESB is running on the same hardware platform as the DS. If the DS machine can do 80Mbit/s flat out, there is no way that the same hardware running as an ESB can do 160Mbit/s! In fact, if the ESB and DS code are both as efficient as possible, then the throughput via the ESB will always be 50% of the throughput direct to the DS. Now there is a possible way for the ESB to do better: it can be better coded than the DS. For example, if the ESB did transfers in kernel space instead of user space then it might make a difference. The real answer here is to look at the latency. What is the overhead of adding the ESB to each request? If the ESB latency is small, then we can solve this problem by clustering the ESB. In this case we would put two ESBs in and then get back to full throughput.

The real point of this discussion is that this is not a useful comparison. In reality, backend target services are usually pretty slow. If the same dual-core server is actually doing some real work - e.g. database lookups, calculations, business logic - then it's much more likely to be doing 500 requests a second or even less.

The following chart shows real data to demonstrate this. The X-axis shows increasing complexity of work at the backend (DS). As the effort taken by the backend becomes more realistic, the loss in throughput from having an ESB in the way shrinks. So with a blindingly fast backend, we see the ESB struggling to provide just 55% of the throughput of the direct case. But as the backend becomes more realistic, we see much better numbers: at 2000 requests a second there is barely a difference (around a 10% reduction in throughput).
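One way to see why the gap closes is to treat the ESB as a roughly fixed per-request cost; a rough sketch (the 1 ms ESB cost and the backend times here are assumptions for illustration, not the measured figures behind the chart):

```java
// Rough latency-bound model: a fixed ESB cost per request matters less
// and less as the backend itself takes longer per request.
public class OverheadModel {
    // Fraction of direct throughput retained when a fixed ESB cost
    // is added to each request (single-threaded, latency-bound view).
    static double throughputRetained(double backendMs, double esbMs) {
        return backendMs / (backendMs + esbMs);
    }

    public static void main(String[] args) {
        double esbMs = 1.0; // assumed fixed ESB cost per request
        System.out.println(throughputRetained(0.2, esbMs));  // very fast backend: big relative loss
        System.out.println(throughputRetained(10.0, esbMs)); // realistic backend: ~91% retained
    }
}
```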


In real life, what we actually see is that you often need many fewer ESBs than backend servers. For example, if we took the scenario of a backend server that can handle 500 req/s, then we might end up with a cluster of two ESBs handling a cluster of eight backends.
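That sizing is a simple capacity calculation (the 2500 req/s per-ESB figure below is an assumption carried over from the earlier 50% example, not a benchmark result):

```java
// Simple cluster sizing: how many ESBs are needed to front a backend cluster?
public class ClusterSizing {
    static int esbsNeeded(int backends, int backendReqPerSec, int esbReqPerSec) {
        int totalLoad = backends * backendReqPerSec;
        // Round up: partial ESBs don't exist.
        return (totalLoad + esbReqPerSec - 1) / esbReqPerSec;
    }

    public static void main(String[] args) {
        // 8 backends x 500 req/s = 4000 req/s total; at ~2500 req/s per ESB
        // that needs 2 ESBs - matching the 2:8 ratio in the text.
        System.out.println(esbsNeeded(8, 500, 2500));
    }
}
```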

Conclusion
I hope this blog has given a good overview of ESB performance and benchmarking - in particular, when it is a good idea to look at latency and when to use throughput.






Understanding Logging in the Cloud

I recently read an interesting pair of articles about Application Logging in OpenShift. While these are great articles on how to use log4j and Apache Commons Logging, they don't address the cloud logging issue at all.

What is the cloud logging issue?

Suppose I have an application I want to deploy in the cloud. I also want to automatically elastically scale this app. In fact I'm hoping that this app will succeed - and then I'm going to want to deploy it in different geos. I'm using EC2 for starters, but I might need to move it later. Ok, so that sounds a bit YAGNI. Let's cut back the requirements. I'm running my app in the cloud, on a single server in a single geo.

I do not want to log to the local filesystem.

Why not? Well, firstly, if this is, say, EC2, then the server might get terminated and I'm going to lose my logs. If it doesn't get terminated, then the logs are going to grow and fill my local filesystem. Either way, I'm in a mess.

I need to send my logs somewhere that is:
1) designed to support getting logs from multiple places - e.g. whichever EC2 or other instance my server happens to be hosted on today
2) separate from my worker instance, so that the logs survive when that instance gets stopped and started
3) supports proper log rotation, etc.

If I have this then it supports my initial problem, but it actually also supports my bigger requirements around autoscaling and geos.

Stratos is an open source Platform-as-a-Service foundation that we've created at WSO2. In Stratos we had to deal with this early on because we support elastic auto-scaling by default.

In Stratos 1.x we built a model based on syslog-ng. Basically, we used log4j for applications to log. So, just as with any normal log4j logging, you would do something like:


import org.apache.log4j.Logger;

Logger logger = Logger.getLogger("org.fremantle.myApp");
logger.warn("This is a warning");


We automatically set up the log appenders in the Stratos services to use the log4j syslog appender. When we start an instance, we automatically set it up under the covers to pipe the syslog output to syslog-ng. Then we automatically collate these logs and make them available.
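Under the covers, that amounts to log4j configuration along these lines (a sketch using log4j 1.x's standard SyslogAppender; the appender name, host, and facility Stratos actually uses are assumptions here):

```
# Route application logs to the local syslog daemon (sketch)
log4j.rootLogger=INFO, SYSLOG
log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
log4j.appender.SYSLOG.SyslogHost=localhost
log4j.appender.SYSLOG.Facility=USER
log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
log4j.appender.SYSLOG.layout.ConversionPattern=%d %-5p [%c] %m%n
```

syslog-ng on the instance then forwards whatever arrives at that facility to the central collector.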

In Stratos 2.x we have improved this: the syslog-ng model was not as efficient as we needed, and we also needed a better way of slicing and dicing the resulting log files.

In the Stratos PaaS we also have another key requirement - multi-tenancy. We have lots of instances of servers, some of which are one instance per tenant/domain, and some which are shared between tenants. In both cases we need to split out the logs so that each tenant only sees their own logs.

So in Stratos 2.x (due in the next couple of months) we have a simple Apache Thrift interface (and a JSON/REST one too). We already have a log4j target that pushes to this. So exactly the same code as above works in Stratos 2.x with no changes. 



We are also going to add models for non-Java logging (e.g. syslog, log4php, etc.).

Now what happens next? The local agent on the cloud instance is automatically set up to publish to the central log server. This takes the logs and publishes them to an Apache Cassandra database. We then run Apache Hive scripts that slice the logs per tenant and per application. These are then available to the user via our web interface and also via simple network calls. Why this model? Because it is really scalable - I mean really, really scalable. Cassandra can scale to hundreds of nodes if necessary. It's also really fast: our benchmarks show that we can write >10k entries/second on a normal server.
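Conceptually, the per-tenant slicing that the Hive scripts perform is just a group-by over tenant-tagged log entries. A toy sketch of the idea (the field layout here is hypothetical, not the actual Stratos schema):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of per-tenant log slicing: group a stream of
// tenant-tagged entries so each tenant sees only its own logs.
public class TenantLogSlicer {
    // Each entry is a hypothetical [tenantId, message] pair.
    static Map<String, List<String>> sliceByTenant(List<String[]> entries) {
        Map<String, List<String>> perTenant = new HashMap<>();
        for (String[] entry : entries) {
            String tenant = entry[0];
            String message = entry[1];
            perTenant.computeIfAbsent(tenant, t -> new ArrayList<>()).add(message);
        }
        return perTenant;
    }

    public static void main(String[] args) {
        List<String[]> entries = Arrays.asList(
            new String[]{"tenantA", "WARN This is a warning"},
            new String[]{"tenantB", "INFO Started"},
            new String[]{"tenantA", "INFO Done"});
        // tenantA ends up with exactly its own two entries
        System.out.println(sliceByTenant(entries).get("tenantA").size());
    }
}
```

In Stratos the same grouping runs as Hive over Cassandra, which is what lets it scale to many tenants and many nodes.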

Summary

Logging in the cloud isn't just about logging to your local disk - that is not a robust or scalable answer. Logging in the cloud needs a proper cloud logging model. In Stratos we have built one. You can use it from Java today, and from Stratos 2.0 we are adding support to publish log entries either with a simple REST interface or via a super-fast, highly scalable approach with Apache Thrift.





Students’ Understanding of Advanced Properties of Java Exceptions





Towards Understanding Information Systems Students’ Experience of Learning Introductory Programming: A Phenomenographic Approach

Aim/Purpose: This study seeks to understand the various ways information systems (IS) students experience introductory programming, in order to inform IS educators on effective pedagogical approaches to teaching programming.
Background: Many students who choose to major in information systems (IS) enter university with little or no experience of learning programming. Few studies have dealt with students learning to program in the business faculty, who do not necessarily have the computer science goal of programming. It has been shown that undergraduate IS students struggle with programming.
Methodology: A qualitative approach was used in this study to determine students' notions of learning to program and their cognitive processes while learning to program in higher education. A cohort of 47 students majoring in Information Systems within the Bachelor of Commerce degree programme took part in the study. Reflective journals were used to allow students to record their experiences and to study in depth their insights and experiences of learning to program during the course. Using phenomenographic methods, categories of description that uniquely characterise the various ways IS students experience learning to program were determined.
Contribution: This paper provides educators with empirical evidence on IS students' experiences of learning to program, which plays a crucial role in informing IS educators on how they can lend support and modify their pedagogical approach to teach programming to students who do not necessarily need to have the computer science goal of programming. This study contributes additional evidence that suggests more categories of description for IS students within a business degree, and it provides valuable pedagogical insights for IS educators, thus contributing to the body of knowledge.
Findings: The findings of this study reveal six ways in which IS students experience the phenomenon of learning to program. These ways, referred to as categories of description, formed an outcome space.
Recommendations for Practitioners: Use the experiences of students identified in this study to determine the approach to teaching and the tasks or assessments assigned.
Recommendation for Researchers: Using phenomenographic methods, researchers in IS or IT may determine pedagogical content knowledge for teaching specific aspects of IT or IS.
Impact on Society: More business students would be able to program and improve their logical thinking and coding skills.
Future Research: Implement the recommendations for practice and evaluate the students' performance.





Misunderstandings about social problems and social value in solving social problems

Though there have been many approaches to dealing with social problems in recent years, the concepts of social value have yet to be discussed thoroughly. Upon examining these concepts in existing studies and testing them with two case studies, the article shows that there is the possibility that a group's shared wants may not be widely recognised as a social problem, and targeting these unserved populations is a precondition for solving social issues. It is essential to identify hidden social problems by understanding what is still left, the number of people sharing the same want, the severity of the unmet want, and the possible resources for solution generation. Social value in its narrower definition means meeting the satisfaction of the group sharing the same want, while in its broader definition, it means meeting the satisfaction of wider society. Finding workable solutions involves not only the group of people sharing the same want but also others who do not have the same want but who do recognise the importance of acknowledging the want of the subgroup.





A Single Case Study Approach to Teaching: Effects on Learning and Understanding





Understanding Intention to Use Multimedia Information Systems for Learning





Understanding Information Technology: What Do Graduates from Business-oriented IS Curricula Need to Know?





Applying and Evaluating Understanding-Oriented ICT User Training in Upper Secondary Education





Reinforcing and Enhancing Understanding of Students in Learning Computer Architecture





Understanding Online Learning Based on Different Age Categories

Aim/Purpose: To understand the readiness of students for learning in online environments across different age groups.
Background: Online learners today are diverse in age, due to the increasing number of adult/mature students who continue their higher education while they are working. Understanding of the influence of learners' age on their online learning experience is limited.
Methodology: A survey methodology approach was followed. A sample of 1,920 surveys was used. Correlation analysis was performed.
Contribution: The study contributes by adding to the limited body of knowledge in this area and adds to the dimensions of the Online Learning Readiness Survey additional dimensions such as usefulness, tendency, anxiety, and attitudes.
Findings: Older students have more confidence than younger ones in computer proficiency and learning skills. They are more motivated, show better attitudes, and are less anxious.
Recommendations for Practitioners: Practitioners should consider preferences that allow students to configure the learning approach to their age. These preferences should be tied to the dimensions of the Online Learning Readiness Survey (OLRS).
Recommendations for Researchers: More empirical research is required using the OLRS for online learning environments. OLRS factors are strong and can predict student readiness and performance. There are opportunities for artificial intelligence in the support of technology-mediated tools for learning.





Understanding ICT Based Advantages: A Techno Savvy Case Study





Understanding Internal Information Systems Security Policy Violations as Paradoxes

Aim/Purpose: Violations of Information Systems (IS) security policies continue to generate great anxiety amongst many organizations that use information systems, partly because these violations are carried out by internal employees. This article addresses IS security policy violations in organizational settings, and conceptualizes and problematizes IS security violations by employees of organizations from a paradox perspective.
Background: The paradox is that internal employees are increasingly being perceived as more of a threat to the security of organizational systems than outsiders. The notion of paradox is exemplified in four organizational contexts: the belonging paradox, the learning paradox, the organizing paradox, and the performing paradox.
Methodology: A qualitative conceptual framework exemplifying how IS security violations occur as paradoxes in the context of these four areas is presented at the end of this article.
Contribution: The article contributes to IS security management practice and suggests how IS security managers should be positioned to understand violations in light of this paradox perspective.
Findings: An employee carrying out ordinary activities using computing technology experiences unique tensions (paradoxes in belonging, learning, organizing, and performing), and these tensions generally tend to lead to policy violations when an imbalance occurs.
Recommendations for Practitioners: IS security managers must be sensitive to employees' tensions.
Future Research: A quantitative study, where statistical analysis could be applied to generalize the findings, could be useful.





Information Technology Capabilities and SMEs Performance: An Understanding of a Multi-Mediation Model for the Manufacturing Sector

Aim/Purpose: Although a plethora of studies demonstrate the positive impact of information technology (IT) capabilities on SME performance, the understanding of the underlying mechanisms through which IT capabilities affect firm performance is not yet clear. This study fills these gaps by explaining the roles of absorptive capacity and corporate entrepreneurship. The study also elaborates the effect of the IT capability dimensions (IT integration and IT alignment) on SME performance outcomes through the mediating sequential process of absorptive capacity and corporate entrepreneurship.
Methodology: This study empirically tests a theoretical model based on the Dynamic Capability View (DCV), using the partial least squares (PLS) technique with a sample of 489 manufacturing SMEs in Pakistan. A survey was employed for data collection, following a cluster sampling approach.
Contribution: This research contributes to the IT literature by bifurcating IT capability into two dimensions, IT integration and IT alignment, which allows us to distinguish between different sources of IT capabilities. Additionally, our findings shed light on the dynamic capability view by theoretically and empirically demonstrating how absorptive capacity and corporate entrepreneurship sequentially affect firms' performance outcomes. Finally, this study contributes to the SME literature by measuring two levels of performance: innovation performance and firm performance.
Findings: The results of the analysis show that absorptive capacity and corporate entrepreneurship significantly mediate the relationship between both dimensions of IT capability and performance outcomes.





Understanding the Determinants of Wearable Payment Adoption: An Empirical Study

Aim/Purpose: The aim of this study is to determine the variables which affect the intention to use Near Field Communication (NFC)-enabled smart wearable (e.g., smartwatch, ring, wristband) payments.
Background: Despite the enormous potential of wearable payments, studies investigating the adoption of this technology are scarce.
Methodology: This study extends the Technology Acceptance Model (TAM) with four additional variables (Perceived Security, Trust, Perceived Cost, and Attractiveness of Alternatives) to investigate behavioral intentions to adopt wearable payments. The moderating role of gender was also examined. Data collected from 311 Kuwaiti respondents were analyzed using Structural Equation Modeling (SEM) and multi-group analysis (MGA).
Contribution: The research model provided in this study may be useful for academics and scholars conducting further research into m-payments adoption, specifically in the case of wearable payments, where studies are scarce and still in the nascent stage; hence, it addresses a gap in the existing literature. Further, this study is the first to have specifically investigated wearable payments in the State of Kuwait, thereby enriching the Kuwaiti-context literature.
Findings: This study empirically demonstrated that behavioral intention to adopt wearable payments is mainly predicted by attractiveness of alternatives, perceived usefulness, perceived ease of use, perceived security, and trust, while the role of perceived cost was found to be insignificant.
Recommendations for Practitioners: This study draws attention to the importance of cognitive factors, such as perceived usefulness and ease of use, in inducing users' behavioral intention to adopt wearable payments. In the case of perceived usefulness, smart wearable device manufacturers and banks should enhance the functionalities and features of these devices, expand the financial services provided through them, and maintain the availability, performance, effectiveness, and efficiency of these tools. In relation to ease of use, smart wearable devices should be designed with an easy-to-use, high-quality, and customizable user interface. The findings of this study also demonstrated the influence of trust and perceived security in motivating users to adopt wearable payments; hence, banks are advised to focus on a relationship based on trust, especially during the early stages of acceptance and adoption of wearable payments.
Recommendation for Researchers: The current study validated the role of attractiveness of alternatives, which had never been examined in the context of wearable payments. This, in turn, provides a new dimension regarding a determinant factor considered by customers in predicting their behavioral intention to adopt wearable payments.
Impact on Society: This study could be replicated in other countries to compare and verify the results. Additionally, the research model of this study could also be used to investigate other m-payment methods, such as m-wallets and P2P payments.
Future Research: Future studies should investigate the proposed model from a cross-country and cross-cultural perspective with additional economic, environmental, and technological factors. Also, future research may conduct a longitudinal study to explain how temporal changes and usage experience affect users' behavioral intentions to adopt wearable payments. Finally, while this study included both influencing and inhibiting factors, other factors such as social influence, perceived compatibility, personal innovativeness, mobility, and customization could be considered in future research.





Getting in Synch: Understanding Student Perceptions of Synchronous Online Instruction

Aim/Purpose: This study examines the impact of transitioning from in-person classrooms to remote online business education and provides analysis of key factors impacting course and instructor ratings, as well as strategies for higher education institutions to provide engaging instruction.
Background: "Zoom"ing into teaching and moving out of traditional classrooms during the COVID-19 pandemic has been a path full of twists and has impacted student perceptions of courses as well as instructors. One challenge has been to make students perceive the quality of synchronous online instruction as positively as that of classroom-delivered courses.
Methodology: We analyze primary data collected in the course evaluation process from Business & Accounting students over six semesters between Fall 2019 and Spring 2022, covering pre-pandemic instruction in the classroom and the conversion to virtual instruction via Zoom. A total of 1,782 observations for 38 courses were examined using mean comparisons, regression and correlation analyses, and pairwise comparisons.
Contribution: We provide insights from the evaluation of those instructors who were able to make their Zoom-delivered courses perceived by students as equivalent to or better than room-delivered ones. Specifically, clear presentation, stimulating delivery, providing feedback, and encouraging discussion were positively correlated with successful online classes.
Findings: We find that there was a clear downward shift in course and instructor ratings as the change to synchronous online delivery was made. However, in the Spring of 2022, even though instructors and students were still not completely back in the classroom, both instructor and course ratings moved back closer to pre-pandemic levels. The parameters associated with instructor ratings, such as providing feedback, clear presentations, stimulating sessions, and encouraging discussion, showed similar downward fluctuations. Also, aspects related to course content were affected by the transition to online modality, including training in critical thinking, quantitative analysis, research and writing abilities, and the overall usefulness of the content. Moore's model of Transactional Distance helps explain these changes.
Recommendations for Practitioners: We recommend that practitioners allow sufficient time for students and faculty to learn through online instruction delivery and supply training for both populations in adapting to learning in this delivery mode.
Recommendation for Researchers: The disruption in higher education caused by COVID-19 has provided a wealth of information on the pluses and minuses of online delivery. Careful inspection of trends can help provide guidance to higher education leaders.
Impact on Society: One of the many changes the COVID-19 pandemic brought was the opportunity to try alternate ways of connecting and learning. This study shows how this experience can be used to guide the future of higher education.
Future Research: Further research is needed to explore the in-depth reactions of students and faculty to the switch from classroom to online delivery, and to explore whether these findings can be more broadly applied to other subjects and other types of universities.





Epidemic Intelligence Models in Air Traffic Networks for Understanding the Dynamics in Disease Spread - A Case Study

Aim/Purpose: Understanding disease spread dynamics in the context of air travel is crucial for effective disease detection and epidemic intelligence. The Susceptible-Exposed-Infectious-Recovered-Hospitalized-Critical-Deaths (SEIR-HCD) model proposed in this research is identified as a valuable tool for capturing the complex dynamics of disease transmission, healthcare demands, and mortality rates during epidemics.
Background: The spread of viral diseases is a major problem for public health services all over the world. Understanding how diseases spread is important in order to take the right steps to stop them. In epidemiology, the SIS, SIR, and SEIR models have been used to mimic and study how diseases spread in groups of people.
Methodology: This research focuses on the integration of air traffic network data into the SEIR-HCD model to enhance the understanding of disease spread in air travel settings. By incorporating air traffic data, the model considers the role of travel patterns and connectivity in disease dissemination, enabling the identification of high-risk routes, airports, and regions.
Contribution: This research contributes to the field of epidemiology by enhancing our understanding of disease spread dynamics through the application of the SIS, SIR, and SEIR-HCD models. The findings provide insights into the factors influencing disease transmission, allowing for the development of effective strategies for disease control and prevention.
Findings: The interplay between local outbreaks and global disease dissemination through air travel is empirically explored. The model can further be used to evaluate the effectiveness of surveillance and early detection measures at airports and transportation hubs. The proposed research contributes to proactive and evidence-based strategies for disease prevention and control, offering insights into the impact of air travel on disease transmission and supporting public health interventions in air traffic networks.
Recommendations for Practitioners: Government intervention during difficult times can be studied as a moderating variable that can enhance or hinder the efficacy of epidemic intelligence efforts within air traffic networks. Collaboration among experts from various fields, including epidemiology, aviation, data science, and public health, can provide a more comprehensive, interdisciplinary understanding of disease spread dynamics in air traffic networks.
Recommendation for Researchers: Researchers can collaborate with international health organizations and authorities to share their research findings and contribute to a global understanding of disease spread in air traffic networks.
Impact on Society: This research has significant implications for society. By providing a deeper understanding of disease spread dynamics, it enables policymakers, public health officials, and practitioners to make informed decisions to mitigate disease outbreaks. The recommendations derived from this research can aid in the development of effective strategies to control and prevent the spread of infectious diseases, ultimately leading to improved public health outcomes and reduced societal disruption.
Future Research: By considering air traffic patterns, the SEIR-HCD model contributes to more accurate modeling and prediction of disease outbreaks; future work can build on this to respond more effectively to outbreaks within air traffic networks, aiding the development of proactive and evidence-based strategies to manage and mitigate the impact of infectious diseases in the context of air travel.





An Examination of Computer Attitudes, Anxieties, and Aversions Among Diverse College Populations: Issues Central to Understanding Information Sciences in the New Millennium





Applications of Geographical Information Systems in Understanding Spatial Distribution of Asthma





Understanding the Antecedents of Knowledge Sharing: An Organizational Justice Perspective





Understanding of the Quality of Computer-Mediated Communication Technology in the Context of Business Planning

Aim/Purpose: This study seeks to uncover the perceived quality factors of computer-mediated communication (CMC) in business planning, in which communication among teammates is crucial for collaboration.
Background: Computer-mediated communication has made communicating with teammates easier and more affordable than ever. What motivates people to use a particular CMC technology during business planning is the major concern of this research.
Methodology: This study addresses the issue by applying the concept of Information Product Quality (IPQ). Based on 21 factors derived from an extensive literature review on IPQ, an experimental study was conducted to identify the factors that are perceived as most relevant.
Contribution: The findings of this study will help developers find a more customer-oriented approach to CMC technology design, specifically useful in collaborative work such as business planning.
Findings: This study extracted three specific quality factors in the use of CMC technology for business planning: informational, physical, and service.
Future Research: Future research will shed more light on the generality of these findings. Future studies should be extended to other populations and contextual situations in the use of CMC.





Informing Academia Through Understanding of the Technology Use, Information Gathering Behaviors, and Social Concerns of Gen Z

Aim/Purpose: The aim of this paper is to examine Gen Z students located in a representative region of the United States with respect to technology use, news and information gathering behaviors, civic engagement, and social concerns, and to determine whether differences exist based on institutional type. The purpose is to report this information so that academics can better understand the behaviors, priorities, and interests of current American students.
Background: This paper investigates the mindset of Generation Z students living in the United States during a period of heightened civic unrest. Through the lens of the Theory of Generations, Uses and Gratifications Theory, and Intersectional Theory, this study aims to examine the Gen Z group and compare findings across populations.
Methodology: An electronic survey was administered to students from 2019 through 2022. The survey included a combination of multiple-response, Likert-scaled, dichotomous, open-ended, and ordinal questions. It was developed in the Survey Monkey system and reviewed by content and methodological experts to examine bias, vagueness, and potential semantic problems. The survey was pilot-tested in 2018, before implementation, in order to explore the efficacy of the research methodology, and was then modified accordingly before widespread distribution to potential participants. The surveys were administered to students enrolled in classes taught by the authors, all of whom are educators. Participation was voluntary, optional, and anonymous.
Contribution: This paper provides insight into the mindset of Generation Z students living in the United States, which is helpful to members of academia who should be informed about the current generation of students in higher education. Studying Generation Z helps us understand the future and can provide insight into the shifting needs and expectations of society.
Findings: According to the findings, Gen Z are heavy users of digital technologies who use social media as their primary source for gathering news about current events as well as information for schoolwork. The majority of respondents considered themselves to be social activists. When institutional type was considered, there were notable differences, with students at the Historically Black College or University (HBCU) noting the greatest concern with a number of pressing issues, including racial justice/Black Lives Matter, women's rights, gun violence, immigration reform, and human trafficking. Less significance across groups was found when LGBTQIA+ rights and climate change were considered.
Recommendation for Researchers: As social media continues to proliferate in daily life and become a vital means of news and information gathering, additional studies such as the one presented here are needed. In other countries facing similarly turbulent times, measuring student interest, awareness, and engagement is highly informative.
Future Research: Future research will explore the role that influencers have in opinion formation and the information-gathering habits of Gen Z.






The limits and possibilities of history: How a wider, deeper and more engaged understanding of business history can foster innovative thinking

Calls for greater diversity in management research, education, and practice have increased in recent years, driven by a sense of fairness and ethical responsibility, but also because research shows that greater diversity of inputs into management processes can lead to greater innovation. But how can greater diversity of thought be encouraged when educating management students, beyond advocating affirmative action and relaying the research on the link between multiplicity and creativity? One way is to think again about how we introduce the subject. Introductory textbooks often begin by relaying the history of management, and what is presented is a very limited, mono-cultural, and linear view of how management emerged. This article highlights the limits this view imposes on initiates, in contrast to the histories of other comparable fields (medicine and architecture), and discusses how a wider, deeper, and more engaged understanding of history can foster thinking differently.





Understanding the Direction, Magnitude, and Joint Effects of Reputation When Multiple Actors' Reputations Collide

Despite the extensive research into the effects of reputation, virtually all of this research has examined the effect of one type of reputation on one or more specific outcomes. In this study we ask the question: How do the reputations of analysts, CEOs, and firms individually and jointly affect firm outcomes? To answer this question we focus on a context where reputations are particularly relevant - changes in analyst recommendations and the effect of those changes on stock market reactions. Our study contributes to the growing reputation literature by being one of the first studies to recognize and measure how the market accounts for multiple reputations. Further, we argue and find that the reputations of different actors interact with each other in determining particular firm outcomes, and that different actors' reputations influence the reactions of observers.





Coming Full Circle With Reactions: Understanding the Structure and Correlates of Trainee Reactions Through the Affect Circumplex

Research suggests that the structure of trainee reactions is captured with as few as one or as many as eleven dimensions. It is commonly understood that reactions contain both affective and cognitive components. To date, however, training research focuses largely on affective reactions that range from pleasant to unpleasant (i.e., valence). Here, we expand and further refine the construct of affective trainee reactions by including reactions that are more and less activating versions of pleasantness (e.g., excitement and calm, respectively) and unpleasantness (e.g., stress and boredom, respectively). We develop and validate a new measure based on this model and argue that the structure of affective reactions has implications for better understanding learning and course reputation outcomes. Results from a short online training indicate that reactions were best explained by four factors: pleasant activation (e.g., excitement), pleasant deactivation (e.g., calm), unpleasant activation (e.g., stress), and unpleasant deactivation (e.g., boredom). The relationships between these reactions and training outcomes suggest that what is most beneficial for course reputation outcomes (i.e., pleasant activating reactions) may not benefit learning; what is most beneficial for learning (i.e., pleasant deactivating reactions) may benefit course reputation outcomes, but slightly less so.
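The four-factor structure described above is a valence-by-activation grid, and it can be sketched as a simple classifier. The function below is an illustrative assumption, not the authors' validated measure: it takes valence and activation scores (assumed here to lie in [-1, 1], with zero as the quadrant boundary) and maps them to the four circumplex quadrants.

```python
def classify_reaction(valence, activation):
    """Map a (valence, activation) score pair to one of the four
    affect-circumplex quadrants described in the abstract.
    Scores are assumed to lie in [-1, 1]; the zero thresholds
    are an illustrative assumption, not part of the study."""
    if valence >= 0 and activation >= 0:
        return "pleasant activation"      # e.g., excitement
    if valence >= 0:
        return "pleasant deactivation"    # e.g., calm
    if activation >= 0:
        return "unpleasant activation"    # e.g., stress
    return "unpleasant deactivation"      # e.g., boredom
```

A trainee who reports high valence but low activation after a course would land in the "pleasant deactivation" quadrant, the region the abstract links most strongly to learning.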





Do You Need to Defragment an SSD? Understanding TRIM and SSD/NVMe Optimization

...





Memorandum of Understanding signed at Bioinformatics Horizon Conference in Rome

At the Bioinformatics Horizon 2013 Conference (3 - 6 September 2013, Rome) a Memorandum of Understanding was signed between PESI and EU BON. Christoph Häuser, on behalf of EU BON and Yde de Jong on behalf of PESI (see picture below), signed the document to strengthen the cooperation and formalise the integrating efforts of the European species infrastructures.  

PESI is now a new associate partner of EU BON, a consortium with currently 30 partners from 18 countries. One of the common aims of EU BON and PESI will be to establish and sustain standard taxonomies for Europe.  EU BON will support the PESI backbone developments, including its components, with a focus on Fauna Europaea and Euro+Med. Besides analyzing current gaps, new ideas will be developed to trigger expert involvement and enhance the data management systems.

In a side-meeting at BIH 2013, some ideas were discussed with the EU BON and PESI partners available. Important steps will be taken to secure the sustainability of databases and expertise networks, combined with the development of technical innovations for users and stakeholders, and to promote the implementation of PESI as a European (INSPIRE) standard. It will also be important to further integrate the huge expertise networks, reach out to PESI Focal Points, and expand the geographical scope, as well as to integrate additional data types and data resources.






Memorandum of understanding signed between EU BON and CETAF (Consortium of European Taxonomic Facilities)

A memorandum of understanding has been signed between EU BON and CETAF (Consortium of European Taxonomic Facilities, AISBL). The document was signed by EU BON project coordinator Christoph Häuser and the Chair of CETAF, Dr. Michelle J. Price, during the 35th CETAF General Meeting in Oslo, 6-7 May, 2014.
 
CETAF is a networked consortium of scientific institutions in Europe formed to promote training, research and understanding of systematic biology and palaeobiology. Together, CETAF institutions hold very substantial biological (zoological and botanical), palaeobiological, and geological collections and provide the resources necessary for the work of thousands of researchers in a variety of scientific disciplines.
 
Meanwhile, the list of MoUs signed by EU BON has grown, with further institutions/projects joining: http://www.eubon.eu/showpage.php?storyid=10373





Memorandum of Understanding signed between EU BON and BioVeL

A memorandum of understanding has been signed between EU BON and BioVeL (Biodiversity Virtual e-Laboratory Project). The document was signed by the BioVeL coordinator Alex Hardisty (Cardiff University, UK) and handed over to Alexander Kroupa (Museum für Naturkunde Berlin, Germany), who attended on behalf of the EU BON consortium, during the SPNHC Conference in Cardiff, 22-27 June 2014.

BioVeL is a virtual e-laboratory that supports research on biodiversity issues using large amounts of data from cross-disciplinary sources. BioVeL offers the possibility to use computerized "workflows" (series of data analysis steps) to process data, be that from one's own research and/or from existing sources.
 
Meanwhile, the list of MoUs signed by EU BON has grown, with further institutions/projects joining: http://www.eubon.eu/showpage.php?storyid=10373





Memorandum of Understanding signed between EU BON and NINA

A memorandum of understanding has been signed between EU BON and the Norwegian Institute for Nature Research (NINA). The handover took place at the 21st GBIF Governing Board meeting (GB21) in New Delhi, India, on 16-18 September 2014, between EU BON project coordinator Christoph Häuser and Roald Vang (Head of the Department of Information Technology) and Frank Hanssen from NINA.

IMAGE: At the handover: EU BON coordinator Christoph Häuser with Roald Vang (Head of the Department of Information Technology) and Frank Hanssen from NINA.

The Norwegian Institute for Nature Research (NINA) is Norway’s leading institution for applied ecological research, with broad-based expertise on the genetic, population, species, ecosystem and landscape level, in terrestrial, freshwater and coastal marine environments. The core activities encompass strategic ecological research integrated with long-term monitoring, as well as a variety of environmental assessments and development of methodologies. Most work is aimed at improving the understanding of biodiversity, ecosystem services, ecological processes and their main drivers to facilitate better management of ecosystem services and resources. NINA addresses a wide variety of interdisciplinary issues involving both ecologists and social scientists, and plays an important role in European and other international research cooperation.






Memorandum of Understanding signed: EU BON and Socientize

A Memorandum of Understanding (MoU) was signed on 27 November 2014 at the second EU BON Stakeholder Roundtable on Citizen Science, between Christoph Häuser, on behalf of EU BON, and Fermin Serrano Sanz, on behalf of the citizen science project Socientize. The Roundtable on Citizen Science took place at the Museum für Naturkunde in Berlin, Germany, and followed the General Assembly Meeting of the European Citizen Science Association.

Signing the MoU: (left) Christoph Häuser, EU BON and (right) Fermin Serrano Sanz, Socientize

Socientize (Society as e-Infrastructure through technology, innovation and creativity) is a citizen science project funded by the European Union. The project aims to coordinate the agents involved in the citizen science process and to foster and promote the use of citizen science infrastructures. There are several linkages between the citizen-science-related work of EU BON and Socientize (e.g. policy recommendations for citizen science), and by signing the MoU the two projects agreed on further exchange and follow-up activities.






Understanding Plaster Finishes for Decorative Painters

The New York artisan returns with a new feature regarding intricate/ornate finishes for interior plaster.





Understanding silica dust: Washington state issues hazard alert

Tumwater, WA — The Washington State Department of Labor & Industries has published a hazard alert on the risks of worker exposure to silica dust.





Understanding work boot safety standards

How important is it for boot safety standards to match federal and state requirements?





Understanding responses to ‘unfair’ treatment could help workers’ comp systems: study

Waterloo, Ontario — Understanding the emotions injured workers experience – and the actions they take – when going through injury and claims processes they believe are unfair can be helpful to everyone involved in the workers’ compensation system, results of a recent study by Canadian researchers suggest.





Understanding the NIOSH lifting equation

How should the NIOSH lifting equation be used?
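The revised NIOSH lifting equation computes a recommended weight limit (RWL) by multiplying a load constant by six task multipliers: RWL = LC × HM × VM × DM × AM × FM × CM. The sketch below uses the standard metric formulas for the horizontal, vertical, distance, and asymmetry multipliers; the frequency (FM) and coupling (CM) multipliers come from NIOSH lookup tables and are assumed to be 1.0 here for an ideal lift, so this is an illustrative simplification rather than a complete implementation.

```python
def recommended_weight_limit(h_cm, v_cm, d_cm, a_deg, fm=1.0, cm=1.0):
    """Recommended weight limit (kg) per the revised NIOSH lifting
    equation, metric form.
    h_cm:  horizontal distance of the hands from the ankle midpoint
    v_cm:  vertical height of the hands at the origin of the lift
    d_cm:  vertical travel distance of the lift
    a_deg: asymmetry (twisting) angle in degrees
    fm, cm: frequency and coupling multipliers - table lookups in
            the real equation, assumed 1.0 here for an ideal lift."""
    lc = 23.0                           # load constant, kg
    hm = min(1.0, 25.0 / h_cm)          # horizontal multiplier
    vm = 1.0 - 0.003 * abs(v_cm - 75)   # vertical multiplier
    dm = 0.82 + 4.5 / d_cm              # distance multiplier
    am = 1.0 - 0.0032 * a_deg           # asymmetry multiplier
    return lc * hm * vm * dm * am * fm * cm
```

For an ideal lift (hands 25 cm out, 75 cm high, 25 cm of travel, no twisting) every multiplier is 1.0 and the limit equals the 23 kg load constant; moving the load farther from the body or adding twist reduces the limit.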





Understanding sensor speeds on gas monitors

What does “sensor speed” mean, and why is it an important aspect of gas monitors?





Understanding the final GHS deadline

The next and final GHS deadline is June 1, 2016. What does that mean for me as an employer?





HazCom: Understanding ‘Hazard Not Otherwise Classified’

What are the criteria for determining if something is or is not an HNOC?





Understanding AVS-01 & its Impact on Video Monitoring

As monitoring companies take advantage of new video technology and grow their businesses to include video monitoring services, it is important to understand that there is a significant difference between installing a video system and installing a monitoring-ready video system.





Understanding cut resistance

I have two glove samples for my cut hazard. One is an ASTM cut level 4 and the other is labeled EN cut level 5. Which glove is more cut-resistant?





Better understanding of glove coatings

How critical is a glove’s coating when selecting cut-resistant hand protection?





Understanding the dangers of counterfeit products in the workplace

In environments that involve working with or around electrical equipment, it is important not to forget the risk that counterfeit electrical products can pose – a preventable risk that nonetheless carries real safety threats.





Understanding safety footwear ratings

What are the differences among EH, SD, CD and DI ratings on footwear?





Understanding and eliminating arc flash

What is the leading cause of arc flash, and what procedural changes can be adopted to eliminate the threat?





Understanding OSHA’s Special Emphasis Programs

What do I need to know about OSHA’s Special Emphasis Programs?





Hazardous Chemicals 101: Understanding the Difference Between Hazmat, Hazcom and Hazwaste

Written by J. J. Keller’s workplace safety and compliance experts, this white paper provides an overview of the differences between hazardous materials, hazard communication and hazardous waste, along with the regulations that apply to them.





Understanding occupational skin disorders

Skin diseases are the second most common type of occupational illness, with more than 13 million workers potentially exposed to chemicals that can be absorbed through the skin.