forma

Papal Reformation and the Great Schism: II

Fr. John continues his exploration of the pivotal reign of Pope Leo IX and the way in which its reforms led toward a confrontation with the Patriarchate of Constantinople in 1054.




forma

Papal Reformation and the Great Schism: III

In this conclusion to his account of the Great Schism, Fr. John reviews the leading controversies that aggravated relations between Rome and Constantinople during Pope Leo IX's military captivity, and how they culminated, after the pope's death, in the excommunication of Patriarch Michael Cerularius in 1054.




forma

The Fall of Paradise I: Reformation Muenster as the New Jerusalem

In this anecdotal introduction to the final reflection of Part 2 of the podcast, Father John relates the extraordinary story of a Reformation-era town that declared itself the kingdom of Christ on earth, a "New Jerusalem." Expressing a profound absence of God in the world, however, the story of Reformation Muenster was in fact a sign of the fall of a Christendom centered upon the experience of paradise.




forma

The Fall of Paradise VI: The Reformation of Worship

In this episode Fr. John discusses Reformed attitudes toward worship, and the ways in which western Christendom's liturgical and sacramental foundations were eroded when those attitudes were put into practice.




forma

The Fall of Paradise II: The Reformation of Western Christendom

In this episode Father John describes some of the most noteworthy effects of the Protestant Reformation on Western Christendom, emphasizing the decline of a sacramental basis for civilization and the rise of a primarily moral one.




forma

The Crisis of Western Christendom I: Martin Luther's Reformation Breakthrough

Returning after a long absence from the podcast, Fr. John in this episode introduces a new reflection on the crisis of western Christendom prior to the Reformation by discussing the penitential context of Martin Luther's famous Ninety-Five Theses.




forma

Christian Calendars and the Spiritual Transformation of Time

Fr. John discusses the spiritual transformation of time by Christianity.




forma

Christian Temples and the Spiritual Transformation of Space

Fr. John discusses the ways in which the Church tries to create a sanctified topography in Christendom.




forma

Replacing Reformational Christianity

In this episode Fr. John Strickland discusses various ways in which Christendom's leadership rejected the reformational Christianity that had provoked the Western wars of religion and replaced it with science, philosophy, pietistic Christianity, and a new religion known as deism.




forma

Light From (and Upon) the Readable Books 8: Misinformation, Decrees, and the Life of Leaders

In this episode we read Esther 3:13a-g, 5:1-13 LXX, and 8:12a-I, considering the king's royal decrees, the dramatic scene where Esther enters his presence without invitation, and the misinformation about the Jewish people which he finally rejects. We are helped in seeing the significance of these fascinating scenes by recourse to Psalm 85/6, Phil 2:5-11, and 1 Timothy 2:1-2.




forma

Transformation and Growth

Fr. Tom discusses how the Christian life consists of personal transformation and a deep urgency to bring others into the fold.




forma

An Invitation to Transformation (Dn John Skowron)

Join Deacon John Skowron in today's discussion. What are we withholding from God's will? Are we making time for Christ in our lives?




forma

The Transformation of Suffering

Fr. Gregory introduces a guest preacher today who talks about the pattern of redemption: the Lord heals the soul and then the body.




forma

Called to the Banquet of Transformation




forma

Too Much Information

Dr. Rossi thinks about overthinking.




forma

Transformation: Part 1 - Made in His Image and Likeness

Part one of a four-part documentary called "Transformation: Same-Sex Attraction Through the Lens of Orthodox Christianity." In this first episode, we meet four individuals who are faithful, obedient Orthodox Christians in terms of celibacy, but are attracted to members of the same sex. What are their stories, struggles, and disappointments? How have they been received in the Orthodox Church? And what do they want the Church to know about that struggle? Resource: Christian Faith and Same Sex Attraction by Fr. Thomas Hopko




forma

Transformation: Part 2 - The Clear Teaching of the Church

Part two of "Transformation: Same-Sex Attraction Through the Lens of Orthodox Christianity." In part two, we take a deep dive into the theology surrounding same-sex attraction. What do the Scripture, canons, and Fathers have to say about it? Is it sinful to have a same-sex attraction? Archbishop Michael, Dr. Jeannie Constantinou, Fr. Harry Linsinbigler, Dr. Roxanne Louh, and Dr. Edith M. Humphrey are among our panelists. Resource: Christian Faith and Same Sex Attraction by Fr. Thomas Hopko




forma

Transformation: Part 3 - The Greatest of These is Love

Part three of our four-part documentary, "Transformation: Same-Sex Attraction Through the Lens of Orthodox Christianity." How are we doing as a Church at showing love to everyone who walks in our doors? Are we welcoming or judgmental? Does a warm welcome translate into endorsement of someone's lifestyle? If we are to truly love one another and bear one another's burdens, we need to get to know them first. Resource: Christian Faith and Same Sex Attraction by Fr. Thomas Hopko




forma

Transformation: Part 4 - Listen and Learn

Part four of our four-part documentary, "Transformation: Same Sex Attraction Through The Lens Of Orthodox Christianity." In this episode, we will hear a call to listen, to engage, to show patience, and to extend the benefit of the doubt wherever we can—especially toward our young people, who are asking tough questions and deserve to be heard. Resource: Christian Faith and Same Sex Attraction by Fr. Thomas Hopko




forma

The Cross: Hope, Transformation, Warning




forma

Transfiguration and Transformation

What can the Transfiguration teach us about being the bee?




forma

Early Lutheran/Orthodox Dialog After The Reformation

Most Christians are not aware that in the latter part of the 16th century, early Lutheran Reformers - close colleagues and followers of Martin Luther - set in motion eight years of contact and correspondence with the (then) Ecumenical Patriarch, Jeremias II of Constantinople. The outcome might have changed the course of Christian history. Kevin Allen speaks with scholar Dr Paraskeve (Eve) Tibbs about this fascinating and largely unknown chapter in post-Reformation history.




forma

Steps to Transformation

In this first episode of 2016, host Kevin Allen speaks with Archimandrite and Abbot Sergius of the Monastery of Saint Tikhon of Zadonsk, the oldest Orthodox monastery in the U.S., about practical ways Christians can cooperate with the grace of God to "...be transformed ..." (Rom. 12:2) into the Likeness of Christ. Abbot Sergius is the author of the book "Acquiring the Mind of Christ: Embracing the Vision of the Orthodox Church" (St Tikhon Seminary Press). Father Sergius is the 16th Abbot of Saint Tikhon’s Monastery and Lecturer of Orthodox Spirituality at Saint Tikhon’s Orthodox Theological Seminary.




forma

Reward of £1,000 for information on wanted man

Ryan Ward, 31, is wanted in connection with incidents of assault and criminal damage.




forma

Why is Cromer closing its information centre?

The tourist information centre in Cromer has been vital for some, but is set to close.




forma

New entrepreneurship training courses for translators

 

I recently worked with CI3M to create a range of entrepreneurship training courses for professional translators who want to set up or grow their business.

 

Offered as three programmes, the training is:

  • taught remotely;
  • reimbursable by the FIF PL and other funding bodies; and
  • built around personalised weekly follow-up, so the content can be adapted to your needs.

TRAINING: BUSINESS CREATION AND DEVELOPMENT

This comprehensive 8-week programme is designed to help translators prepare, launch, manage and grow their professional activity. Through practical exercises and personalised coaching (8 hours of one-on-one follow-up by telephone), you will learn how to:

 

1. Take stock of your knowledge of the translation profession and assess how prepared you are to practise it as a freelancer, so you can better plan the creation of your business.

 

2. Promote and sell your services by defining your positioning, your pricing policy and your communication strategy, so you can prospect, sell and follow up with clients effectively.

 

3. Run a translation business, including complying with a code of professional ethics and with the many accounting, tax and legal obligations that come with self-employment. To succeed and increase your income, you will also learn how to steer your business and anticipate change so as to seize opportunities.


4. Step back and keep a healthy balance between your personal and professional life.

 

TRAINING: MARKETING

This focused module is aimed at translators who are already established, or at those who want to get straight to the essentials of finding and retaining clients.

 

After an initial assessment, which will define the scope of the market research needed to build a solid marketing strategy, three weeks of personalised follow-up will give you the methods and tools you need to sell more, and to sell better.

 

TRAINING: MANAGEMENT AND BUSINESS PLANNING

Another focused module, this course covers all the aspects freelance translators tend to neglect: administrative formalities, tax, accounting, social protection, business planning and growth strategy. In short, all the knowledge and skills you need to succeed as an entrepreneur.

 

Three weeks of personalised follow-up are devoted to answering your questions, supporting you through the formalities and providing tools to make day-to-day management easier and to secure your business's future.

 

REGISTRATION AND FUNDING

Drawing on a teaching team made up of expert translators, CI3M offers distance-learning professional training, some of it leading to a qualification, covering the full range of skills required to deliver translation and technical writing services. To find out more about the cost of these courses and how to register, contact CI3M on +33 (0)2 30 96 04 42.

 

Depending on your situation, some or all of the fees may be reimbursed or covered by the Fonds interprofessionnel de formation des professionnels libéraux (FIF PL), Pôle Emploi, your Compte personnel de formation (CPF), and so on.

 


About the author

An accredited international trade professional who spent several years advising SMEs, Gaële Gagné has been a freelance translator since 2005. At the helm of Trëma Translations, she translates from English into French and shares her marketing and business-management know-how with fellow translators through a blog called Mes petites affaires and training courses delivered via CI3M.



Read more articles:

A translation degree: essential or superfluous?
Invoicing properly so you get paid
Writing an effective translation quote
Translators: 3 ways to train without breaking the bank
New translators: 10 tips for getting off to a good start




forma

Understanding ESB Performance & Benchmarking

ESB performance is a hot (and disputed) topic. In this post I don't want to talk about different vendors or different benchmarks. I'm simply trying to help people understand some of the general aspects of benchmarking ESBs and what to look out for in the results.

The general ESB model is that you have some service consumer, an ESB in the middle and a service provider (target service) that the ESB is calling. To benchmark this, you usually have a load driver client, an ESB, and a dummy service.

+-------------+      +---------+      +---------------+
| Load Driver |------|   ESB   |------| Dummy Service |
+-------------+      +---------+      +---------------+

Firstly, we want the Load Driver (LD), the ESB and the Dummy Service (DS) to be on different hardware. Why? Because we want to understand the ESB performance, not the performance of the DS or LD.

The second thing to be aware of is that the performance results are completely dependent on the hardware, memory, network, etc. used. So never compare results obtained on different hardware.

Now there are three things we could look at:
A) Same LD, same DS, different vendors' ESBs doing the same thing (e.g. content-based routing)
B) Same LD, same DS, different ESB configs for the same ESB, doing different things (e.g. static routing vs content-based routing)
C) Going via ESB compared to going Direct (e.g. LD--->DS without ESB)

Each of these provides useful data but each also needs to be understood.

Metrics
Before looking at the scenarios, let's look at how to measure the performance. The two metrics that are always a starting point in any ESB benchmark are throughput (requests/second) and latency (how long each request takes). With latency we can consider the overall latency - the time taken for a completed request as observed at the LD - and the ESB latency, which is the time taken by the message in the ESB. The ESB latency can be hard to work out. A well-designed ESB will already be sending bytes to the DS before it's finished reading the bytes the LD has sent it. This is called pipelining. Some ESBs attempt to measure the ESB latency inside the ESB using clever calculations. Alternatively, scenario C (comparing via-ESB vs direct) can give an idea of the ESB latency.

But before we look at the metrics we need to understand the load driver.

There are two different models to doing Load Driving:
1) Do a realistic load test based on your requirements. For example if you know you want to support up to 50 concurrent clients each making a call every 5 seconds on average, you can simulate this.
2) Saturation! Have a large number of clients, each making a call as soon as the last one finishes.

The first one is aimed at testing what the ESB does before it's fully CPU loaded. In other words, if you are looking to see the effect of adding an ESB, or to compare one ESB to another under realistic load, then #1 is the right approach. In this approach, looking at throughput may not be useful, because all the different approaches have similar results. If I'm only putting in 300 requests a sec on a modern system, I'm likely to see 300 requests a sec. Nothing exciting. But the latency is revealing here. If one ESB responds in less time than another ESB, that's a very good sign, because with the same DS the average time per request is very telling.

On the other hand the saturation test is where the throughput is interesting. Before you look at the throughput though, check three things:
1) Is the LD CPU running close to 100%?
2) Is the DS CPU running close to 100%?
3) Is the network bandwidth running close to 100%?

If any of these are true, you aren't doing a good test of the ESB throughput, because if you are looking at throughput you want the ESB to be the bottleneck. If something else is the bottleneck then the ESB is not providing its max throughput and you aren't giving it a fair chance. For this reason, most benchmarks use a very lightweight LD or a clustered LD, and similarly use a DS that is superfast and not a realistic DS. Sometimes the DS is coded to do some real work or sleep the thread while it's executing, to provide a more realistic load test. In this case you probably want to look at latency more than throughput.
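To make the two load-driving models concrete, here is a minimal load-driver sketch in Python. It is an illustration only: the endpoint URL, client counts and think times are assumptions, not values from any real test. A non-zero think time between calls gives the realistic model (#1); a think time of zero gives the saturation model (#2).

# Minimal load-driver sketch. The endpoint and all numbers are illustrative.
import time
import threading
import urllib.request

ESB_URL = "http://esb-host:8280/service"  # hypothetical ESB endpoint

def run_client(duration_s, think_time_s, latencies):
    """One simulated client: send a request, record its latency, then 'think'."""
    end = time.time() + duration_s
    while time.time() < end:
        start = time.time()
        with urllib.request.urlopen(ESB_URL, data=b"<payload/>") as resp:
            resp.read()
        latencies.append(time.time() - start)  # list.append is atomic under the GIL
        if think_time_s:
            time.sleep(think_time_s)  # realistic model; set to 0 for saturation

def drive(clients, duration_s, think_time_s):
    """Run `clients` concurrent clients and report throughput and mean latency."""
    latencies = []
    threads = [threading.Thread(target=run_client,
                                args=(duration_s, think_time_s, latencies))
               for _ in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    throughput = len(latencies) / duration_s             # requests/second
    mean_latency_ms = 1000 * sum(latencies) / max(len(latencies), 1)
    return throughput, mean_latency_ms

# Model 1 (realistic): 50 clients, each calling roughly every 5 seconds.
print(drive(clients=50, duration_s=60, think_time_s=5))
# Model 2 (saturation): many clients, back-to-back calls.
print(drive(clients=200, duration_s=60, think_time_s=0))

In the saturation run, check that the LD, the DS and the network are not themselves maxed out, as described above, before trusting the throughput number.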

Finally you are looking to see a particular behaviour for throughput testing as you increase load.
[Figure: Throughput vs Load]
The shape of this graph shows an ideal scenario. As the LD puts more work through the ESB, throughput rises linearly. At some point the CPU of the ESB hits maximum, and then the throughput stabilizes. What we don't want to see is the line drooping at the far right. That would mean that the ESB is crumpling under the extra load and failing to manage it effectively. This is like the office worker whose efficiency increases as you give them more work, but eventually they start spending all their time re-organizing their todo lists and less work overall gets done.

Under the saturation test you really want to see the CPU of the ESB close to 100% utilised. Why? This is a sign that it's doing as much as possible. Why would it not be 100%? Three reasons: I/O, multi-processing and thread locks. Either the network card, disk or other I/O is holding it up, the code is not efficiently using the available cores, or there are thread contention issues.

Finally it's worth noting that you expect the latency to increase a lot under the saturation test. A classic result is this: I do static routing for different message sizes with a 100-client LD. For message sizes up to 100k, maybe I see a constant 2ms overhead for using the ESB. Suddenly, as the message size grows from 100k to 200k, I see the overhead growing in proportion to the message size.


Is this such a bad thing? No, in fact this is what you would expect. Below a 100k message size, the ESB is underloaded. The straight line up to this point is a great sign that the ESB is pipelining properly. Once the CPU becomes loaded, each request takes longer because it's being made to wait its turn at the ESB while the ESB deals with the increased load.

A big hint here: when you look at this graph, the most interesting latency numbers occur before the CPU is fully loaded. The latency after the CPU is fully loaded is not that interesting, because it's simply a function of the number of queued requests.

Now that we understand the metrics, let's look at the actual scenarios.

A. Different Vendors, Same Workload
For the first comparison (different vendors) the first thing to be careful of is that the scenario is implemented in the best way possible in each ESB. There are usually a number of ways of implementing the same scenario. For example, the same ESB may offer two (or more!) different HTTP transports: blocking vs non-blocking, servlet vs library, etc. There may be an optimum approach, and it's worth reading the docs and talking to the vendor to understand the performance tradeoffs of each approach.

Another thing to be careful of in this scenario is the tuning parameters. Each ESB has various tuning aspects that may affect the performance depending on the available hardware. For example, setting the number of threads and memory based on the number of cores and physical memory may make a big difference.

Once you have your results, assuming everything we've already looked at is tickety-boo, then both latency and throughput are interesting and valid comparisons here. 

B. Different Workloads, Same Vendor
What this is measuring is what it costs you to do different activities with the same ESB. For example, static routing is likely to be faster than content-based routing, which in turn is faster than a transformation. The data from this tells you the cost of doing different functions with the ESB. For example, you might want to do a security authentication/authorization check. You should see a constant bump in latency for the security check, irrespective of message size. But if you were doing a complex transformation, you would expect to see higher latency for larger messages, because they take more time to transform.

C. Direct vs ESB
This is an interesting one. Usually this is done for a simple static routing/passthrough scenario. In other words, we are testing the ESB doing its minimum possible. Why bother? Well there are two different reasons. Firstly ESB vendors usually do this for their own benefit as a baseline test. In other words, once you understand the passthrough performance you can then see the cost of doing more work (e.g. logging a header, validating security, transforming the message). 

Remember the two testing methodologies (realistic load vs saturation)? You will see very different results in each for this, and the data may seem surprising. For the realistic test, remember we want to look at latency. This is a good comparison for the ESB: how much extra time is spent going through the ESB per request under normal conditions? For example, if the average request direct to the backend takes 18ms and the average request via the ESB takes 19ms, we have an average ESB latency of 1ms. This is a good result - the client is not going to notice much difference - only about 6% extra.
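As a small sketch of how that comparison falls out of the measurements (the 18ms and 19ms figures simply reproduce the example above, and the helper names are hypothetical):

# Estimate ESB-added latency from scenario C: via-ESB minus direct.
def mean(values):
    return sum(values) / len(values)

def esb_latency_ms(direct_ms, via_esb_ms):
    """Approximate the ESB latency as the difference of the two averages."""
    return mean(via_esb_ms) - mean(direct_ms)

direct_ms = [18.0] * 100    # per-request latencies measured going direct
via_esb_ms = [19.0] * 100   # per-request latencies measured via the ESB

overhead = esb_latency_ms(direct_ms, via_esb_ms)
print(overhead, "ms ESB latency,", 100 * overhead / mean(direct_ms), "% extra")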

The saturation test here is a good test to compare different ESBs. For example, suppose I can get 5000 reqs/sec direct. Via ESB_A the number is 3000 reqs/sec and via ESB_B the number is 2000 reqs/sec, I can say that ESB_A is providing better throughput than ESB_B. 

What is not  a good metric here is comparing throughput in saturation mode for direct vs ESB. 


Why not? The reason here is a little complex to explain. Remember how we coded the DS to be as fast as possible so as not to be a bottleneck? So what is the DS doing? It's really just reading bytes and sending bytes as fast as it can. Assuming the DS code is written efficiently using something really fast (e.g. just a servlet), what this is testing is how fast the hardware (CPU plus network card) can read and write through user space in the operating system. On a modern server box you might get a very high number of transactions/sec. Maybe 5000 req/s, with each message in and out being 1k in size.

So we have 1k in and 1k out = 2k of IO per request.
2k IO x 5000 reqs/sec x 8 bits gives us a total network bandwidth of 80 Mbit/s (excluding Ethernet headers and overhead).

Now let's look at the ESB. Imagine it can handle 100% of the direct load, so there is no slowdown in throughput for the ESB. For each request it has to read the message in from the LD and send it out to the DS. Even if it's doing this in pipelining mode, there is still a CPU cost and an IO cost. So the ESB latency may be only 1ms, but the CPU and IO cost is much higher. Now, for each response it also has to read it in from the DS and write it out to the LD. So if the DS is doing 80 Mbit/s, the ESB must be doing 160 Mbit/s.
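A quick back-of-the-envelope check of those numbers (a sketch using the same 1k message size and 5000 reqs/sec as above, with 1k taken as 1000 bytes to match the round figures):

# Back-of-the-envelope bandwidth check for the direct (DS) and via-ESB cases.
msg_in_bytes = 1000    # 1k request (1k taken as 1000 bytes for round numbers)
msg_out_bytes = 1000   # 1k response
reqs_per_sec = 5000

# Direct: the DS reads the request and writes the response once per call.
ds_mbit_per_sec = (msg_in_bytes + msg_out_bytes) * reqs_per_sec * 8 / 1_000_000
print(ds_mbit_per_sec, "Mbit/s at the DS")    # 80.0

# Via ESB: the ESB reads and writes both the request and the response,
# so its network IO is double that of the DS at the same request rate.
esb_mbit_per_sec = 2 * ds_mbit_per_sec
print(esb_mbit_per_sec, "Mbit/s at the ESB")  # 160.0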


Now if the LD is good enough, it will have loaded the DS to the max. CPU or IO capacity or both will be maxed out. Suppose the ESB is running on the same hardware platform as the DS. If the DS machine can do 80Mbit/s flat out, there is no way that the same hardware running as an ESB can do 160Mbit/s! In fact, if the ESB and DS code are both as efficient as possible, then the throughput via ESB will always be 50% of the throughput direct to the DS. Now there is a possible way for the ESB to do better: it can be better coded than the DS. For example, if the ESB did transfers in kernel space instead of user space then it might make a difference. The real answer here is to look at the latency. What is the overhead of adding the ESB to each request. If the ESB latency is small, then we can solve this problem by clustering the ESB. In this case we would put two ESBs in and then get back to full throughput.

The real point of this discussion is that this is not a useful comparison. In reality, backend target services are usually pretty slow. If the same dual-core server is actually doing some real work - e.g. database lookups, calculations, business logic - then it's much more likely to be doing 500 requests a second or even less.

The following chart shows real data to demonstrate this. The X-axis shows increasing complexity of work at the backend (DS). As the effort taken by the backend becomes more realistic, the loss in throughput from having an ESB in the way shrinks. So with a blindingly fast backend, the ESB provides only about 55% of the throughput of the direct case. But as the backend becomes more realistic, we see much better numbers: at 2000 requests a second there is barely a difference (around a 10% reduction in throughput).


In real life, what we actually see is that often you have many fewer ESBs than backend servers. For example, if we took the scenario of a backend server that can handle 500 reqs/sec, then we might end up with a cluster of two ESBs handling a cluster of 8 backends. 

Conclusion
I hope this blog has given a good overview of ESB performance and benchmarking, in particular when it is a good idea to look at latency and when to use throughput.





forma

Wow! A search engine for scientific literature. An endless source of information

SciVerse


forma

Information Consolidation in Large Bodies of Information

Due to information technologies the problem we are facing today is not a lack of information but too much information. This phenomenon becomes very clear when we consider two figures that are often quoted: Knowledge is doubling in many fields (biology, medicine, computer science, ...) within some 6 years; yet information is doubling every 8 months! This implies that the same piece of information/knowledge is published a large number of times with small variations.

Just look at an arbitrary news item. If it is considered to be of some general interest, reports of it will appear in all major newspapers, journals, electronic media, etc. This is also the problem with information portals that tie together a number of large databases.

It is our contention that we need methods to reduce the huge set of information concerning a particular topic to a number of pieces of information (let us call each such piece an "essay" in what follows) that present a good cross-section of potential points of view. We will explain why one essay is usually not enough, yet the problem of reducing a huge amount of contributions to a digestible number of essays is formidable, indeed is science fiction at the moment. We will argue in this paper that it is one of the important tasks of computer sciences to start tackling this problem, and we will show that in some special cases partial solutions are possible.




forma

An Empirical Study on Human and Information Technology Aspects in Collaborative Enterprise Networks

Small and Medium Enterprises (SMEs) face new challenges in the global market as customers require more complete and flexible solutions and continue to drastically reduce the number of suppliers. SMEs are trying to address these challenges through cooperation within collaborative enterprise networks (CENs). Human aspects constitute a fundamental issue in these networks as people, as opposed to organizations or Information Technology (IT) systems, cooperate. Since there is a lack of empirical studies on the role of human factors in IT-supported collaborative enterprise networks, this paper addresses the major human aspects encountered in this type of organization. These human aspects include trust issues, knowledge and know-how sharing, coordination and planning activities, and communication and mutual understanding, as well as their influence on the business processes of CENs supported by IT tools. This paper empirically proves that these aspects constitute key factors for the success or the failure of CENs. Two case studies performed on two different CENs in Switzerland are presented and the roles of human factors are identified with respect to the IT support systems. Results show that specific human factors, namely trust and communication and mutual understanding have to be well addressed in order to design and develop adequate software solutions for CENs.




forma

Coordinated System for Real Time Muscle Deformation during Locomotion

This paper presents a system that simulates, in real time, the volumetric deformation of muscles during human locomotion. We propose a two-layered motion model. The requirements of realism and real-time computation lead to a hybrid locomotion system that uses a skeleton as the first layer. The muscles, represented by an anatomical surface model, constitute the second layer, whose deformations are simulated with a finite element method (FEM). The FEM subsystem is fed by the torques and forces obtained from the locomotion system through a line-of-action model, and takes into account the geometry and material properties of the muscles. High-level parameters (such as height, weight, physical constitution, step frequency, step length or speed) make it possible to customize the individuals and the locomotion and, therefore, the deformation of the persons' muscles.




forma

Context-Aware Composition and Adaptation based on Model Transformation

Using pre-existing software components (COTS) to develop software systems requires the composition and adaptation of the component interfaces to solve mismatch problems. These mismatches may appear at different interoperability levels (signature, behavioural, quality of service and semantic). In this article, we define an approach which supports composition and adaptation of software components based on model transformation, taking all four levels into account. The signature and behavioural levels are addressed by means of transition systems. Context-awareness and semantic-based techniques are used to tackle the quality-of-service and semantic levels, respectively, but both also consider the signature level. We have implemented and validated our proposal for the design and application of realistic and complex systems. Here, we illustrate the need to support the variability of the adaptation process in a context-aware pervasive system through a real-world case study, where software components are implemented using Windows Workflow Foundation (WF). We apply our model transformation process to extract transition systems (CA-STS specifications) from WF components. These CA-STSs are used to tackle the composition and adaptation. Then, we generate a CA-STS adaptor specification, which is transformed into its corresponding WF adaptor component with the purpose of interacting with all the WF components of the system, thereby avoiding mismatch problems.




forma

A Framework to Evaluate Interface Suitability for a Given Scenario of Textual Information Retrieval

Visualization of search results is an essential step in the textual Information Retrieval (IR) process. Indeed, Information Retrieval Interfaces (IRIs) are used as a link between users and IR systems, a simple example being the ranked list proposed by common search engines. Given the importance of visualizing search results, many interfaces (textual, 2D or 3D IRIs) have been proposed in the last decade. Two kinds of evaluation methods have been developed: (1) various evaluation methods of these interfaces were proposed aiming at validating ergonomic and cognitive aspects; (2) various evaluation methods were applied to information retrieval systems (IRS) aiming at measuring their effectiveness. However, as far as we know, these two kinds of evaluation methods are disjoint. Indeed, considering a given IRI associated with a given IRS, what happens if we associate this IRI with another IRS that does not have the same effectiveness? In this context, we propose an IRI evaluation framework aimed at evaluating the suitability of any IRI to different IR scenarios. First of all, we define the notion of an IR scenario as a combination of features related to users, IR tasks and IR systems. We have implemented the framework through a specific evaluation platform that enables performing IRI evaluations and that helps end-users (e.g. IRS developers or IRI designers) in choosing the most suitable IRI for a specific IR scenario.




forma

An Ontology based Agent Generation for Information Retrieval on Cloud Environment

Retrieving information or discovering knowledge from a well-organized data center generally requires familiarity with its schema, structure, and architecture, which runs against the inherent concept and characteristics of a cloud environment. An effective approach to retrieve desired information or to extract useful knowledge is therefore an important issue in the emerging information/knowledge cloud. In this paper, we propose an ontology-based agent generation framework for information retrieval in a flexible, transparent, and easy way on a cloud environment. When a user submits a flat-text-based request for retrieving information on a cloud environment, the request is automatically deduced by a Reasoning Agent (RA) based on a predefined ontology and reasoning rules, and then translated into a Mobile Information Retrieving Agent Description File (MIRADF) that is formatted in a proposed Mobile Agent Description Language (MADF). A generating agent, named MIRA-GA, is also implemented to generate a MIRA according to the MIRADF. We also design and implement a prototype to integrate these agents and show an interesting example to demonstrate the feasibility of the architecture.





forma

Online Journal of Nursing Informatics Archive

Online journal dedicated to nursing informatics




forma

Ascendancy of SNS information and age difference on intention to buy eco-friendly offerings: meaningful insights for e-tailers

Through the unparalleled espousal of the theory of planned behaviour, this study intends to significantly add to the current knowledge on social networking sites (SNS) in eWOM information and its role in defining intentions to buy green products. Specifically, this study seeks first to investigate the part played by attitude towards SNS information in influencing the acceptance of SNS information, and then by acceptance of SNS information in effecting the green purchase intention. Besides this, it also aims to analyse the influence exerted first by credibility of SNS information on acceptance of SNS information, and then by acceptance of SNS information on green purchase intention. In doing so, it also examines how well the age of the SNS users moderates all four of these associations.




forma

International Journal of Vehicle Information and Communication Systems




forma

International Journal of Information and Computer Security




forma

The discussion of information security risk control in mobile banking

The emergence of digital technology and the increasing prevalence of smartphones have promoted innovations in the payment options available in finance and consumption markets. Banks providing mobile payment must ensure information security. Inadequate security control leads to information leakage, which severely affects user rights and service providers' reputations. This study uses Control Objectives for Information and Related Technologies (COBIT) 4.1 as the mobile payment security control framework to examine the emergent field of mobile payment. A literature review is performed to compile studies on the safety risks, regulations, and operations of mobile payments. In addition, a Delphi questionnaire is distributed among experts to determine the practical perspectives, supplement research gaps in the literature, and revise the prototype framework. According to the experts' opinions, 59 control objectives from the four domains of COBIT 4.1 are selected. The plan-and-organise, acquire-and-implement, deliver-and-support, and monitor-and-evaluate domains contribute 2, 5, 10, and 2 control objectives, respectively, with mean importance scores above 4.50; the experts consider these the most important objectives. The results of this study can serve as a reference for banks constructing secure frameworks for mobile payment services.




forma

Enhanced TCP BBR performance in wireless mesh networks (WMNs) and next-generation high-speed 5G networks

TCP BBR is one of the most powerful congestion control algorithms. In this article, we provide a comprehensive review of BBR analysis, expanding on existing knowledge across various fronts. Utilising ns3 simulations, we evaluate BBR's performance under diverse conditions, generating graphical representations. Our findings reveal flaws in the ProbeRTT phase duration estimation and unequal bandwidth sharing between the BBR and CUBIC protocols. Specifically, we demonstrate that the ProbeRTT phase duration estimation algorithm is flawed and that BBR and CUBIC generally do not share bandwidth equally. Towards the end of the article, we propose a new, improved version of TCP BBR which minimises these problems of inequity in bandwidth sharing and corrects the inaccuracies of the two key parameters, RTprop and cwnd. Consequently, the BBR' protocol maintains very good fairness with the CUBIC protocol, with a fairness index of almost 0.98 and an equity index above 0.95.




forma

Design of intelligent financial sharing platform driven by consensus mechanism under mobile edge computing and accounting transformation

The intelligent financial sharing platform in the online realm is capable of collecting, storing, processing, analysing and sharing financial data through the integration of AI and big data processing technologies. However, as data volume grows exponentially, the cost of storing and processing financial data increases, and the asset accounting and financial profit data-sharing analysis structure in financial sharing platforms is inadequate. To address the issue of secure data sharing in the intelligent financial digital sharing platform, this paper proposes a data-sharing framework based on blockchain and edge computing. Building upon this framework, a non-separable task distribution algorithm based on data sharing is developed, which employs multiple nodes for cooperative data storage, reducing the pressure on the central server for data storage and solving the problem of non-separable task distribution. Multiple sets of comparative experiments confirm that the proposed scheme is feasible in improving algorithm performance and reducing energy consumption and latency.




forma

Design of an intelligent financial sharing platform driven by digital economy and its role in optimising accounting transformation production

With the expansion of business scope, the environment faced by enterprises has also changed, and competition is becoming increasingly fierce. Traditional financial systems find it increasingly difficult to handle complex tasks and predict potential financial risks. In the context of the digital economy era, booming financial sharing services have reduced labour costs and improved operational efficiency. This paper designs and implements an intelligent financial sharing platform, establishes a fund payment risk early-warning model based on an improved support vector machine algorithm, and tests it on the Financial Distress Prediction dataset. The experimental results show that the F2 score and AUC reach 0.9484 and 0.9023, respectively. After using this system, the average financial processing time per order decreases by 43%, and the overall financial processing time decreases by 27%. Finally, this paper discusses the role of the intelligent financial sharing platform in accounting transformation and the optimisation of production.




forma

International Journal of Data Mining and Bioinformatics




forma

Digital transformation in universities: models, frameworks and road map

Digital Transformation seeks to improve the processes of an organisation by integrating digital technology into all of its areas. This is inevitable: technological evolution generates new demands, new habits and greater expectations from customers and users, so Digital Transformation is important for organisations to maintain competitiveness. In this context, universities are no strangers to this reality, but they run into serious problems in its execution; it is not clear how to approach an implementation of this type. This work seeks to identify tools that can be used in the implementation of Digital Transformation in universities. To that end, a systematic literature review is carried out with a method based on three stages, and 23 models, 13 frameworks and 8 roadmaps are identified. The elements found are analysed, yielding eight main components with their relationships and dependencies, which can be used to generate more suitable models for universities.




forma

Fostering innovative work behaviour in Indian IT firms: the mediating influence of employee psychological capital in the context of transformational leadership

This empirical study investigates the mediating role of two components of psychological capital (PsyCap), namely self-efficacy and optimism, in the relationship between transformational leadership (TL), work engagement (WE), and innovative work behaviour (IWB). The study was conducted among IT professionals with a minimum of three years of experience employed in Chennai, India. Data collection was executed using a Google Form, and both the measurement and structural models were examined using SPSS 25.0 and AMOS 23.0. The findings of this study reveal several significant relationships. Firstly, transformational leadership (TL) demonstrates a robust positive association with work engagement (WE). Furthermore, work engagement (WE) correlates positively and substantially with innovative work behaviour (IWB). Notably, the study underscores that two crucial components of psychological capital, specifically self-efficacy and optimism, mediate the relationship between transformational leadership (TL) and work engagement (WE). These findings carry valuable implications for IT company managers. Recognising that transformational leadership positively influences both work engagement and employees' innovative work behaviour highlights the pivotal role of leaders in fostering a productive and innovative work environment within IT organisations.




forma

Applying a multiplex network perspective to understand performance in software development

A number of studies have applied social network analysis (SNA) to show that the patterns of social interaction between software developers explain important organisational outcomes. However, these insights are based on a single network relation (i.e., uniplex social ties) between software developers and do not consider the multiple network relations (i.e., multiplex social ties) that truly exist among project members. This study reassesses the understanding of software developer networks and what it means for performance in software development settings. A systematic review of SNA studies between 1990 and 2020 across six digital libraries within the IS and management science domain was conducted. The central contributions of this paper are an in-depth overview of SNA studies to date and the establishment of a research agenda to advance our knowledge of the concept of multiplexity on how a multiplex perspective can contribute to a software developer's coordination of tasks and performance advantages.




forma

International Journal of Business Information Systems




forma

The role of pre-formation intangible assets in the endowment of science-based university spin-offs

Science-based university spin-offs face considerable technology and market uncertainty over extended periods of time, increasing the challenges of commercialisation. Scientist-entrepreneurs can play formative roles in commercialising lab-based scientific inventions through the formation of well-endowed university spin-offs. Through case study analysis of three science-based university spin-offs within a biotechnology innovation ecosystem, we unpack the impact of pre-formation intangible assets of academic scientists (research excellence, patenting, and international networks) and their entrepreneurial capabilities on spin-off performance. We find evidence that the pre-formation entrepreneurial capabilities of academic scientists can endow science-based university spin-offs by leveraging the scientists' pre-formation intangible assets. A theory-driven model depicting the role of pre-formation intangible assets and entrepreneurial capabilities in endowing science-based university spin-offs is developed. Recommendations are provided for scholars, practitioners, and policymakers to more effectively commercialise high-potential inventions in the university lab through the development and deployment of pre-formation intangible assets and entrepreneurial capabilities.




forma

Does smartphone usage affect academic performance during COVID outbreak?

The pandemic has compelled the entire world to change its way of life and work. To control the infection rate, academic institutions have likewise delivered education online. At least one smartphone is available in every home, and students use their smartphones to attend class. This study investigates the link between smartphone usage (SU) and academic performance (AP) during the pandemic. Data from 490 undergraduate students at various institutions were obtained using stratified random sampling. The relevant components were identified using factor analysis and descriptive methods, while the relationship between SU and AP across gender was tested using SmartPLS-SEM. The findings show that SU has a substantial relationship with academic success, whether it occurs in class or outside of it. Even so, the study found that SU and AP significantly impact both male and female students. Furthermore, the research focuses on SU outside and within the classroom to improve students' AP.