
World champion Payne aims to boost sidecar profile

World sidecar champion Harry Payne plans to also compete in the British Championship next year to help boost his profile.





Tributes paid to Wolverhampton's Liam Payne

BBC Radio WM listeners share their memories and thoughts following the star's death.





Fast & Furious star to turn on Hay's Christmas Lights

Hollywood star Luke Evans, who has also appeared in The Hobbit, is coming to Herefordshire.





Schools launch road safety campaign

The campaign aims to increase speed awareness and improve road safety around schools.





Paralympian says party snub example of inequality

The high-end store has apologised for not hosting Paralympic athletes alongside Team GB Olympians.





Gurners grimace in championship face-off

Gurners sign up on the night, then it's hideous faces galore at the World Gurning Championships.





Campbell's Bluebird to have engines refurbished

A team of engineers are checking the engines so the hydroplane can return to the water.





War memorial moved as part of square revamp

The memorial is moved to a prominent location after consultation with the Royal British Legion.





Poole Pirates crowned speedway champions

The Poole Pirates win speedway's SGB Championship after beating the Oxford Cheetahs.





Coventry Rugby captain Jordon Poole on perfect start to the Championship

BBC CWR's Clive Eakin chats to the 27-year-old ahead of this weekend's game against Caldy.





Lampard confirmed as contender for Coventry job

Coventry owner Doug King confirms that ex-Chelsea and England great Frank Lampard is among the contenders for the Sky Blues job.





Driving champion, 20, eyes future success

Nicky Taylor has won the GB Clio Cup and wants to step up to the British Touring Cars Championship.





Back-row stars, a Puma sensation & more Premiership talking points

The back-row contenders come front and centre, Harlequins have a new Puma on the loose and more Premiership talking points





Woman, 70, wins triathlon world championship title

Judy Orme won the 2024 World Triathlon Championships in her 70-74 age group category in Spain.





Newspapers & trending: Stars return to city centre

A look at what stories are trending across the West of England on 13 November 2024.






Every way Verstappen can clinch the championship at the Las Vegas Grand Prix | Formula 1

Max Verstappen is poised to clinch the 2024 drivers' championship if he finishes ahead of Lando Norris one more time. Here's how he can seal a fourth title at the next race.






Six ways to analyze campaign ideas

Know before you vote





Biden followed FDR's lead in tampering with SCOTUS

This isn’t the first time a president has claimed democracy was ‘under attack’





In Japan the streets aren't paved with gold either: the state of Japanese court interpreting

In today's post I want to highlight an article by Takahata Sachi titled "Poor conditions are discouraging court interpreters", published on Nippon.com, which deals with the situation of court interpreters in Japan. The article can be read in Spanish, so there is no reason to suffer […]





A translator at WordCamp Valladolid

I've had time to watch some of the WordCamp Valladolid 2020 talks that I missed at the time. Something good had to […]





Understanding ESB Performance & Benchmarking

ESB performance is a hot (and disputed) topic. In this post I don't want to talk about different vendors or different benchmarks. I'm simply trying to help people understand some of the general aspects of benchmarking ESBs and what to look out for in the results.

The general ESB model is that you have some service consumer, an ESB in the middle and a service provider (target service) that the ESB is calling. To benchmark this, you usually have a load driver client, an ESB, and a dummy service.

+-------------+      +---------+      +---------------+
| Load Driver |------|   ESB   |------| Dummy Service |
+-------------+      +---------+      +---------------+
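A dummy service really can be trivial. As a minimal sketch (my own illustration using Python's standard library, not taken from any particular benchmark suite), an echo handler is enough:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class DummyService(BaseHTTPRequestHandler):
    """Echoes the request body straight back - no real work, so the DS
    is never the interesting part of the benchmark."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)                  # bytes in
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                          # same bytes out

    def log_message(self, *args):
        pass  # keep logging off the hot path so it doesn't skew results

# To run it standalone:
#   HTTPServer(("0.0.0.0", 8080), DummyService).serve_forever()
```

Anything faster (or slower, to simulate real work) changes what the benchmark is actually measuring, as discussed below.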

Firstly, we want the Load Driver (LD), the ESB and the Dummy Service (DS) to be on different hardware. Why? Because we want to understand the ESB performance, not the performance of the DS or LD.

The second thing to be aware of is that the performance results are completely dependent on the hardware, memory, network, etc used. So never compare different results from different hardware.

Now there are three things we could look at:
A) Same LD, same DS, different vendors' ESBs doing the same thing (e.g. content-based routing)
B) Same LD, same DS, different ESB configs for the same ESB, doing different things (e.g. static routing vs content-based routing)
C) Going via ESB compared to going Direct (e.g. LD--->DS without ESB)

Each of these provides useful data but each also needs to be understood.

Metrics
Before looking at the scenarios, let's look at how to measure the performance. The two metrics that are always a starting point in any ESB benchmark are throughput (requests/second) and latency (how long each request takes). With latency we can consider the overall latency - the time taken for a completed request as observed at the LD - and the ESB latency, which is the time the message spends in the ESB. The ESB latency can be hard to work out: a well-designed ESB will already be sending bytes to the DS before it has finished reading the bytes the LD sent it. This is called pipelining. Some ESBs attempt to measure the ESB latency internally using clever calculations. Alternatively, scenario C (comparing via-ESB vs direct) can give an idea of the ESB latency.
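Both metrics fall straight out of the raw per-request timings. A hypothetical helper (the function and field names are mine, not from any benchmark tool) might look like:

```python
def summarize(latencies_ms, wall_clock_s):
    """Reduce raw per-request timings to the headline benchmark metrics.

    latencies_ms : per-request latency observed at the LD, in milliseconds
    wall_clock_s : total duration of the test run, in seconds
    """
    n = len(latencies_ms)
    ordered = sorted(latencies_ms)
    return {
        "throughput_rps": n / wall_clock_s,                    # requests/second
        "latency_avg_ms": sum(ordered) / n,                    # mean overall latency
        "latency_p99_ms": ordered[min(n - 1, int(0.99 * n))],  # tail latency
    }

# 1,000 requests finishing in 2 seconds of wall clock -> 500 req/s;
# a 1% tail of slow (20 ms) requests shows up in the p99, not the mean.
stats = summarize([2.0] * 990 + [20.0] * 10, wall_clock_s=2.0)
```

Reporting a tail percentile alongside the mean is a common practice worth adopting, since a few very slow requests can hide behind a healthy-looking average.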

But before we look at the metrics we need to understand the load driver.

There are two different models to doing Load Driving:
1) Do a realistic load test based on your requirements. For example if you know you want to support up to 50 concurrent clients each making a call every 5 seconds on average, you can simulate this.
2) Saturation! Have a large number of clients, each making a call as soon as the last one finishes.
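The two models are easy to tell apart in code. A minimal sketch, assuming a `do_request` callable that stands in for whatever HTTP call the LD actually makes:

```python
import time

def paced_client(do_request, interval_s, n_requests):
    """Model 1: realistic load - one call every interval_s on average,
    regardless of how quickly the responses come back."""
    for _ in range(n_requests):
        start = time.monotonic()
        do_request()
        # Sleep off the remainder of the interval so the request *rate*
        # stays fixed even when responses are fast.
        time.sleep(max(0.0, interval_s - (time.monotonic() - start)))

def saturating_client(do_request, n_requests):
    """Model 2: saturation - fire the next call the moment the last returns."""
    for _ in range(n_requests):
        do_request()
```

Fifty threads running `paced_client` with `interval_s=5` gives the "50 concurrent clients, one call every 5 seconds" scenario; a large number of threads running `saturating_client` gives the saturation test.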

The first one is aimed at testing what the ESB does before it is fully CPU-loaded. In other words, if you are looking to see the effect of adding an ESB, or to compare one ESB with another under realistic load, then #1 is the right approach. In this approach, looking at throughput may not be useful, because all the different approaches give similar results: if I'm only putting in 300 requests a second on a modern system, I'm likely to see 300 requests a second. Nothing exciting. But the latency is revealing here. If one ESB responds in less time than another, that's a very good sign, because with the same DS the average time per request is very telling.

On the other hand the saturation test is where the throughput is interesting. Before you look at the throughput though, check three things:
1) Is the LD CPU running close to 100%?
2) Is the DS CPU running close to 100%?
3) Is the network bandwidth running close to 100%?

If any of these are true, you aren't doing a good test of the ESB throughput, because if you are looking at throughput you want the ESB to be the bottleneck. If something else is the bottleneck, then the ESB is not providing its maximum throughput and you aren't giving it a fair chance. For this reason, most benchmarks use a very lightweight or clustered LD, and similarly a DS that is super-fast rather than realistic. Sometimes the DS is coded to do some real work, or to sleep the thread while executing, to provide a more realistic load test. In that case you probably want to look at latency more than throughput.

Finally, for throughput testing you are looking for a particular behaviour as you increase the load.
[Figure: Throughput vs Load]
The shape of this graph shows an ideal scenario. As the LD puts more work through the ESB, throughput responds linearly. At some point the CPU of the ESB hits maximum, and then the throughput stabilizes. What we don't want to see is the line drooping at the far right. That would mean the ESB is crumpling under the extra load and failing to manage it effectively. This is like the office worker whose output increases as you give them more work, until eventually they spend all their time re-organizing their to-do lists and less work overall gets done.

Under the saturation test you really want to see the CPU of the ESB close to 100% utilised. Why? This is a sign that it is doing as much as possible. Why would it not be 100%? Three reasons: I/O, multi-processing, and thread locks - either the network card, disk or other I/O is holding it up, the code is not efficiently using the available cores, or there are thread-contention issues.

Finally, it's worth noting that you expect the latency to increase a lot under the saturation test. A classic result is this: I do static routing for different message sizes with a 100-client LD. For message sizes up to 100k, maybe I see a constant 2ms overhead for using the ESB. Then, as the message size grows from 100k to 200k, I see the overhead growing in proportion to the message size.


Is this such a bad thing? No, in fact this is what you would expect. Below a 100k message size, the ESB is underloaded. The straight line up to this point is a great sign that the ESB is pipelining properly. Once the CPU becomes loaded, each request takes longer because it is made to wait its turn while the ESB deals with the increased load.

A big hint here: when you look at this graph, the most interesting latency numbers occur before the CPU is fully loaded. The latency after the CPU is fully loaded is not that interesting, because it is simply a function of the number of queued requests.

Now that we understand the metrics, let's look at the actual scenarios.

A. Different Vendors, Same Workload
For the first comparison (different vendors), the first thing to be careful of is that the scenario is implemented in the best way possible in each ESB. There are usually several ways of implementing the same scenario. For example, the same ESB may offer two (or more!) different HTTP transports: blocking vs non-blocking, servlet vs library, etc. There may be an optimum approach, and it's worth reading the docs and talking to the vendor to understand the performance trade-offs of each.

Another thing to be careful of in this scenario is the tuning parameters. Each ESB has various tuning aspects that may affect the performance depending on the available hardware. For example, setting the number of threads and memory based on the number of cores and physical memory may make a big difference.

Once you have your results, assuming everything we've already looked at is tickety-boo, then both latency and throughput are interesting and valid comparisons here. 

B. Different Workloads, Same Vendor
What this measures is what it costs you to do different activities with the same ESB. For example, static routing is likely to be faster than content-based routing, which in turn is faster than a transformation. The data from this tells you the cost of doing different functions with the ESB. For example, you might want to do a security authentication/authorization check: you should see a constant bump in latency for the security check, irrespective of message size. But if you were doing a complex transformation, you would expect higher latency for larger messages, because they take more time to transform.

C. Direct vs ESB
This is an interesting one. Usually this is done for a simple static routing/passthrough scenario. In other words, we are testing the ESB doing its minimum possible. Why bother? Well, there are two different reasons. Firstly, ESB vendors usually do this for their own benefit as a baseline test: once you understand the passthrough performance, you can then see the cost of doing more work (e.g. logging a header, validating security, transforming the message). Secondly, users want to understand the overhead of putting an ESB in front of an existing, directly-called service.

Remember the two testing methodologies (realistic load vs saturation)? You will see very different results in each for this scenario, and the data may seem surprising. For the realistic test, remember we want to look at latency. This is a good comparison for the ESB: how much extra time is spent going through the ESB per request under normal conditions? For example, if the average request direct to the backend takes 18ms and the average request via the ESB takes 19ms, we have an average ESB latency of 1ms. This is a good result - the client is not going to notice much difference, at roughly 5% extra.
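Put as arithmetic (a back-of-the-envelope sketch using the figures from the example above):

```python
direct_ms = 18.0    # average request straight to the backend
via_esb_ms = 19.0   # average request through the ESB

esb_latency_ms = via_esb_ms - direct_ms             # 1 ms spent in the ESB
overhead_pct = 100.0 * esb_latency_ms / direct_ms   # ~5.6% extra per request
```

The same two-line calculation works for any pair of direct and via-ESB latency measurements taken under realistic load.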

The saturation test here is a good test for comparing different ESBs. For example, suppose I can get 5000 reqs/sec direct; via ESB_A the number is 3000 reqs/sec, and via ESB_B it is 2000 reqs/sec. I can then say that ESB_A provides better throughput than ESB_B.

What is not a good metric here is comparing throughput in saturation mode for direct vs ESB.


Why not? The reason is a little complex to explain. Remember how we coded the DS to be as fast as possible so as not to be a bottleneck? So what is the DS doing? It is really just reading bytes and sending bytes as fast as it can. Assuming the DS code is written efficiently using something really fast (e.g. just a servlet), what this is testing is how fast the hardware (CPU plus network card) can read and write through user space in the operating system. On a modern server box you might get a very high number of transactions per second - maybe 5000 req/s with each message in and out being 1k in size.

So we have 1k in and 1k out = 2k of IO per request.
2k IO x 5000 reqs/sec x 8 bits/byte gives a total network bandwidth of 80Mbit/s (excluding ethernet headers and overhead).
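Writing that back-of-the-envelope calculation out (taking 1k = 1,000 bytes for round numbers, as the post does):

```python
msg_in_bytes = 1_000   # 1k request
msg_out_bytes = 1_000  # 1k response
reqs_per_sec = 5_000

io_per_req = msg_in_bytes + msg_out_bytes     # 2k of IO per request
bits_per_sec = io_per_req * reqs_per_sec * 8  # bytes -> bits
ds_mbit_per_s = bits_per_sec / 1_000_000      # bandwidth seen at the DS

# The ESB sees every byte twice (LD<->ESB and ESB<->DS),
# so to keep up it has to move twice the DS bandwidth.
esb_mbit_per_s = 2 * ds_mbit_per_s
```

The doubling in the last line is exactly the point the next paragraphs make: a passthrough ESB on the same hardware as the DS cannot match the direct throughput.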

Now let's look at the ESB. Imagine it can handle 100% of the direct load, with no slowdown in throughput. For each request it has to read the message in from the LD and send it out to the DS. Even if it is doing this in pipelining mode, there is still a CPU cost and an IO cost. So the ESB latency may be only 1ms, but the CPU and IO cost is much higher. For each response it also has to read the message in from the DS and write it out to the LD. So if the DS is doing 80Mbit/s, the ESB must be doing 160Mbit/s.

Here is a picture.

Now if the LD is good enough, it will have loaded the DS to the max: CPU or IO capacity, or both, will be maxed out. Suppose the ESB is running on the same hardware platform as the DS. If the DS machine can do 80Mbit/s flat out, there is no way that the same hardware running as an ESB can do 160Mbit/s! In fact, if the ESB and DS code are both as efficient as possible, the throughput via the ESB will always be 50% of the throughput direct to the DS. There is one possible way for the ESB to do better: it can be better coded than the DS. For example, if the ESB did transfers in kernel space instead of user space, that might make a difference. The real answer here is to look at the latency: what is the overhead of adding the ESB to each request? If the ESB latency is small, then we can solve this problem by clustering the ESB - put two ESBs in and we get back to full throughput.

The real point of this discussion is that this is not a useful comparison. In reality, backend target services are usually pretty slow. If the same dual-core server is actually doing some real work - e.g. database lookups, calculations, business logic - then it is much more likely to be doing 500 requests a second or even less.

The following chart shows real data to demonstrate this. The X-axis shows increasing complexity of work at the backend (DS). As the effort taken by the backend becomes more realistic, the throughput lost by having an ESB in the way shrinks. With a blindingly fast backend, the ESB struggles to provide even 55% of the direct throughput; but as the backend becomes more realistic, the numbers improve. At 2000 requests a second there is barely a difference (around a 10% reduction in throughput).


In real life, what we actually see is that often you have many fewer ESBs than backend servers. For example, if we took the scenario of a backend server that can handle 500 reqs/sec, then we might end up with a cluster of two ESBs handling a cluster of 8 backends. 

Conclusion
I hope this blog has given a good overview of ESB performance and benchmarking - in particular, when it is a good idea to look at latency and when to use throughput.






Gaza: at least three dead after an Israeli strike on the Nouseirat refugee camp





Lessons learned from a corporate boycott campaign in Morocco

On 20 April 2018, a call for a boycott was launched on Moroccan social media against three companies, each a leader in its sector. Sidi Ali mineral water, Centrale Danone milk and Afriquia petrol stations fell victim to an information war, justified by internet users on the grounds of high retail prices. The calls for a boycott were relayed by Moroccan internet users via groups and pages ...





Parliamentary days, regional campuses: La République en marche to spend nearly a million euros on its political season opener

It's the start of the political season, and the new season means a summer school. This year, La République en marche has gone big, organising both parliamentary days...





Access controllable multi-blockchain platform for enterprise R&D data management

In the era of big data, enterprises have accumulated large amounts of research and development data. Effectively managing this accumulated data and sharing it securely can improve the collaboration efficiency of R&D personnel, which has become a top priority for enterprise development. This paper proposes using blockchain technology to support the collaboration of enterprise R&D personnel. Firstly, a multi-chain blockchain platform is used to realise data sharing among the enterprise R&D department's internal data, project-internal data, and the enterprise data centre, and the process of constructing the multi-chain structure and sharing data is then analysed. Finally, searchable encryption is introduced to achieve data retrieval and secure sharing, improving the collaboration efficiency of enterprise R&D personnel and maximising the value of data assets. Experimental verification shows that the multi-chain structure improves researchers' collaboration efficiency and secure data sharing.





Learning & Personality Types: A Case Study of a Software Design Course





First Year Engagement & Retention: A Goal-Setting Approach





Incorporating Kinesthetic Learning into University Classrooms: An Example from Management Information Systems

Aim/Purpose: Students tend to learn best when an array of learning styles is used by instructors. The purpose of this paper is to introduce and to apply the concepts of kinesthetic learning and learning structures to university and STEM education. Background: The study applies the concept of kinesthetic learning and a learning structure called Think-Pair-Share to an experiential exercise about Moore’s Law in an introductory MIS classroom. The paper details the exercise and each of its components. Methodology: Students in two classes were asked to complete a short survey about their conceptual understanding of the course material before and after the experiential exercise. Contribution: The paper details the benefits of kinesthetic learning and learning structures and discusses how to apply these concepts through an experiential exercise used in an introductory MIS course. Findings: Results indicate that the kinesthetic learning activity had a positive impact on student learning outcomes. Recommendations for Practitioners: University educators can use this example to structure several other learning activities that apply kinesthetic learning principles. Recommendation for Researchers: Researchers can use this paper to study more about how to incorporate kinesthetic learning into education, and about teaching technology concepts to undergraduate students through kinesthetic learning. Impact on Society: The results of this study may be extremely beneficial for the university and STEM community and overall academic business community. Future Research: Researchers should consider longitudinal studies and other ways to incorporate kinesthetic learning activities into education.





M-Learning Management Tool Development in Campus-Wide Environment





Virtual Medical Campus (VMC) Graz: Innovative Curriculum meets Innovative Learning Objects Technology





The Discovery Camp: A Talent Fostering Initiative for Developing Research Capabilities among Undergraduate Students





The Need for and Contents of a Course in Forensic Information Systems & Computer Science at the University of Cape Town





A Data Driven Conceptual Analysis of Globalization — Cultural Affects and Hofstedian Organizational Frames: The Slovak Republic Example





The Coordination between Faculty and Technical Support Staff in Updating Computer Technology Courses – A Case Example





Virtual Campuses, Groupware and University Evolution





Dealing with Student Disruptive Behavior in the Classroom – A Case Example of the Coordination between Faculty and Assistant Dean for Academics





Campus Event App - New Exploration for Mobile Augmented Reality





Self-efficacy, Challenge, Threat and Motivation in Virtual and Blended Courses on Multicultural Campuses

Aim/Purpose: The aim of this study was to examine the sense of challenge and threat, negative feelings, self-efficacy, and motivation among students in a virtual and a blended course on multicultural campuses and to see how to afford every student an equal opportunity to succeed in academic studies. Background: Most academic campuses in Israel are multicultural, with a diverse student body. The campuses strive to provide students from all sectors, regardless of nationality, religion, etc., the possibility of enjoying academic studies and completing them successfully. Methodology: This is a mixed-method study with a sample of 484 students belonging to three sectors: general Jewish, ultra-orthodox Jewish, and Arab. Contribution: This study’s findings might help faculty on multicultural campuses to advance all students and enable them equal opportunity to succeed in academic studies. Findings: Significant sectorial differences were found for the sense of challenge and threat, negative feelings, and motivation. We found that the sense of challenge and level of motivation among Arab students was higher than among the ultra-orthodox Jewish students, which, in turn, was higher than among the general Jewish student population. On the other hand, we found that the perception of threat and negative feelings among Arab students were higher than for the other two sectors for both the virtual and the blended course. Recommendations for Practitioners: Significant feedback might lessen the sense of threat and the negative feelings and be a meaningful factor for the students to persevere in the course. Intellectual, emotional, and differential feedback is recommended. Not relating to students’ difficulties might lead to a sense of alienation, a lack of belonging, or inability to cope with the tasks at hand and dropout from the course, or even from studies altogether. 
A good interaction between lecturer and student can change any sense of incompetence or helplessness to one of self-efficacy and the ability to interact with one’s surroundings. Recommendations for Researchers: Lecturers can reduce the sense of threat and negative feelings and increase a student’s motivation by making their presence felt on the course website, using the forums to manage discussions with students, and enabling and encouraging discussion among the students. Impact on Society: The integration of virtual learning environments into the learning process might lead to the fulfilment of an educational vision in which autonomous learners realize their personal potential. Hence they must be given tasks requiring the application of high learning skills without compromise, but rather with differential treatment of students in order to reduce negative feelings and the sense of threat, and to reduce the transactional distance. Future Research: Further studies should examine the causes of negative feelings among students participating in virtual and blended courses on multicultural campuses and how these feelings can be handled.





Critical Success Factors for Implementing Business Intelligence Systems in Small and Medium Enterprises on the Example of Upper Silesia, Poland





Predicting Key Predictors of Project Desertion in Blockchain: Experts’ Verification Using One-Sample T-Test

Aim/Purpose: The aim of this study was to identify the critical predictors affecting project desertion in Blockchain projects. Background: Blockchain is one of the innovations that disrupt a broad range of industries and has attracted the interest of software developers. However, despite being an open-source software (OSS) project, the maintenance of the project ultimately relies on small core developers, and it is still uncertain whether the technology will continue to attract a sufficient number of developers. Methodology: The study utilized a systematic literature review (SLR) and an expert review method. The SLR identified 21 primary studies related to project desertion published in Scopus databases from the year 2010 to 2020. Then, Blockchain experts were asked to rank the importance of the identified predictors of project desertion in Blockchain. Contribution: A theoretical framework was constructed based on Social Cognitive Theory (SCT) constructs; personal, behavior, and environmental predictors and related theories. Findings: The findings indicate that the 12 predictors affecting Blockchain project desertion identified through SLR were important and significant. Recommendations for Practitioners: The framework proposed in this paper can be used by the Blockchain development community as a basis to identify developers who might have the tendency to abandon a Blockchain project. Recommendation for Researchers: The results show that some predictors, such as code testing tasks, contributed code decoupling, system integration and expert heterogeneity that are not covered in the existing developer turnover models can be integrated into future research efforts. Impact on Society: This study highlights how an individual’s design choices could determine the success or failure of IS projects. 
It could direct Blockchain crypto-currency investors and cyber-security managers to pay attention to the developer’s behavior while ensuring secure investments, especially for crypto-currencies projects. Future Research: Future research may employ additional methods, such as a meta-analysis, to provide a comprehensive picture of the main predictors that can predict project desertion in Blockchain.





The Impacts of KM-Centred Strategies and Practices on Innovation: A Survey Study of R&D Firms in Malaysia

Aim/Purpose: The aim of this paper is to examine the influences of KM-centred strategies on innovation capability among Malaysian R&D firms. It also deepens understanding of the pathways and conditions to improve the innovation capability by assessing the mediating role of both KM practices, i.e., knowledge exploration practices, and knowledge exploitation practices. Background: Knowledge is the main organisational resource that is able to generate a competitive advantage through innovation. It is a critical success driver for both knowledge exploration and exploitation for firms to achieve sustainable competitive advantages. Methodology: A total of 320 questionnaires were disseminated to Malaysian R&D firms and the response rate was 47 percent. The paper utilised structural equation modelling and cross-sectional design to test hypotheses in the proposed research model. Contribution: This paper provides useful information and valuable initiatives in exploring the mediating role of knowledge exploration and knowledge exploitation in influencing innovation in Malaysian R&D firms. It helps R&D firms to frame their KM activities to drive the capability of creating and retaining a greater value onto their core business competencies. Findings: The findings indicate that all three KM-centred strategies (leadership, HR practices, and culture) have a direct effect on innovation. In addition, KM exploration practices mediate HR practices on innovation while KM exploitation mediates both leadership and HR practices on innovation. Recommendations for Practitioners: This paper serves as a guide for R&D managers to determine the gaps and appropriate actions to collectively achieve the desired R&D results and national innovation. It helps R&D firms frame their KM activities to enhance the capability of creating and retaining a greater value to their core business competencies. 
Recommendation for Researchers: This paper contributes significantly to knowledge management and innovation research by establishing new associations among KM-centred strategies, i.e., leadership, HR practices, and culture, both KM practices (knowledge exploration and knowledge exploitation), and innovation. Impact on Society: This paper highlights the important role of knowledge leaders and the practice of effective HR practices to help R&D firms to create a positive environment that facilitates both knowledge exploration and knowledge exploitation in enhancing innovation capabilities. Future Research: Further research could use a longitudinal sample to examine relationships of causality, offering a more comprehensive view of the effect of KM factors on innovation over the long term. Future research should also try to incorporate information from new external sources, such as customers or suppliers.





Encouraging SME eCollaboration – The Role of the Champion Facilitator





Geospatial Crypto Reconnaissance: A Campus Self-Discovery Game

Campus discovery is an important feature of a university student induction process. Approaches towards campus discovery differ from course to course and can comprise guided tours that are often lengthy and uninspiring, or self-guided tours that run the risk of students failing to complete them. This paper describes a campus self-discovery induction game (Geospatial Crypto Reconnaissance) which aims to make students aware of campus resources and facilities, whilst at the same time allowing students to make friends and complete the game in an enthusing and exciting way. In this paper we describe the game construct, which comprises a location, a message, and an artefact, and also the gameplay. Geospatial Crypto Reconnaissance requires students to identify a series of photographs from around the campus, to capture the GPS coordinates of the location of the photograph, to decipher a ciphered message and then to return both the GPS coordinates and the message for each photograph, proving that the student has attended the location. The game had a very high satisfaction score and we present an analysis of student feedback on the game and also provide guidance on how the game can be adopted for less technical cohorts of students.





Work-Based Learning and Research for Mid-Career Professionals: Two Project Examples from Australia

Aim/Purpose: Most research on work-based learning and research relates to theory, including perspectives, principles, and curricula, but few studies provide contemporary examples of work-based projects, particularly in the Australian context; this paper aims to address that limitation.
Background: The Professional Studies Program at the University of Southern Queensland is dedicated to offering advanced practice professionals the opportunity to self-direct organizational and work-based research projects to solve real-world workplace problems; two such examples in the Australian context are provided by this paper.
Methodology: The paper employs a descriptive approach to analyzing these two work-based research projects and describes the mixed methods used by each researcher.
Contribution: The paper provides examples of work-based research in (a) health, safety, and wellness leadership and its relation to corporate performance; and (b) investigator identity in the Australian Public Service; neither topic has been examined before in Australia and little, if anything, is empirically known about these topics internationally.
Findings: The paper presents the expected outcomes for each project, including discussion of the ‘triple dividend’ of personal, organizational, and practice domain benefits; as importantly, the paper presents statements of workplace problems, needs and opportunities, status of the practice domain, background and prior learning of the researchers, learning objectives, work-based research in the practice domain, and lessons learned from research which can be integrated into a structured framework of advanced practice.
Recommendations for Practitioners: This is a preliminary study of two work-based research projects in Australia; as these and other real-world projects are completed, further systematic and rigorous reports to the international educational community will reveal the granulated value of conducting projects designed to change organizations and concordant practice domains.
Recommendations for Researchers: While introducing the basic elements of research methods and expected outcomes of work-based projects, the examples in this paper give only a glimpse into the possible longer-term contributions such research can make to workplaces in Australia. Researchers, as a consequence, need to better understand the relationship between practice domains, research as a valuable investigative tool in workplaces, and organizational and social outcomes.
Impact on Society: Work-based learning and research have been developed not only to meet the complex and changing demands of the global workforce but also to address real-world organizational problems for the benefit of society; this paper provides two examples where such benefit may occur.
Future Research: Future research should focus on the investigation of triple-dividend outcomes and whether they are sustainable over the longer term.





Comprendiendo Nuestras Politicas: The Need for an Effective C&IT Policy for a Nation’s Development, The Venezuelan Case





Communicating Transdisciplinary Characteristics In Global Regulatory Affairs: An Example From Health Professions Education

Aim/Purpose: This paper describes the regulatory affairs discipline as a useful case in the study of both inter- and transdisciplinary science and dynamics related to communication across multiple boundaries. We will 1) outline the process that led to the development of transnational competencies for regulatory affairs graduate education, 2) discuss how the process highlights the transdisciplinary character of regulatory affairs, 3) provide implications for how to communicate the influence of this characterization to future healthcare professionals, and 4) draw conclusions regarding how our lessons learned might inform other programs of study.
Background: In the past few decades, the regulatory affairs profession has become more internationalized. This prompted the need for new competencies grounded in the transnational and cross-disciplinary contexts in which these professionals are required to operate.
Methodology: A convenience sample of experienced regulatory affairs professionals from multiple disciplines contributed to the development of transnational competencies for a master’s program in regulatory affairs using a transdisciplinary framework.
Contribution: An applied exemplar for understanding how transdisciplinary characteristics can be communicated and applied in higher education.
Recommendations for Practitioners: This paper recommends how competencies developed for a regulatory affairs program can serve as exemplars for other applied transdisciplinary higher education programs.
Impact on Society: This framework provides a seldom-used reflective approach to regulatory affairs education that utilizes cross-disciplinary theory to inform competence-based formation of professionals.





Berkeley Technology Law Journal Podcast: Will ChatGPT Tell Me How to Vote? Democracy & AI with Professor Bertrall Ross

[Meg O’Neill] 00:08 Hello and welcome to the Berkeley Technology Law Journal podcast. My name is Meg O’Neill and I am one of the editors of the podcast. Today we are excited to share with you a conversation between Berkeley Law LLM student Franco Dellafiori and Professor Bertrall Ross. Professor ...






CLEAR & RETURN: Stopping Run-Time Countermeasures in Cryptographic Primitives

Myung-Hyun KIM, Seungkwang LEE, Vol. E107-D, No. 11, pp. 1449-1452
White-box cryptographic implementations often use masking and shuffling as countermeasures against key extraction attacks. To counter these defenses, higher-order Differential Computation Analysis (HO-DCA) and its variants have been developed; these methods aim to breach the countermeasures without requiring reverse engineering. However, such non-invasive attacks are expensive and can be thwarted by updating the masking and shuffling techniques. This paper introduces a simple binary injection attack, aptly named clear & return, designed to bypass advanced masking and shuffling defenses employed in white-box cryptography. The attack involves injecting a small amount of assembly code, which effectively disables run-time random sources. This loss of randomness exposes the unprotected lookup value within white-box implementations, making them vulnerable to simple statistical analysis. In experiments targeting open-source white-box cryptographic implementations, the strategy of hijacking entries in the Global Offset Table (GOT) or function calls proves effective at circumventing run-time countermeasures.
Publication Date: 2024/11/01
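The paper's attack injects assembly into a real binary; as a rough conceptual model only, the toy Python sketch below shows why zeroing the run-time random source exposes the unprotected table value. The 4-bit S-box, the function names, and the 16-value mask range are all invented for illustration, not taken from the paper's targets.

```python
import random
from collections import Counter

# Toy 4-bit S-box standing in for a white-box lookup table (values invented).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def masked_lookup(x, rng):
    """Boolean-masked table access: a fresh mask from the run-time random
    source hides the true S-box output on every call."""
    m = rng()                        # fresh mask per invocation
    return SBOX[x] ^ m, m            # protected value plus its mask share

def live_rng():
    return random.randrange(16)      # normal run-time randomness

def cleared_rng():
    return 0                         # random source disabled, as by 'clear & return'

# With live randomness, observed outputs for a fixed input vary between runs;
# with the source cleared, every observation is the raw S-box value.
x = 0x7
dead = Counter(masked_lookup(x, cleared_rng)[0] for _ in range(256))
assert len(dead) == 1 and dead[SBOX[x]] == 256  # unprotected value exposed
```

Once the observations collapse to a single value like this, the "simple statistical analysis" the abstract mentions becomes trivial: the attacker reads the unprotected lookup output directly instead of having to mount a higher-order DCA.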