
Stoke-on-Trent looking to refresh ties with German twin

Stoke-on-Trent is looking to rekindle its relationship with its German twin town.





Kerosene leak continues, sparking pollution fears

Councillor Ruth Houghton says the kerosene in the drains is "still flowing" after 10 days.





Controversial parking charges to come in next month

The plans to bring in charges led to protests in some areas and outside council meetings.





Kingsway residents seek more access roads to reduce congestion

Reporter Duncan Cook has been finding out more.





'Cyber attack' council working to ease backlog

The authority says there was a backlog of planning applications following the incident.





Town says no to compulsory High Street parking app

Drivers can currently park for free for an hour, but might need to register on an app in the future.





Not-so-smart parking traps 60 in bowls club

Club members are not exactly bowled over when a Smart car driver blocks them in their car park.





Joking burglars jailed for string of metal thefts

The gang, who nicknamed themselves The Sticky Bandits, hit 10 firms within six months.





'My home's a working museum piece, it's a gem'

Davy Keenan has lived on water for more than 50 years and now spends his time aboard 'Little Gem'.





Third of working age people die in poverty - report

According to Marie Curie, 35% of working age people who died in Bradford were in poverty.





Free parking day to help late holiday shoppers

Weymouth Town Council says it has made 23 December a free parking day to help last-minute shoppers.





School run drivers risking lives

A Rugby school's lack of parking restrictions outside gates is a hazard for pupils.





King faces fans after painful Robins 'divorce'

Coventry City owner Doug King outlines the real reasons behind Mark Robins' sacking from Sky Blues.





How to use networking and customer advocacy to build your brand community

Personalisation in marketing works because personal connections matter. That extends beyond your customers, too. Personal connections include the relationships you have with your suppliers, your stakeholders, your employees, industry peers… the list goes on.





Dao Day 2024 – a regression in the making

It’s twenty-four years to the day since A List Apart published John Allsopp’s seminal treatise A Dao of Web Design. It must be one of the most vital and most-cited articles ever written about web design. In it, John quoted the Tao Te Ching as a way of persuading us web designers to be like The Sage and “accept the ebb and flow of things”.

John compared the nature of print with the web:

The fact we can control a paper page is really a limitation of that medium. You can think – we can fix the size of text – or you can think – the size of text is unalterable. You can think – the dimensions of a page can be controlled – or – the dimensions of a page can’t be altered. These are simply facts of the medium.

And they aren’t necessarily good facts, especially for the reader.

We should embrace the fact that the web doesn’t have the same constraints, and design for this flexibility.

Those demands for flexibility led – 10 years later – to responsive web design as a best practice, and on to the present concept of fluid design.

However we’re currently battling against another regression. As John himself wrote recently, “having escaped the gravity well of web pages being ’print, only onscreen’, they became ’apps, only in the browser’”.

The better way of doing things will win out. Why? Because more people benefit from the accessible outcomes of fluid design, and it comes with lower design and technical debt, even if the initial effort is higher. Meanwhile, plus ça change, plus c’est la même chose, or as Lao Tse wrote 2,500 years ago: “Well established hierarchies are not easily uprooted. So ritual enthrals generation after generation.”






Back-row stars, a Puma sensation & more Premiership talking points

The back-row contenders come front and centre, Harlequins have a new Puma on the loose and more Premiership talking points





The cyclists tracking down their own stolen bikes

Fewer than 3% of reported bike thefts since 2019 resulted in a charge or summons, the BBC learns.






BREAKING NEWS: I was a sex worker.

This morning I awoke to find a claim published in the Mail that I was not a sex worker. 


To claim that I lied is a direct attack on my integrity as a writer. And I am prepared.

When the case goes to trial, I will have to present evidence that I was a sex worker. Starting with this - an Archive.org snap of my first escorting ad from October 2003 (link NSFW).

(Readers of the first book may recall this was the session with the grumpy photographer I wrote about. As I have often said, it was that experience - being made to wear terrible lingerie, awkward poses, all the rest - that first made me think, 'hey, I should be blogging this.'

And if you read the third book, I made a reference to a restaurant on Old Compton Street that has the same name as my working name - that is, of course, Taro.)

I will also be presenting my bank records from 2003-04, showing the cash deposits from the money I earned as an escort, and tax records from the same years showing that this income was declared to HMRC and tax paid. Here is a sample:

 
I also have the notebook in which I recorded details of appointments, etc. In several instances I have been able to piece together entries from the notebook, deposits to my accounts, and the corresponding entries in the book. If pressed, I will name a client, but only as a last resort.

The Mail also claims I didn't own nice enough clothes so couldn't have been an escort!


That's from December 2003, and is the same red silk top I wore to meet the manager for the first time (as written about in the first book). The next is at Henley Regatta in July 2004; the suit is from Austin Reed, and the bracelet was a gift from a client.



The Mail claims I was in Sheffield when writing the blog, but I moved to London in September 2003, started escorting in October, and started blogging a few weeks later. All of which is easy - trivial, even - to prove.

Oh, and the "former landlady in Sheffield, who did not wish to be named", where I supposedly lived for three years? Who apparently saw me in 'Oxfam jumpers'? Hmm... I lived one year in university accommodation (St George's Flats),  one year in a shared flat with an absentee landlord I never met (Hawthorne Road), and one year on my own in a house let through an agency (Loxley New Road). All well before moving to London. So either the landlady is lying about the timing of my tenancy and having met me, or (shock, horror) they made it up.

There's much more but it would be boring to put it all here. It's amazing to me the MoS made no effort at all to match anything they printed against things that are easy to find and in the public domain. But that's by the by, and will come out in due course.

It matters because this is a concerted and direct attack on my work as a writer. When I was anonymous, being real was my main - my only - advantage. The Mail on Sunday have made some frankly nonsense claims, and I will be going to town on this.

I know people do not trust the word of a sex worker; that is why I saved everything.

I look forward to the opportunity to rebut all claims in court. (The MoS claim the trial is expected "within weeks." In fact it is scheduled for June 2015.)





The times McLaren came closest to breaking 25-year constructors’ title drought | Formula 1

McLaren could be set to win their first constructors' title for 25 years this season. Here is how close they've come over that time.





Trados Studio 2017 – Auto propagation not working in review mode

Have you ever reviewed a large file in SDL Trados Studio with numerous repetitions and struggled with confirmed segments not being propagated to the rest of the file? Here is what to do. Imagine you have a big file to …





Working under a cloud!

In the heart of LingoVille, translator Trina was renowned for her linguistic prowess but was a bit behind in the tech world.  When her old typewriter finally gave out, she received a sleek new laptop, which came with OneDrive pre-enabled.  Initially hesitant about this “cloud magic,” she soon marvelled at the convenience of securely storing …





Working with CSV’s…

CSV, or files with “comma separated values”, is a simple format that everyone should be able to handle.  Certainly you’d think so except nothing is ever that straightforward and if you’ve ever spent time trying to work with these files and having to deal with all the problems inherent to this format then you’ll know …






Taking College loyalty a bit far?

A few weekends ago I was at the Balliol College family day and they had a face painter. I got her to do a large college arms on my face, which came out quite well! Thanks to Jeremy for the picture.





Understanding ESB Performance & Benchmarking

ESB performance is a hot (and disputed) topic. In this post I don't want to talk about different vendors or different benchmarks. I'm simply trying to help people understand some of the general aspects of benchmarking ESBs and what to look out for in the results.

The general ESB model is that you have some service consumer, an ESB in the middle and a service provider (target service) that the ESB is calling. To benchmark this, you usually have a load driver client, an ESB, and a dummy service.

+-------------+      +---------+      +---------------+
| Load Driver |------|   ESB   |------| Dummy Service |
+-------------+      +---------+      +---------------+

Firstly, we want the Load Driver (LD), the ESB and the Dummy Service (DS) to be on different hardware. Why? Because we want to understand the ESB performance, not the performance of the DS or LD.

The second thing to be aware of is that the performance results are completely dependent on the hardware, memory, network, etc used. So never compare different results from different hardware.

Now there are three things we could look at:
A) Same LD, same DS, different vendors ESBs doing the same thing (e.g. content-based routing)
B) Same LD, same DS, different ESB configs for the same ESB, doing different things (e.g. static routing vs content-based routing)
C) Going via ESB compared to going Direct (e.g. LD--->DS without ESB)

Each of these provides useful data but each also needs to be understood.

Metrics
Before looking at the scenarios, let's look at how to measure the performance. The two metrics that are always a starting point in any ESB benchmark are throughput (requests/second) and latency (how long each request takes). With latency we can consider the overall latency - the time taken for a completed request as observed at the LD - and the ESB latency, which is the time the message spends in the ESB. The ESB latency can be hard to work out. A well-designed ESB will already be sending bytes to the DS before it has finished reading the bytes the LD has sent it. This is called pipelining. Some ESBs attempt to measure the ESB latency inside the ESB using clever calculations. Alternatively, scenario C (comparing via-ESB vs direct) can give an idea of the ESB latency.
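As a concrete sketch of the two latency views, here is how they might be computed from per-request timings. The helper names are illustrative, not from any particular tool:

```python
import statistics

def latency_summary(samples_ms):
    """Mean and 95th-percentile latency from per-request timings (ms)."""
    s = sorted(samples_ms)
    p95 = s[int(0.95 * (len(s) - 1))]
    return statistics.mean(s), p95

def esb_latency_estimate(via_esb_ms, direct_ms):
    """Scenario C estimate: mean overhead the ESB adds per request,
    derived by comparing via-ESB timings against direct timings."""
    return statistics.mean(via_esb_ms) - statistics.mean(direct_ms)
```

Note the scenario C estimate only works if the direct and via-ESB runs use the same LD, DS, and hardware.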

But before we look at the metrics we need to understand the load driver.

There are two different models to doing Load Driving:
1) Do a realistic load test based on your requirements. For example if you know you want to support up to 50 concurrent clients each making a call every 5 seconds on average, you can simulate this.
2) Saturation! Have a large number of clients, each making a call as soon as the last one finishes.

The first one is aimed at testing what the ESB does before it is fully CPU-loaded. In other words, if you are looking to see the effect of adding an ESB, or to compare one ESB to another under realistic load, then #1 is the right approach. In this approach, looking at throughput may not be useful, because all the different approaches have similar results. If I'm only putting in 300 requests a second on a modern system, I'm likely to see 300 requests a second. Nothing exciting. But the latency is revealing here. If one ESB responds in less time than another ESB, that's a very good sign, because with the same DS the average time per request is very telling.

On the other hand, the saturation test is where the throughput is interesting. Before you look at the throughput, though, check three things:
1) Is the LD CPU running close to 100%?
2) Is the DS CPU running close to 100%?
3) Is the network bandwidth running close to 100%?

If any of these are true, you aren't doing a good test of the ESB throughput, because for a throughput test you want the ESB to be the bottleneck. If something else is the bottleneck, then the ESB is not providing its maximum throughput and you aren't giving it a fair chance. For this reason, most benchmarks use a very lightweight or clustered LD, and similarly use a DS that is superfast rather than realistic. Sometimes the DS is coded to do some real work, or to sleep the thread while executing, to provide a more realistic load test. In that case you probably want to look at latency more than throughput.

Finally you are looking to see a particular behaviour for throughput testing as you increase load.
Throughput vs Load
The shape of this graph shows an ideal scenario. As the LD puts more work through the ESB, throughput responds linearly. At some point the CPU of the ESB hits its maximum, and then the throughput stabilizes. What we don't want to see is the line drooping at the far right. That would mean that the ESB is crumpling under the extra load and failing to manage it effectively. This is like the office worker whose efficiency increases as you give them more work, but who eventually starts spending all their time re-organizing their todo lists, so less work overall gets done.

Under the saturation test you really want to see the CPU of the ESB close to 100% utilised. Why? This is a sign that it's doing as much as possible. Why would it not be 100%? Three reasons: I/O, multi-processing, and thread locks. Either the network card, disk, or other I/O is holding it up, the code is not efficiently using the available cores, or there are thread contention issues.

Finally, it's worth noting that you should expect the latency to increase a lot under the saturation test. A classic result is this: I do static routing for different message sizes with a 100-client LD. For message sizes up to 100k, maybe I see a constant 2ms overhead for using the ESB. Then, as the message size grows from 100k to 200k, I see the overhead growing in proportion to the message size.


Is this such a bad thing? No; in fact this is what you would expect. Below the 100k message size, the ESB is underloaded. The straight line up to this point is a great sign that the ESB is pipelining properly. Once the CPU becomes loaded, each request takes longer because it is made to wait its turn while the ESB deals with the increased load.

A big hint here: when you look at this graph, the most interesting latency numbers occur before the CPU is fully loaded. The latency after the CPU is fully loaded is not that interesting, because it's simply a function of the number of queued requests.

Now that we understand the metrics, let's look at the actual scenarios.

A. Different Vendors, Same Workload
For the first comparison (different vendors), the first thing to be careful of is that the scenario is implemented in the best way possible in each ESB. There are usually several ways of implementing the same scenario. For example, the same ESB may offer two (or more!) different HTTP transports: blocking vs non-blocking, servlet vs library, etc. There may be an optimum approach, and it's worth reading the docs and talking to the vendor to understand the performance tradeoffs of each.

Another thing to be careful of in this scenario is the tuning parameters. Each ESB has various tuning aspects that may affect the performance depending on the available hardware. For example, setting the number of threads and memory based on the number of cores and physical memory may make a big difference.

Once you have your results, assuming everything we've already looked at is tickety-boo, then both latency and throughput are interesting and valid comparisons here. 

B. Different Workloads, Same Vendor
What this measures is what it costs you to do different activities with the same ESB. For example, static routing is likely to be faster than content-based routing, which in turn is faster than transformation. The data from this tells you the cost of doing different functions with the ESB. For example, you might want to do a security authentication/authorization check: you should see a constant bump in latency for the security check, irrespective of message size. But if you were doing a complex transformation, you would expect to see higher latency for larger messages, because they take more time to transform.

C. Direct vs ESB
This is an interesting one. Usually this is done for a simple static routing/passthrough scenario. In other words, we are testing the ESB doing its minimum possible. Why bother? Well there are two different reasons. Firstly ESB vendors usually do this for their own benefit as a baseline test. In other words, once you understand the passthrough performance you can then see the cost of doing more work (e.g. logging a header, validating security, transforming the message). 

Remember the two testing methodologies (realistic load vs saturation)? You will see very different results in each for this, and the data may seem surprising. For the realistic test, remember we want to look at latency. This is a good comparison for the ESB: how much extra time is spent going through the ESB per request under normal conditions? For example, if the average request direct to the backend takes 18ms and the average request via the ESB takes 19ms, we have an average ESB latency of 1ms. This is a good result - the client is not going to notice much difference - only around 5% extra.

The saturation test here is a good way to compare different ESBs. For example, suppose I can get 5000 reqs/sec direct, 3000 reqs/sec via ESB_A, and 2000 reqs/sec via ESB_B. Then I can say that ESB_A provides better throughput than ESB_B.

What is not a good metric here is comparing saturation-mode throughput for direct vs ESB.


Why not? The reason is a little complex to explain. Remember how we coded the DS to be as fast as possible so as not to be a bottleneck? So what is the DS doing? It's really just reading bytes and sending bytes as fast as it can. Assuming the DS code is written efficiently using something really fast (e.g. just a servlet), what this is testing is how fast the hardware (CPU plus network card) can read and write through user space in the operating system. On modern server hardware you might get a very high number of transactions/sec - maybe 5000 reqs/sec with each message in and out being 1k in size.

So we have 1k in and 1k out = 2k of IO per request.
2k IO x 5000 reqs/sec x 8 bits/byte gives us a total network bandwidth of 80 Mbits/sec (excluding ethernet headers and overhead).
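The arithmetic above can be checked directly (here taking "1k" as 1000 bytes, as the round 80 Mbit/s figure implies):

```python
# Back-of-the-envelope bandwidth for the saturation example:
# 1k in + 1k out per request, at 5000 requests/sec.
msg_in_bytes = 1000
msg_out_bytes = 1000
reqs_per_sec = 5000

bytes_per_sec = (msg_in_bytes + msg_out_bytes) * reqs_per_sec
mbits_per_sec = bytes_per_sec * 8 / 1_000_000
print(mbits_per_sec)  # 80.0 Mbit/s at the dummy service
```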

Now let's look at the ESB. Imagine it can handle 100% of the direct load, with no slowdown in throughput. For each request it has to read the message in from the LD and send it out to the DS. Even if it's doing this in pipelining mode, there is still a CPU cost and an IO cost. So the ESB latency may be just 1ms, but the CPU and IO cost is much higher. For each response it also has to read the message in from the DS and write it out to the LD. So if the DS is doing 80Mbits/sec, the ESB must be doing 160Mbits/sec.


Now if the LD is good enough, it will have loaded the DS to the max: CPU or IO capacity or both will be maxed out. Suppose the ESB is running on the same hardware platform as the DS. If the DS machine can do 80Mbit/s flat out, there is no way that the same hardware running as an ESB can do 160Mbit/s! In fact, if the ESB and DS code are both as efficient as possible, then the throughput via the ESB will always be 50% of the throughput direct to the DS. There is one possible way for the ESB to do better: it could be better coded than the DS. For example, if the ESB did transfers in kernel space instead of user space, that might make a difference. The real answer here is to look at the latency: what is the overhead of adding the ESB to each request? If the ESB latency is small, then we can solve this problem by clustering the ESB. In this case we would put two ESBs in and get back to full throughput.
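The argument above (the ESB moves every byte twice, once on the client side and once on the service side) puts a hard ceiling on via-ESB throughput relative to direct. A small illustrative helper, not taken from any ESB product:

```python
def max_esb_fraction(ds_io_capacity, esb_io_capacity):
    """Upper bound on (throughput via ESB) / (throughput direct).

    The ESB reads and writes every message twice - once towards the
    client and once towards the service - so sustaining the same
    request rate needs 2x the IO of the dummy service.
    """
    return min(1.0, esb_io_capacity / (2 * ds_io_capacity))
```

On identical hardware (`max_esb_fraction(80, 80)`) the bound is 0.5, which is the 50% figure above; clustering ESBs raises `esb_io_capacity` and lifts the ceiling back towards 1.0.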

The real point of this discussion is that this is not a useful comparison. In reality, backend target services are usually pretty slow. If the same dual-core server is actually doing some real work - e.g. database lookups, calculations, business logic - then it's much more likely to be doing 500 requests a second or even less.

The following chart shows real data to demonstrate this. The X-axis shows increasing complexity of work at the backend (DS). As the effort taken by the backend becomes more realistic, the loss in throughput from having an ESB in the way shrinks. With a blindingly fast backend, we see the ESB struggling to provide just 55% of the direct throughput. But as the backend becomes more realistic, we see much better numbers: at 2000 requests a second there is barely a difference (around a 10% reduction in throughput).


In real life, what we actually see is that often you have many fewer ESBs than backend servers. For example, if we took the scenario of a backend server that can handle 500 reqs/sec, then we might end up with a cluster of two ESBs handling a cluster of 8 backends. 

Conclusion
I hope this blog has given a good overview of ESB performance and benchmarking - in particular, when it is a good idea to look at latency and when to use throughput.






Automatically Checking Feature Model Refactorings

A feature model (FM) defines the valid combinations of features, where each combination corresponds to a program in a Software Product Line (SPL). FMs may evolve, for instance, during refactoring activities. Developers may use a catalog of refactorings as support; however, such a catalog is incomplete in principle, and it is non-trivial to propose correct refactorings. To our knowledge, no previous analysis technique for FMs has been used for checking properties of general FM refactorings (transformations that can be applied to a number of FMs) containing a representative number of features. We propose an efficient encoding of FMs in the Alloy formal specification language. Based on this encoding, we show how the Alloy Analyzer tool, which performs analysis on Alloy models, can be used to automatically check whether encoded general and specific FM refactorings are correct. Our approach can analyze general transformations automatically at significant scale in a few seconds. To evaluate the analysis performance of our encoding, we ran it on automatically generated FMs ranging from 500 to 2,000 features. Furthermore, we analyze the soundness of general transformations.





A Ranking Tool Exploiting Semantic Descriptions for the Comparison of EQF-based Qualifications

Nowadays, one of the main issues discussed at the Community level is the mobility of students and workers across Europe. In recent years, several initiatives have been carried out to address this: one of them is the definition of the European Qualification Framework (EQF), a common architecture for the description of qualifications. At the same time, several research activities have explored how semantic technologies could be exploited for qualification comparison in the field of human resources acquisition. In this paper, the EQF specifications are taken into account and applied in a practical scenario to develop a ranking algorithm for the comparison of qualifications expressed in terms of knowledge, skill and competence concepts, potentially aimed at supporting European employers during the recruiting phase.





A Comparison of Different Retrieval Strategies Working on Medical Free Texts

Patient information in health care systems mostly consists of textual data, and free text in particular makes up a significant amount of it. Information retrieval systems that concentrate on these text types have to deal with the challenges medical free texts pose in order to achieve acceptable performance. This paper describes the evaluation of four different information retrieval strategies: keyword search, search performed by a medical domain expert, a semantics-based information retrieval tool, and a purely statistical information retrieval method. The different methods are evaluated and compared with respect to their applicability in medical health care systems.





What is walking pneumonia? As cases rise in Canada, the symptoms to look out for - The Globe and Mail

  1. What is walking pneumonia? As cases rise in Canada, the symptoms to look out for  The Globe and Mail
  2. Walking pneumonia on the rise in Kingston, but treatable  The Kingston Whig-Standard
  3. What parents need to know about walking pneumonia in kids  FingerLakes1.com
  4. Pediatric pneumonia is surging in Central Ohio  MSN
  5. Walking Pneumonia is spiking right now. How do you know you have it?  CBS 6 News Richmond WTVR





Hiking with a backpack is the workout of 2024. An exercise scientist says it’s worth the extra effort - The Globe and Mail

  1. Hiking with a backpack is the workout of 2024. An exercise scientist says it’s worth the extra effort  The Globe and Mail
  2. Military-Inspired Workout Has 'Huge Wins' for Women, Says Personal Trainer  MSN
  3. How Rucking Can Turn Your Walks into a Full-Body Workout  Verywell Health
  4. What Is Rucking and Is It Better Than Regular Walking? Here's What Personal Trainers Say  EatingWell
  5. Rucking: Why It’s a Great Workout & How to Get Started  Athletech News






Niagara Health offering free parking after delays reported - News Talk 610 CKTB

  1. Niagara Health offering free parking after delays reported  News Talk 610 CKTB
  2. Implementation of new Niagara Health patient info system resulting in long wait times  St. Catharines Standard
  3. Temporary delays impacting registration at emergency departments  Thorold News
  4. Niagara Health Working Through Delays  101.1 More FM
  5. Niagara Health experiencing temporary delays impacting registration and EDs  Niagara Health





Robust watermarking of medical images using SVM and hybrid DWT-SVD

In the present scenario, the security of medical images is an important aspect of the field of image processing. Support vector machines (SVMs) are a supervised machine learning technique used in image classification, with roots in statistical learning theory. SVM has gained significance because it is a robust, accurate, and very effective algorithm even when applied to a small set of training samples, and it can perform binary or multi-class classification according to the application's needs. Discrete wavelet transform (DWT) and singular value decomposition (SVD) transform techniques are utilised to enhance the image's security. In this paper, the image is first classified using SVM into ROI and RONI, and thereafter, to preserve the image's diagnostic capabilities, a DWT-SVD-based hybrid watermarking technique is utilised to embed the watermark in the RONI region. Overall, our work makes a significant contribution to the field of medical image security by presenting a novel and effective solution. The results are evaluated using perceptual and imperceptibility testing with the PSNR and SSIM parameters. Different attacks were applied to the watermarked image, demonstrating the efficacy and robustness of the proposed algorithm.





A robust feature points-based screen-shooting resilient watermarking scheme

Screen-shooting can lead to information leakage. An anti-screen-shooting watermark, which can track leaking sources and protect the copyright of images, plays an important role in image information security. Due to the randomness of shooting distance and angle, more robust watermark algorithms are needed to resist the mixed attack generated by screen-shooting. A robust digital watermarking algorithm that is resistant to screen-shooting is proposed in this paper. We use an improved Harris-Laplace algorithm to detect the image feature points and embed the watermark into the feature domain. All test images are selected from the USC-SIPI dataset, and six related common algorithms are used for performance comparison. The experimental results show that, within a certain range of shooting distances and angles, the presented algorithm can not only extract the watermark effectively but also ensure the basic invisibility of the watermark. Therefore, the algorithm is robust against screen-shooting.





Undertaking a bibliometric analysis to investigate the framework and dynamics of slow fashion in the context of sustainability

The current study outlines slow fashion (SF) research trends and creates a future research agenda for this field, offering a thorough analysis of the literature on slow fashion. Numerous bibliometric features of slow fashion are discussed in the paper. The study comprises 182 research articles from the Scopus database, which was utilised for a bibliometric analysis to identify trends in the area of slow fashion. The study employed R-software (the Biblioshiny package) for the bibliometric analysis, and VOSviewer software to determine the co-occurrence of authors, countries, sources, etc. The study outlines the gaps that still exist in the field of slow fashion, and its outcome strengthens the domain of slow fashion for sustainable consumption. The findings will be useful for policymakers, industry professionals, and researchers.





The discussion of information security risk control in mobile banking

The emergence of digital technology and the increasing prevalence of smartphones have promoted innovations in the payment options available in finance and consumption markets. Banks providing mobile payment must ensure information security: inadequate security control leads to information leakage, which severely affects user rights and service providers' reputations. This study uses Control Objectives for Information and Related Technologies (COBIT) 4.1 as the mobile payment security control framework to examine the emergent field of mobile payment. A literature review is performed to compile studies on the safety risks, regulations, and operations of mobile payments. In addition, a Delphi questionnaire is distributed among experts to capture practical perspectives, fill research gaps in the literature, and revise the prototype framework. Based on the experts' opinions, 59 control objectives from the four domains of COBIT 4.1 are selected. Of these, the plan-and-organise, acquire-and-implement, deliver-and-support, and monitor-and-evaluate domains contributed 2, 5, 10, and 2 control objectives, respectively, with mean importance scores above 4.50; these are considered the most important objectives by the experts. The results of this study can serve as a reference for banks constructing secure frameworks for mobile payment services.





SVC-MST BWQLB multicast over software-defined networking

This paper presents a Scalable Video Coding (SVC) system over multicast Software-Defined Networking (SDN) that focuses on transmission management for the sender-receiver model. Our approach reduces bandwidth usage by allowing each receiver to select a different video resolution within a multicast group, which helps avoid video freezing during bandwidth congestion. Moreover, the SVC Multiple Sessions Transmission Bandwidth thresholds Quantised Level Balance (SVC-MST BWQLB) scheme routes the different layers of the SVC stream over distinct paths, reducing storage space and bandwidth congestion across video resolutions. The experimental results show that the proposed model provides better display quality than the traditional Open Shortest Path First (OSPF) routing technique. Furthermore, it reduced transmission delays by up to 66.64% for grouped resolutions compared to SVC-Single Session Transmission (SVC-SST). Additionally, a modified Real-time Transport Protocol (RTP) header and a sorting buffer for SVC-MST are proposed to deal with the defragmentation problem.
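As a rough illustration of routing different SVC layers over distinct paths, the sketch below assigns each layer an edge-disjoint shortest path on a toy topology. This is a simplification for intuition only, not the SVC-MST BWQLB algorithm itself, and the node names are made up:

```python
from collections import deque

def shortest_path(adj, src, dst, banned):
    """BFS shortest path that avoids edges already assigned to another layer."""
    prev, seen = {}, {src}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path, node = [dst], dst
            while node != src:
                node = prev[node]
                path.append(node)
            return path[::-1]
        for v in adj[u]:
            if v not in seen and frozenset((u, v)) not in banned:
                seen.add(v)
                prev[v] = u
                q.append(v)
    return None  # no path left that avoids the banned edges

def route_layers(adj, src, dst, n_layers):
    """Give each SVC layer its own edge-disjoint path where one exists."""
    banned, routes = set(), []
    for _ in range(n_layers):
        p = shortest_path(adj, src, dst, banned)
        routes.append(p)
        if p:
            banned |= {frozenset(e) for e in zip(p, p[1:])}
    return routes

# Hypothetical topology with two disjoint routes between s and t
adj = {"s": ["a", "b"], "a": ["s", "t"], "b": ["s", "t"], "t": ["a", "b"]}
routes = route_layers(adj, "s", "t", 2)
```

Spreading layers over disjoint paths is what lets congestion on one link degrade only some resolutions rather than freezing the whole stream.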





Smart approach to constraint programming: intelligent backtracking using artificial intelligence

Constraint programming is used to select feasible alternatives from an extremely large space of candidates. This paper proposes an AI-assisted Backtracking Scheme (AI-BS) that integrates the generic backtracking algorithm with Artificial Intelligence (AI). The detailed study observes that the extreme dual ray associated with an infeasible linear program can be extracted automatically from minimal infeasible sets. In artificial intelligence, constraints enumerate the possible values for a group of variables over a given universe; in other words, a solution assigns a value to each variable such that all constraints are satisfied. Furthermore, this allows the search area for smart backtracking to be reduced without incurring high costs. The evaluation results show that the IB-BC algorithm-based smart electricity schedule controller achieves lower electricity bills over the scheduled periods than comparison approaches such as binary backtracking and the binary particle swarm optimiser.
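The generic backtracking idea referenced above — assign values one variable at a time and retreat as soon as a constraint is violated — can be sketched as follows. The appliance-scheduling instance is a hypothetical toy, not the paper's AI-BS scheme:

```python
def backtrack(variables, domains, constraints, assignment=None):
    """Generic backtracking: extend a partial assignment, prune on violation."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return dict(assignment)  # all variables assigned consistently
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = backtrack(variables, domains, constraints, assignment)
            if result:
                return result
        del assignment[var]  # undo and try the next value
    return None  # no value works: backtrack to the previous variable

# Toy scheduling instance: two appliances must not run in the same hour
variables = ["washer", "dryer"]
domains = {"washer": [1, 2], "dryer": [1, 2]}
constraints = [
    lambda a: not ("washer" in a and "dryer" in a)
              or a["washer"] != a["dryer"],
]
solution = backtrack(variables, domains, constraints)
```

An "intelligent" variant would use learned information (e.g. from infeasible sets) to jump back further than one variable, shrinking the search area the abstract describes.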





Digitalisation boost operation efficiency with special emphasis on the banking sector

The banking sector has experienced a substantial technological shift that has opened up new and better opportunities for its customers. Based on their technology expenditures, the study assessed the two biggest public Indian banks and the two biggest private Indian banks. The main statistical techniques used to test the aims are bivariate correlation and ordinary least squares. This work aims to establish a connection between research and a technology index that serves as a proxy for operational efficiency. The results show that for both public and private banks, the technology index positively influences operational efficiency metrics such as IT costs, marketing costs, and compensation costs: as the technology index increases, so do IT, marketing, and compensation costs, even though the technology index also favourably improves operational efficiency measures such as depreciation and printing. This means that the cost to banks remains high despite greater investment in technology for these activities.





Trust in news accuracy on X and its impact on news seeking, democratic perceptions and political participation

Based on a survey of 2,548 American adults conducted by the Pew Research Center in 2021, this study finds that trust in the accuracy of news circulated on X (formerly Twitter) is positively correlated with following news sites on X, underscoring the crucial role of trust in news accuracy in shaping news-seeking behaviour. Trust in news accuracy also relates positively to political participation via X. Those who trust news accuracy are more likely to perceive X as an effective tool for raising public awareness about political and social issues, as well as a positive force for democracy. However, exposure to misinformation weakens the connection between trust in news accuracy and users' perception of X as an effective tool for raising public awareness about political or social issues and as a positive driver of democracy.





Beyond utility: unpacking the enjoyment gap in e-government service use

E-government serves as a vital channel for citizen interactions with the public sector, where user enjoyment is of paramount importance. To date, few studies have comprehensively examined the determinants of citizen enjoyment in e-government. To address this research gap, we administered a survey and gathered data from 363 Australian residents using myGov for tax filing. Our analysis revealed a pronounced discrepancy between reported enjoyment and the intention to continue using the services. Although users demonstrated a strong intent to use e-government services, this intent did not uniformly align with enjoyment. Additionally, informed by self-determination theory, we developed and tested an e-government service enjoyment model to study the impacts of effort expectancy, technophilia, technology humanness, and engagement in fostering user enjoyment. Unexpectedly, the results showed that information privacy concerns, commonly seen as a deterrent in e-government adoption, did not significantly affect enjoyment. Our findings advance the discourse on e-government service improvement.





Making Information Systems less Scrugged: Reflecting on the Processes of Change in Teaching and Learning





Exploring Educational and Cultural Adaptation through Social Networking Sites





Advancing Creative Visual Thinking with Constructive Function-based Modelling





Making Mobile Learning Work: Student Perceptions and Implementation Factors

Mobile devices are the constant companions of technology users of all ages. Studies show, however, that making calls is a minimal part of our engagement with today's smartphones and that even texting has fallen off, leaving web browsing, gaming, and social media as the top uses. A cross-disciplinary group of faculty at our university came together in the mLearning Scholars group to study the potential of mobile devices for student learning. The group met bi-weekly throughout a semester and shared thoughts, ideas, resources, and examples while experimenting with mobile learning activities in individual classes. This paper summarizes student perceptions of, and adoption intent for, using mobile devices for learning, and discusses implementation issues for faculty adding mobile learning to a college course. Outcomes reflect that mobile learning adoption is not a given, and that students need help both in using personal devices for learning activities and in understanding their value.





The Impact of User Interface on Young Children’s Computational Thinking

Aim/Purpose: Over the past few years, new approaches to introducing young children to computational thinking have grown in popularity. This paper examines the role that user interfaces have on children's mastery of computational thinking concepts and positive interpersonal behaviors. Background: There is growing pressure to begin teaching computational thinking at a young age. This study explores the affordances of two very different programming interfaces for teaching computational thinking: a graphical coding application on the iPad (ScratchJr) and a tangible programmable robotics kit (KIBO). Methodology: This study used a mixed-method approach to explore the learning experiences that young children have with tangible and graphical coding interfaces. A sample of children ages four to seven (N = 28) participated. Findings: Results suggest that the type of user interface does have an impact on children's learning, but it is only one of many factors that affect positive academic and socio-emotional experiences. Tangible and graphical interfaces each have qualities that foster different types of learning.





The Development of Computational Thinking in Student Teachers through an Intervention with Educational Robotics

Aim/Purpose: This research aims to describe and demonstrate the results of an intervention through educational robotics to improve the computational thinking of student teachers. Background: Educational robotics has an increasing presence in school classrooms as a vehicle for developing computational thinking and digital competence. However, there is a lack of research on how to prepare future Kindergarten and Elementary School teachers in the didactic use of computational thinking as part of their necessary digital teaching competence. Methodology: Following the Design-Based Research methodology, we designed an intervention with educational robots that includes unplugged, playing, making, and remixing activities. Participating in this study were 114 Spanish university students of education. Contribution: This research helps to improve the initial training of student teachers, especially in the field of educational robotics. Findings: The student teachers consider themselves digitally competent, especially in the dimensions related to social and multimedia aspects, and to a lesser extent in the technological dimension. The results obtained also confirm the effectiveness of the intervention through educational robotics in developing the computational thinking of these students, especially among male students. Recommendations for Practitioners: Teacher trainers could introduce robotics following these steps: (1) initiation and unplugged activities, (2) gamified activities introducing programming and testing of the robots, (3) initiation activities in Scratch, and (4) design and resolution of a challenge. Recommendation for Researchers: Researchers could examine how interventions with educational robots help to improve the computational thinking of student teachers, and thoroughly analyze gender differences. Impact on Society: Computational thinking and robotics are among the emerging educational trends. Despite the rise of this issue, few investigations systematize and collect evidence in this regard. This study presents an educational intervention that favors the development of student teachers' computational thinking. Future Research: Researchers could evaluate not only the computational thinking of student teachers, but also their didactics: their ability to teach, and to create didactic activities that develop computational thinking in their future students.





New Findings on Student Multitasking with Mobile Devices and Student Success

Aim/Purpose: This paper investigates the influence of university student multitasking on their learning success, defined as students' learning satisfaction and performance. Background: Most research on student multitasking finds it problematic; however, that research generally dates from around 2010. Today's students are digital natives with a different, more positive relationship with mobile technologies. Based on the old findings, most instructors ban mobile technology use during instruction and design their online courses without regard for the mobile technology use that happens regardless of the ban. This study investigates whether today's instructors and learning management system interface designers should take multitasking with mobile technologies into account. Methodology: A quasi-experimental design was used. Data were collected from 117 students across two sections of an introductory Management Information Systems class taught by the first author. We took multiple steps to control for confounding factors and to increase the internal validity of the study: we used a control group as a comparison group, administered a pre-test, controlled for selection bias, and tested for demographic differences between groups. Contribution: This paper explicates the relationship between multitasking and learning success, defined as learning performance and learning satisfaction. Contrary to the literature, we found that multitasking involving texting does not decrease students' learning performance. One explanation for this change is the shift in the student population and its digital nativeness between the 2010s and the 2020s. Findings: Our study showed that multitasking involving texting does not decrease students' performance in class compared to not multitasking. Secondly, it showed that, overall, multitasking reduced students' learning satisfaction, despite the literature suggesting otherwise. We found that attitude towards multitasking moderated the relationship between multitasking and learning satisfaction: individuals with a positive attitude towards multitasking had high learning satisfaction when multitasking, but did not necessarily have higher learning performance. Recommendations for Practitioners: We recommend that both instructors and the designers of learning management systems take mobile multitasking into consideration when designing courses and course interfaces, rather than banning multitasking and assuming that students do not do it. Furthermore, we recommend addressing multitasking in relevant courses, such as Management Information Systems, to make students aware of their own multitasking behavior and its outcomes. Recommendation for Researchers: Future studies should investigate multitasking under different instruction methods; studies that make students aware of their multitasking behavior and its outcomes will be especially useful for the next generations. Impact on Society: Since mobile technologies are ubiquitous and their use in multitasking is common, multitasking with them affects performance across society. Future Research: Studies that replicate our research with larger and more diverse samples are needed. Future research could also explore research-based experiential teaching methods similar to those in this study.
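A two-group comparison of the kind this quasi-experiment describes can be sketched as follows. Welch's t statistic is one common choice for comparing two independent group means; the satisfaction scores below are hypothetical, not the study's data:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for the difference between two independent group means."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

# Hypothetical satisfaction scores (1-7 Likert) for a multitasking section
# and a non-multitasking control section
multitask = [4, 5, 4, 3, 4, 5]
control   = [5, 6, 5, 6, 5, 6]
t = welch_t(multitask, control)
```

A large negative `t` here would indicate lower mean satisfaction in the multitasking group; a full analysis would also report degrees of freedom and a p-value.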





Utilizing Design Thinking to Create Digital Self-Directed Learning Environment for Enhancing Digital Literacy in Thai Higher Education

Aim/Purpose: To explore the effectiveness of the design thinking approach in developing a digital self-directed learning environment that enhances digital literacy skills in Thai higher education. Background: To foster digital literacy skills in higher education, Thai students require more than access to technology. Emphasizing digital self-directed learning and incorporating a design thinking approach can empower students to learn and develop their digital skills effectively. This study explores the impact of a digital self-directed learning environment, developed using a design thinking approach, on the digital literacy skills of higher education students in Thailand. Methodology: The research methodology involves developing a digital self-directed learning environment, collecting and analyzing data, and using statistical analysis to compare outcomes between groups. The sample comprises 60 undergraduate students from the School of Industrial Education and Technology at King Mongkut Institute of Technology, divided into a control group (n = 30) and an experimental group (n = 30). Data analysis involves means, standard deviations, and one-way MANOVA. Contribution: This research adds to the evidence supporting the use of design thinking in developing digital self-directed learning environments, demonstrating its effectiveness in meeting learners' needs and improving learning outcomes in higher education. Findings: Key findings include: 1) the digital media and the self-directed learning activities plan developed through the design thinking approach received high-quality ratings from experts, with mean scores of 4.87 and 4.93, respectively; and 2) post-lesson comparisons of learning outcome and digital literacy assessment scores revealed that the group using digital media with self-directed learning activities had significantly higher mean scores than the traditional learning group, at the 0.001 significance level. Recommendations for Practitioners: Practitioners in higher education should use design thinking to develop digital self-directed learning environments that enhance digital literacy skills. This involves creating high-quality digital media and activities that promote engagement and improved outcomes; collaboration and stakeholder involvement are essential for effective implementation. Recommendation for Researchers: Researchers should continue to explore the effectiveness of design thinking approaches in the development of learning environments, as well as their influence on educational aspects such as student engagement, satisfaction, and overall learning outcomes. Impact on Society: By enhancing digital literacy skills among higher education students, this study contributes to the development of a digitally skilled workforce, encourages lifelong learning, and helps individuals navigate the challenges of the digital era. Future Research: Future research could explore a broader range of student demographics and educational settings to validate the effectiveness of the design thinking approach in enhancing digital literacy, including integrating design thinking with alternative digital learning and teaching methods.