auto

Investigating the Feasibility of Automatic Assessment of Programming Tasks

Aim/Purpose: The aims of this study were to investigate the feasibility of automatic assessment of programming tasks and to compare manual assessment with automatic assessment in terms of the effect of the different assessment methods on the marks of the students.

Background: Manual assessment of programs written by students can be tedious. Automatic assessment methods might help reduce the assessment burden, but there may be drawbacks that diminish the benefits of applying automatic assessment. The paper reports on the experience of a lecturer trying to introduce automated grading. Students' solutions to a practical Java programming test were assessed both manually and automatically, and the lecturer tied the experience to the unified theory of acceptance and use of technology (UTAUT).

Methodology: The participants were 226 first-year students registered for a Java programming course. Of the tests the participants submitted, 214 were assessed both manually and automatically. Various statistical methods were used to compare the manual assessment of students' solutions with the automatic assessment of the same solutions. A detailed investigation of the reasons for differences was also carried out. A further data collection method was the lecturer's reflection, based on the UTAUT, on the feasibility of automatic assessment of programming tasks.

Contribution: This study enhances the knowledge regarding the benefits and drawbacks of automatic assessment of students' programming tasks. The research contributes to the UTAUT by applying it in a context where it has hardly been used. Furthermore, the study confirms previous work stating that automatic assessment may be less reliable for students with lower marks, but more trustworthy for high-achieving students.

Findings: An automatic assessment tool verifying functional correctness might be feasible for assessing programs written during practical lab sessions but could be less useful for practical tests and exams, where functional, conceptual, and structural correctness should all be evaluated. In addition, the researchers found that automatic assessment seemed to be more suitable for assessing high-achieving students.

Recommendations for Practitioners: This paper makes it clear that lecturers should know what assessment goals they want to achieve. The appropriate method of assessment should be chosen wisely. In addition, practitioners should be aware of the drawbacks of automatic assessment before choosing it.

Recommendation for Researchers: This work serves as an example of how researchers can apply the UTAUT when conducting qualitative research in different contexts.

Impact on Society: The study would be of interest to lecturers considering automated assessment. The two assessments used in the study are typical of the way grading takes place in practice and may help lecturers understand what could happen if they switch from manual to automatic assessment.

Future Research: Investigate the feasibility of automatic assessment of students' programming tasks in a practical lab environment while accounting for structural, functional, and conceptual assessment goals.
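The functional-correctness checking discussed above boils down to running each submission against predefined test cases and comparing outputs. Below is a minimal Python sketch of such a grader; the class name, test cases, and timeout are hypothetical illustrations, not details of the tool used in the study.

```python
# Minimal sketch of a functional-correctness autograder for Java submissions.
# The class name, test cases, and timeout below are hypothetical examples.
import subprocess

TEST_CASES = [           # (stdin fed to the program, expected stdout)
    ("3 4\n", "7\n"),
    ("10 -2\n", "8\n"),
]

def grade(class_name: str) -> float:
    """Return the fraction of test cases whose output matches exactly."""
    passed = 0
    for stdin, expected in TEST_CASES:
        try:
            result = subprocess.run(
                ["java", class_name], input=stdin,
                capture_output=True, text=True,
                timeout=5,   # guard against submissions that loop forever
            )
            if result.stdout == expected:
                passed += 1
        except subprocess.TimeoutExpired:
            pass             # a hanging submission simply fails this case
    return passed / len(TEST_CASES)

print(f"mark: {grade('Calculator') * 100:.0f}%")
```

A checker of this kind observes only input/output behaviour, which is precisely why the findings flag automatic assessment as weaker where conceptual and structural correctness must also be evaluated.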




auto

A novel IoT-enabled portable, secure automatic self-lecture attendance system: design, development and comparison

This study focuses on the importance of monitoring student attendance in education and the challenges faced by educators in doing so. Existing methods for attendance tracking have drawbacks, including high costs, long processing times, and inaccuracies, while security and privacy concerns have often been overlooked. To address these issues, the authors present a novel internet of things (IoT)-based self-lecture attendance system (SLAS) that leverages smartphones and QR codes. This system effectively addresses security and privacy concerns while providing streamlined attendance tracking. It offers several advantages such as compact size, affordability, scalability, and flexible features for teachers and students. Empirical research conducted in a live lecture setting demonstrates the efficacy and precision of the SLAS system. The authors believe that their system will be valuable for educational institutions aiming to streamline attendance tracking while ensuring security and privacy.




auto

LDSAE: LeNet deep stacked autoencoder for secure systems to mitigate the errors of jamming attacks in cognitive radio networks

A hybrid network system for mitigating errors due to jamming attacks in cognitive radio networks (CRNs), named the LeNet deep stacked autoencoder (LDSAE), is developed. This exploration considers the sensing stage and the decision-making stage. The sensing unit is composed of four steps. First, the detected signal is forwarded to a filtering stage, where a band-pass filter (BPF) is utilised to filter the detected signal. Second, the filtered signal is squared. Third, the signal samples are combined, and jamming attacks occur here through the injection of false energy levels. Fourth, the attack maliciously affects the fusion centre (FC) decision. In the decision-making stage, the FC makes the decision and also recognises the jamming attacks that affect the link between the PU and SN; this is accomplished by employing an LDSAE-based trust model in which the proposed module differentiates malicious and selfish users. The analytic measures of LDSAE attained 79.40%, 79.90%, and 78.40%.
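The sensing chain described here (band-pass filtering, squaring, sample accumulation, threshold decision) is the classic energy detector. The following numpy/scipy sketch shows how an injected false energy level can flip the occupancy decision; the filter design, threshold, and jamming offset are illustrative assumptions, not parameters of LDSAE.

```python
# Energy-detection sensing sketch: filter -> square -> accumulate -> decide.
import numpy as np
from scipy.signal import butter, lfilter

rng = np.random.default_rng(0)
n = 512                                      # samples per sensing slot
b, a = butter(4, [0.1, 0.3], btype="band")   # BPF, band as fraction of Nyquist

def sense(signal: np.ndarray, jam_energy: float = 0.0) -> bool:
    """Return True if the band is declared occupied (energy above threshold)."""
    filtered = lfilter(b, a, signal)         # step 1: band-pass filtering
    energy = np.sum(filtered ** 2)           # steps 2-3: square and accumulate
    energy += jam_energy                     # a jammer injects false energy
    threshold = 1.5 * 0.2 * n                # assumed: 1.5x in-band noise energy
    return energy > threshold

noise_only = rng.normal(0.0, 1.0, n)         # idle band, unit-variance noise
print(sense(noise_only))                     # False: correctly reported idle
print(sense(noise_only, jam_energy=100.0))   # True: the attack flips the decision
```

This is the failure mode the trust model targets: the fused decision is only as good as the reported energies, so a node injecting a plausible false level can mislead the FC unless its reports are weighted down.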




auto

Automatically Grading Essays with Markit©




auto

Automatically Generating Questions in Multiple Variables for Intelligent Tutoring




auto

The Power of Normalised Word Vectors for Automatically Grading Essays




auto

Automatic Conceptual Analysis for Plagiarism Detection




auto

An Exploratory Survey in Collaborative Software in a Graduate Course in Automatic Identification and Data Capture




auto

SMS Based Wireless Home Appliance Control System (HACS) for Automating Appliances and Security




auto

The Adoption of Automatic Teller Machines in Nigeria: An Application of the Theory of Diffusion of Innovation




auto

Automatic Detection and Classification of Dental Restorations in Panoramic Radiographs

Aim/Purpose: The aim of this study was to develop a prototype of an information-generating computer tool designed to automatically map the dental restorations in a panoramic radiograph.

Background: A panoramic radiograph is an external dental radiograph of the oro-maxillofacial region, obtained with minimal discomfort and a significantly lower radiation dose compared to full mouth intra-oral radiographs or cone-beam computed tomography (CBCT) imaging. Currently, however, a radiologic informative report is not regularly designed for a panoramic radiograph, and the referring doctor needs to interpret the panoramic radiograph manually, according to his own judgment.

Methodology: An algorithm, based on techniques of computer vision and machine learning, was developed to automatically detect and classify dental restorations in a panoramic radiograph, such as fillings, crowns, root canal treatments, and implants. An experienced dentist evaluated 63 anonymized panoramic images and manually marked 316 various restorations on them. The images were automatically cropped to obtain a region of interest (ROI) containing only the upper and lower alveolar ridges. The algorithm automatically segmented the restorations using a local adaptive threshold. In order to improve detection of the dental restorations, morphological operations such as opening, closing, and hole-filling were employed. Since each restoration is characterized by a unique shape and a unique gray level distribution, 20 numerical features describing the contour and the texture were extracted in order to classify the restorations. Twenty-two different machine learning models were evaluated, using a cross-validation approach, to automatically classify the dental restorations into 9 categories.

Contribution: The computer tool will provide automatic detection and classification of dental restorations, as an initial step toward automatic detection of oral pathologies in a panoramic radiograph. The use of this algorithm will aid in generating a radiologic report which includes all the information required to improve patient management and treatment outcome.

Findings: The automatic cropping of the ROI in the panoramic radiographs, in order to include only the alveolar ridges, was successful in 97% of the cases. The developed algorithm for detection and classification of the dental restorations correctly detected 95% of the restorations. 'Weighted k-NN' was the machine-learning model that yielded the best classification rate for the dental restorations: 92%.

Impact on Society: Information that will be extracted automatically from the panoramic image will provide a reliable, reproducible radiographic report, currently unavailable, which will assist the clinician as well as improve patients' reliance on the diagnosis.

Future Research: The algorithm for automatic detection and classification of dental restorations in panoramic imaging must be trained on a larger dataset to improve the results. This algorithm will then be used as a preliminary stage for automatically detecting incidental oral pathologies exhibited in the panoramic images.
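As a rough illustration of the pipeline summarized above, the sketch below chains a local adaptive threshold, morphological opening and closing, and a distance-weighted k-NN classifier. The block size, kernel size, and stand-in feature data are assumptions for demonstration, not the study's tuned values.

```python
# Segmentation and classification sketch for bright (radiopaque) restorations.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def segment_restorations(roi: np.ndarray) -> np.ndarray:
    """Segment bright restorations in a grayscale (uint8) ROI of the ridges."""
    mask = cv2.adaptiveThreshold(
        roi, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY,
        blockSize=51, C=-10)                  # keep pixels above local mean + 10
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop small specks
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask

# 'Weighted k-NN': neighbours vote with weights inversely proportional to
# distance. X stands in for the 20 contour/texture features per restoration,
# y for the 9 restoration categories; both are random placeholders here.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(316, 20)), rng.integers(0, 9, size=316)
clf = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X, y)
print(clf.predict(X[:3]))
```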




auto

Autoethnography of the Cultural Competence Exhibited at an African American Weekly Newspaper Organization

Aim/Purpose: Little is known of the cultural competence or leadership styles of a minority-owned newspaper. This autoethnography serves to benchmark one early 1990s example.

Background: I focused on a series of flashbacks to observe an African American weekly newspaper editor-in-chief to whom I reported 25 years ago. In my reflections I sought to answer these questions: How do minorities in entrepreneurial organizations view their own identity, their cultural competence? What degree of this perception is conveyed fairly and equitably in the community they serve?

Methodology: Autoethnography using both flashbacks and article artifacts, applied to the leadership of an early 1990s African American weekly newspaper.

Contribution: Since a literature gap of minority newspaper cultural competence examples is apparent, this observation can serve as a benchmark, springboarding off older studies like that of Barbarin (1978). By examining the leadership styles and editorial authenticity as noted by The Chicago School of Media Theory (2018), these results can be used for comparison with other such minority-owned publications.

Findings: Bringing people together, mixing them up, and conducting business in anything but a routine way helped the Afro-American Gazette, Grand Rapids, proudly display a confident sense of cultural competence. The result was a potentiating leadership style, and this style positively changed the perception of culture, a social theory change example.

Recommendations for Practitioners: For the minority leaders of such publications, this example demonstrates effective use of potentiating leadership to positively change the perception of the quality of such minority-owned newspapers.

Recommendations for Researchers: Such an autoethnography could be used by others to help document other examples of cultural competence in other minority-owned newspapers.

Impact on Society: The overall impact shows that leadership at such minority-owned publications can influence the community into a positive social change example.

Future Research: Research in the areas of cultural competence and leadership within minority-owned newspapers, as well as other minority alternative publications and websites, can be conducted with a focus on what works right as well as on examples that might show little social change model influence. The suggestion is to conduct the research while employed, if possible, instead of relying on flashbacks.




auto

Combining Summative and Formative Evaluation Using Automated Assessment

Aim/Purpose: Providing both formative and summative assessment that allows students to learn from their mistakes is difficult in large classes. This paper describes an automated assessment system suitable for courses with even 100 or more students.

Background: Assessment is a vital part of any course of study. Ideally, students should be given formative assessment with feedback during the course so students and tutors can identify weaknesses and focus on what needs improvement before summative assessment, which results in a grade. This paper describes an automated assessment system that lessens the burden of providing formative assessment in large classes.

Methodology: We used Checkpoint, a web-based automated assessment system, to grade assignments in a number of different computer science courses.

Contribution: The students come from diverse backgrounds, with a wide range of ages, previous qualifications, and technical skills, and our approach allows the students to work at their own pace according to their individual needs, submitting their solutions as many times as they wish up to a deadline, using feedback provided by the system to help identify and correct their mistakes before trying again.

Findings: Use of automated assessment allows us to achieve the goals of both summative and formative assessment: we allow students to learn from their mistakes without incurring a penalty, while at the same time awarding them a grade to validate their efforts. The students have an overwhelmingly positive view of our use of automated assessment, and their comments support our views on the assessment process.

Recommendations for Practitioners: Because of the increasing number of students in today's courses, we recommend using automated assessment wherever possible.




auto

The Generalized Requirement Approach for Requirement Validation with Automatically Generated Program Code




auto

The Application of a Knowledge Management Framework to Automotive Original Component Manufacturers

Aim/Purpose: This paper aims to present an example of the application of a Knowledge Management (KM) framework to automotive original component manufacturers (OEMs). The objective is to explore KM according to the four pillars of a selected KM framework.

Background: This research demonstrates how a framework, namely the George Washington University's Four Pillar Framework, can be used to determine the KM status of the automotive OEM industry, where knowledge is complex and can influence the complexity of the KM system (KMS) used.

Methodology: An empirical study was undertaken using a questionnaire to gather quantitative data. There were 38 respondents from the National Association of Automotive Component and Allied Manufacturers (NAACAM) and suppliers from three major automotive OEMs. The respondents were required to be familiar with the company's KMS.

Contribution: Currently there is a limited body of research available on KM implementation frameworks for the automotive industry. This study presents a novel approach to the use of a KM framework to reveal the status of KM in automotive OEMs. At the time of writing, the relationship between the four pillars and the complexity of KMS had not yet been determined.

Findings: The results indicate that there is a need to improve KM in the automotive OEM industry. According to the relationships investigated, the four pillars, namely leadership, organization, technology, and learning, are considered important for KM, regardless of the level of KMS complexity.

Recommendations for Practitioners: Automotive OEMs need to ensure that the KM aspects are established and should periodically evaluate them by using a KM framework, such as the George Washington University's Four Pillar Framework, to identify KM weaknesses.

Recommendation for Researchers: The establishment and upkeep of a successful KM environment is challenging due to the complexity involved with various influencing aspects. To ensure that all aspects are considered in KM environments, comprehensive KM frameworks, such as the George Washington University's Four Pillar Framework, need to be applied.

Impact on Society: The status of KM and the accessibility of knowledge in organizations need to be periodically examined in order to improve supplier and OEM knowledge sharing.

Future Research: Although the framework used provides a process for KM status determination, this study could be extended by investigating a methodology that includes KMS best practice and tools. This study could be repeated at a national and international level to provide an indication of KM practice within the entire automotive industry.




auto

PRATO: An Automated Taxonomy-Based Reviewer-Proposal Assignment System

Aim/Purpose: This paper reports our implementation of a prototype system, namely PRATO (Proposals Reviewers Automated Taxonomy-based Organization), for automatic assignment of proposals to reviewers based on categorized tracks and partial matching of reviewers' profiles of research interests against proposal keywords.

Background: The process of assigning reviewers to proposals tends to be a complicated task, as it involves inspecting the match between a given proposal and a reviewer based on different criteria. The situation becomes worse if one tries to automate this process, especially if a reviewer only partially matches the domain of the paper at hand. Hence, a new controlled approach is required to facilitate the matching process.

Methodology: Proposals and reviewers are organized into categorized tracks as defined by a tree of hierarchical research domains, which correspond to the university's colleges and departments. In addition, reviewers create their profiles of research interests (keywords) at the time of registration. Initial assignment is based on the matching of the categorized sub-tracks of proposal and reviewer. Where the proposal and a reviewer fall under different categories (sub-tracks), assignment is done based on partial matching of proposal content against reviewers' research interests: Jaccard similarity coefficient scores are calculated between proposal keywords and reviewers' profiles of research interests, and the reviewer with the highest score is chosen. The system was used to automate the process of proposal-reviewer assignment at Umm Al-Qura University during the 2017-2018 funding cycle. The list of proposal-reviewer assignments generated by the system was sent to human experts for voting, and final assignments were subsequently made accordingly. With expert votes and final decisions as evaluation criteria, data on system-expert agreements (in terms of "accept" or "reject") were collected and analyzed by tallying frequencies and calculating rejection/acceptance ratios to assess the system's performance.

Contribution: This work helped the Deanship of Scientific Research (DSR), a funding agency at Umm Al-Qura University, in managing the process of reviewing proposals submitted for funding. We believe the work can also benefit any organization or conference seeking to automate the assignment of papers to the most appropriate reviewers.

Findings: Our developed prototype, PRATO, showed a considerable impact on the entire process of reviewing proposals at the DSR. It automated the assignment of proposals to reviewers and resulted in 56.7% correct assignments overall. This indicates that PRATO performed considerably well at this early stage of its development.

Recommendations for Practitioners: It is important for funding agencies and publishers to automate the reviewing process to obtain better reviewing quality in a timely manner.

Recommendation for Researchers: This work highlighted a new methodology to tackle the proposal-reviewer assignment task in an automated manner. More evaluation may be needed with consideration of different categories, especially for partially matched candidates.

Impact on Society: The new methodology and knowledge about factors influencing the implementation of automated proposal-reviewing systems will help funding agencies and publishers to improve the quality of their internal processes.

Future Research: In the future, we plan to examine PRATO's performance on different classification schemes where specialty areas can be represented as graphs rather than trees. With graph representation, the scope for reviewer selection can be widened to include more general fields of specialty. Moreover, we will try to record the reasons for rejection to identify accurately whether a rejection was due to improper assignment or to other reasons.
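The partial-matching step lends itself to a compact illustration. Below is a minimal sketch of Jaccard scoring between a proposal's keywords and reviewers' interest profiles, picking the highest-scoring reviewer; the names and keyword sets are hypothetical.

```python
# Jaccard-based partial matching of a proposal against reviewer profiles.
def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|, defined as 0.0 when both sets are empty."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

proposal_keywords = {"machine learning", "arabic nlp", "text mining"}
reviewer_profiles = {
    "R1": {"computer vision", "deep learning"},
    "R2": {"text mining", "machine learning", "information retrieval"},
}

best = max(reviewer_profiles,
           key=lambda r: jaccard(proposal_keywords, reviewer_profiles[r]))
print(best, jaccard(proposal_keywords, reviewer_profiles[best]))  # R2 0.5
```

In PRATO this scoring only comes into play when no reviewer shares the proposal's sub-track, which keeps the taxonomy as the primary assignment signal and the keyword overlap as the fallback.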




auto

Automatic Generation of Temporal Data Provenance From Biodiversity Information Systems

Aim/Purpose: Although the significance of data provenance has been recognized in a variety of sectors, there is currently no standardized technique or approach for gathering it. Existing automated techniques mostly employ workflow-based strategies. Unfortunately, the majority of current information systems do not embrace this strategy, particularly biodiversity information systems, in which data are acquired by a variety of persons using a wide range of equipment, tools, and protocols.

Background: This article presents an automated technique for producing temporal data provenance that is independent of biodiversity information systems. The approach relies on changes in the contextual information of data items. By mapping the modifications to a schema, a standardized representation of data provenance can be created, and temporal information can then be automatically inferred.

Methodology: The research methodology consists of three main activities: database event detection, event-schema mapping, and temporal information inference. First, a list of events is detected from databases. After that, the detected events are mapped to an ontology, so that a common representation of data provenance is obtained. Based on the derived data provenance, rule-based reasoning is then automatically used to infer temporal information, producing a temporal provenance.

Contribution: This paper provides a new method for generating data provenance automatically without interfering with the existing biodiversity information system. In addition, it does not mandate that any information system adhere to any particular form. Ontology and the rule-based system, as the core components of the solution, have been confirmed to be highly valuable in biodiversity science.

Findings: Detaching the solution from any biodiversity information system provides scalability in the implementation. Based on the evaluation of a typical biodiversity information system for species traits of plants, a large amount of temporal information can be generated to the highest degree possible. Using rules to encode different types of knowledge provides high flexibility in generating temporal information, enabling different temporal-based analyses and reasoning.

Recommendations for Practitioners: The strategy is based on the contextual information of data items, yet most information systems simply save the most recent values. As a result, for the solution to function properly, database snapshots must be stored on a frequent basis. Furthermore, a more practical technique for recording changes in contextual information would be preferable.

Recommendation for Researchers: The capability to uniformly represent events using a schema has paved the way for automatic inference of temporal information. Therefore, a richer representation of temporal information should be investigated further. Also, this work demonstrates that rule-based inference provides the flexibility to encode different types of knowledge from experts, so that a variety of temporal-based data analyses and reasoning can be performed. It would therefore be worthwhile to investigate multiple domain-oriented kinds of knowledge using the solution.

Impact on Society: Using a typical information system to store and manage biodiversity data has not prohibited us from generating data provenance. Since there is no restriction on the type of information system, our solution has a high potential to be widely adopted.

Future Research: The data analysis of this work was limited to species traits data. However, there are other types of biodiversity data, including genetic composition, species population, and community composition. In the future, this work will be expanded to cover all those types of biodiversity data. The ultimate goal is to have a standard methodology or strategy for collecting provenance from any biodiversity data, regardless of how the data were stored or managed.
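The three-step methodology can be pictured with a small sketch: detect events by diffing successive database snapshots, map them to a common provenance representation, and apply a simple rule to infer temporal bounds. The snapshot format, event schema, and rule below are illustrative assumptions, not the paper's ontology.

```python
# Snapshot diffing and rule-based temporal inference, in miniature.
from datetime import datetime

def detect_events(old: dict, new: dict, taken_at: datetime) -> list:
    """Diff two snapshots of {record_id: contextual_info} into event records."""
    events = [{"type": "insert", "record": rid, "observed": taken_at}
              for rid in new.keys() - old.keys()]
    events += [{"type": "update", "record": rid, "observed": taken_at}
               for rid in old.keys() & new.keys() if old[rid] != new[rid]]
    return events

def infer_temporal(events: list) -> list:
    """Rule: a change became valid no later than the snapshot that revealed it."""
    return [{**e, "valid_from_upper_bound": e["observed"]} for e in events]

snap_monday = {"sp1": {"leaf_area": 12}}
snap_tuesday = {"sp1": {"leaf_area": 14}, "sp2": {"leaf_area": 9}}
for p in infer_temporal(detect_events(snap_monday, snap_tuesday,
                                      datetime(2022, 5, 3))):
    print(p)
```

Because the diffing works on snapshots alone, nothing in the hosting information system needs to change, which is the independence property the abstract emphasizes.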




auto

Revolutionizing Autonomous Parking: GNN-Powered Slot Detection for Enhanced Efficiency

Aim/Purpose: Accurate detection of vacant parking spaces is crucial for autonomous parking. Deep learning, particularly Graph Neural Networks (GNNs), holds promise for addressing the challenges of diverse parking lot appearances and complex visual environments. Our GNN-based approach leverages the spatial layout of detected marking points in around-view images to learn robust feature representations that are resilient to occlusions and lighting variations. We demonstrate significant accuracy improvements on benchmark datasets compared to existing methods, showcasing the effectiveness of our GNN-based solution. Further research is needed to explore the scalability and generalizability of this approach in real-world scenarios and to consider the potential ethical implications of autonomous parking technologies.

Background: GNNs offer a number of advantages over traditional parking spot detection methods. Unlike methods that treat objects as discrete entities, GNNs can leverage the inherent connections among parking markers (lines, dots) inside an image. This ability to exploit spatial connections leads to more accurate parking space detection, even in challenging scenarios with shifting illumination. Real-time applications, critical for autonomous vehicles, are another area where GNNs show promise. Their ability to capture linkages across marking sites may further simplify the process compared to traditional deep-learning approaches that need complex feature engineering. Furthermore, the proposed GNN model streamlines parking space recognition by potentially combining slot inference and marking point recognition in a single step. All things considered, GNNs present a viable method for obtaining stronger and more precise parking slot recognition, opening the door for developments in autonomous car self-parking technology.

Methodology: The proposed research introduces a novel, end-to-end trainable method for parking slot detection using bird's-eye images and GNNs. The approach involves a two-stage process. First, a marking-point detector network is employed to identify potential parking markers, extracting features such as confidence scores and positions. After refining these detections, a marking-point encoder network extracts and embeds location and appearance information. The enhanced data is then loaded into a fully connected graph, with each node representing a marker. An attentional GNN is then utilized to leverage the spatial relationships between neighbors, allowing for selective information aggregation and capturing intricate interactions. Finally, a dedicated entrance line discriminator network, trained on GNN outputs, classifies pairs of markers as potential entry lines based on learned node attributes. This multi-stage approach, evaluated on benchmark datasets, aims to achieve robust and accurate parking slot detection even in diverse and challenging environments.

Contribution: The present study makes a significant contribution to the parking slot detection domain by introducing an attentional GNN-based approach that capitalizes on the spatial relationships between marking points for enhanced robustness. Additionally, the paper offers a fully trainable end-to-end model that eliminates the need for manual post-processing, thereby streamlining the process. Furthermore, the study reduces training costs by dispensing with the need for detailed annotations of marking point properties, making the approach more accessible and cost-effective.

Findings: The goal of this research is to present a unique approach to parking space recognition using GNNs and bird's-eye images. The study's findings demonstrated significant improvements over earlier algorithms, with accuracy on par with the state-of-the-art DMPR-PS method. Moreover, the suggested method provides a fully trainable solution with less reliance on manually specified rules and more economical training needs. One crucial component of this approach is the GNN's performance: by making use of the spatial correlations between marking locations, the GNN delivers greater accuracy and recall than a fully connected baseline. Further analysis using cosine similarity shows that the GNN successfully learns discriminative features, separating paired marking points (which form parking slots) from unpaired ones. There are limitations, though, especially where markings are unclear. Successful parking slot identification in various circumstances demonstrates the method's usefulness, with occasional failures in poor visibility conditions. Future work will address these limitations and explore adapting the model to different image formats (e.g., side-view) and scenarios without relying on prior entry line information. An ablation study was conducted to investigate the impact of different backbone architectures on image feature extraction; the results reveal that VGG16 is optimal for balancing accuracy and real-time processing requirements.

Recommendations for Practitioners: Developers of parking systems are encouraged to incorporate GNN-based techniques into their autonomous parking systems, as these methods exhibit enhanced accuracy and robustness when handling a wide range of parking scenarios. Furthermore, attention mechanisms within deep learning models can provide significant advantages for tasks that involve spatial relationships and contextual information in other vision-based applications.

Recommendation for Researchers: Further research is necessary to assess the effectiveness of GNN-based methods in real-world situations. To obtain accurate results, it is important to employ large-scale datasets that include diverse lighting conditions, parking layouts, and vehicle types. Incorporating semantic information such as parking signs and lane markings into GNN models can enhance their ability to interpret and understand context. Moreover, it is crucial to address ethical concerns, including privacy, potential biases, and responsible deployment, in the development of autonomous parking technologies.

Impact on Society: Optimized utilization of parking spaces can help cities manage parking resources efficiently, thereby reducing traffic congestion and fuel consumption. Automating parking processes can also enhance accessibility and provide safer and more convenient parking experiences, especially for individuals with disabilities. The development of dependable parking capabilities for autonomous vehicles can also contribute to smoother traffic flow, potentially reducing accidents and positively impacting society.

Future Research: Developing and optimizing GNN-based models for real-time deployment in autonomous vehicles with limited resources is a critical objective. Investigating the integration of GNNs with other deep learning techniques and with additional sensing modalities such as radar is essential for multi-modal parking slot detection and a richer understanding of the environment. Lastly, it is crucial to develop explainable AI methods to elucidate the decision-making processes of GNN models in parking slot detection, ensuring fairness, transparency, and responsible utilization of this technology.
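To make the attentional aggregation step concrete, here is a self-contained numpy sketch: every detected marking point attends over all others in the fully connected graph, so pairs that plausibly form an entrance line can exchange more information. The dimensions, random weights, and the final pair score are illustrative assumptions, not the paper's trained model.

```python
# One attention layer over a fully connected graph of marking points.
import numpy as np

rng = np.random.default_rng(1)
num_points, dim = 6, 16
H = rng.normal(size=(num_points, dim))   # embedded position+appearance features
Wq, Wk, Wv = (rng.normal(size=(dim, dim)) for _ in range(3))

def attention_layer(H: np.ndarray) -> np.ndarray:
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(H.shape[1])       # pairwise compatibility
    np.fill_diagonal(scores, -np.inf)            # a point ignores itself
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return H + weights @ V                       # residual, attention-weighted mix

H = attention_layer(H)
# An entrance-line discriminator would then score node pairs, e.g.:
print(H[0] @ H[1] / np.sqrt(dim))
```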




auto

Automatic pectoral muscles and artefacts removal in mammogram images for improved breast cancer diagnosis

Breast cancer is the leading cause of mortality among women compared to other types of cancer. Hence, early breast cancer diagnosis is crucial to the success of treatment. Various pathological and imaging tests are available for the diagnosis of breast cancer. However, these tests may introduce errors during detection and interpretation, leading to false-negative and false-positive results in the absence of pre-processing. To overcome this issue, we propose an effective image pre-processing technique based on Otsu's thresholding and single-seeded region growing (SSRG) to remove artefacts and segment the pectoral muscle from breast mammograms. To validate the proposed method, the publicly available MIAS dataset was utilised. The experimental findings showed that the proposed technique improved breast cancer detection accuracy by 18% compared to existing methods. The proposed methodology works efficiently for artefact removal and pectoral segmentation across different shapes and nonlinear patterns.
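A minimal sketch of the two pre-processing stages named above follows: Otsu's global threshold to mask out background artefacts, then a single-seeded region growing pass to isolate the pectoral muscle. The seed position and intensity tolerance are assumptions for demonstration; in practice the seed would sit in the known pectoral corner of the mammogram.

```python
# Otsu masking plus single-seeded region growing (SSRG), in miniature.
import cv2
import numpy as np
from collections import deque

def remove_artefacts(mammogram: np.ndarray) -> np.ndarray:
    """Keep only the breast region of a uint8 grayscale mammogram."""
    _, mask = cv2.threshold(mammogram, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.bitwise_and(mammogram, mammogram, mask=mask)

def grow_region(img: np.ndarray, seed: tuple, tol: int = 15) -> np.ndarray:
    """Flood outward from `seed`, accepting pixels within `tol` of its value."""
    h, w = img.shape
    region = np.zeros((h, w), np.uint8)
    base = int(img[seed])
    queue = deque([seed])
    region[seed] = 255
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(int(img[ny, nx]) - base) <= tol):
                region[ny, nx] = 255
                queue.append((ny, nx))
    return region   # pectoral-muscle mask, to be removed before diagnosis

demo = np.arange(100, dtype=np.uint8).reshape(10, 10) * 2
print(int(grow_region(demo, (0, 0)).sum()) // 255)  # pixels grown from the seed
```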




auto

On large automata processing: towards a high level distributed graph language

Large graphs or automata hold data that cannot fit in a single machine, or that may take unreasonable time to process. We implement, with MapReduce and Giraph, two algorithms for intersecting and minimising large, distributed automata. We provide some comparative analysis, with the experimental results depicted in figures. Our work experimentally validates our propositions in that it shows that our choice, in comparison with the MapReduce one, is not only more suitable for graph-oriented algorithms but also speeds up execution. This work is one of the first steps of a long-term goal: a high-level distributed graph processing language.
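For reference, the sequential core of automata intersection is the classic product construction sketched below; the cited work distributes this same computation over MapReduce and Giraph, which the snippet does not attempt to reproduce.

```python
# Product construction: intersect two DFAs, keeping only reachable states.
def intersect(d1, d2):
    """Each DFA is (states, alphabet, delta, start, accepting), where
    delta maps (state, symbol) -> state. Returns the reachable product DFA."""
    alphabet = d1[1] & d2[1]
    start = (d1[3], d2[3])
    states, frontier, delta = {start}, [start], {}
    while frontier:
        p, q = frontier.pop()
        for sym in alphabet:
            nxt = (d1[2][(p, sym)], d2[2][(q, sym)])
            delta[((p, q), sym)] = nxt
            if nxt not in states:
                states.add(nxt)
                frontier.append(nxt)
    accepting = {s for s in states if s[0] in d1[4] and s[1] in d2[4]}
    return states, alphabet, delta, start, accepting

# Example: words over {a, b} with an even number of a's AND ending in b.
even_a = ({0, 1}, {"a", "b"},
          {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}, 0, {0})
ends_b = ({0, 1}, {"a", "b"},
          {(0, "a"): 0, (0, "b"): 1, (1, "a"): 0, (1, "b"): 1}, 0, {1})
states, _, _, _, accepting = intersect(even_a, ends_b)
print(len(states), accepting)   # 4 reachable product states, accepting {(0, 1)}
```

The product state space grows with the product of the input sizes, which is what motivates vertex-centric frameworks such as Giraph, where each product state becomes a message-passing vertex, over repeated MapReduce passes, in line with the comparison reported above.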




auto

Data as a potential path for the automotive aftersales business to remain active through and after the decarbonisation

This study aims to identify and understand the perspectives of automotive aftersales stakeholders regarding current challenges posed by decarbonisation strategies. It examines potential responses that the automotive aftersales business could undertake to address these challenges. Semi-structured interviews were undertaken with automotive industry experts from Europe and Latin America. This paper focuses primarily on impacts of decarbonisation upon automotive aftersales and the potential role of data in that business. Results show that investment in technology will be a condition for businesses that want to remain active in the industry. Furthermore, experts agree that incumbent manufacturers are not filling the technology gap that the energy transition is creating in the automotive sector, a consequence of which will be the entrance of new players from other sectors. The current aftersales businesses will potentially lose bargaining control. Moreover, policy makers are seen as unreliable leaders of the transition agenda.




auto

An Integrated Approach for Automatic Aggregation of Learning Knowledge Objects




auto

An Ontology to Automate Learning Scenarios? An Approach to its Knowledge Domain




auto

Plagiarism Management: Challenges, Procedure, and Workflow Automation

Aim/Purpose: This paper presents some of the issues that academia faces in both the detection of plagiarism and the aftermath. The focus is on the latter: how academics and educational institutions around the world can address the challenges that follow the identification of an incident. The scope is to identify the need for, and describe, specific strategies to efficiently manage plagiarism incidents.

Background: Plagiarism is possibly one of the major academic misconduct offences. Yet, only a portion of Higher Education Institutes (HEIs) appear to have well-developed policies and procedures aimed at dealing with this issue, or to follow these when required. Students who plagiarize and are not caught pose challenges for academia. Students who are caught pose equal challenges.

Methodology: Following a literature review that identifies and describes the extent and the seriousness of the problem, procedures and strategies to address the issue are recommended, based on the literature and best practices.

Contribution: The paper alerts academics to the need for the establishment of rigorous and standardized procedures to address the challenges that follow the identification of a plagiarism incident. It then describes how to streamline the process to improve consistency and reduce the errors and the effort required of academic staff.

Recommendations for Practitioners: To ensure that what is expected to happen takes place, HEIs should structure the process of managing suspected plagiarism cases. Operationalization, workflow automation, diagrams that map the processes involved, clear information and examples to support and help academics make informed and consistent decisions, templates to communicate with the offenders, and databases to record incidents for future reference are strongly recommended.

Future Research: This paper provides a good basis for further research that will examine the plagiarism policy, the procedures, and the outcome of employing the procedures within the faculties of a single HEI, or an empirical comparison of these across a group of HEIs.

Impact on Society: Considering its potential consequences, educational institutions should strive to prevent, detect, and deter plagiarism, and any type of student misconduct. Inaction can be harmful, as it is likely that some students will not gain the appropriate knowledge that their chosen profession requires, which could put in danger both their wellbeing and the people they will later serve in their careers.




auto

Towards the Automatic Generation of Virtual Presenter Agents




auto

Local Density Estimation Procedure for Autoregressive Modeling of Point Process Data

Nat PAVASANT, Takashi MORITA, Masayuki NUMAO, Ken-ichi FUKUI, Vol.E107-D, No.11, pp.1453-1457
We proposed a procedure to pre-process data used in vector autoregressive (VAR) modeling of a temporal point process by using kernel density estimation. Vector autoregressive modeling of point-process data is used, for example, for causality inference. The VAR model discretizes the timeline into small windows, creates a time series from the presence of events in each window, and then models the presence of an event at the next time step from its history. The problem is that obtaining a longer history with high temporal resolution requires a large number of windows and, thus, model parameters. We proposed the local density estimation procedure, which, instead of using binary presence as the input to the model, performs kernel density estimation of the event history and discretizes the estimate to be used as the input. This allows us to reduce the number of model parameters, especially for sparse data. Our experiment on a sparse Poisson process showed that this procedure substantially improves model prediction performance.
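A hedged sketch of the described pre-processing follows: instead of feeding the VAR model a binary presence series, estimate the event history's density with a Gaussian kernel and discretize that estimate. The bandwidth, window width, and number of levels are illustrative choices, not the paper's tuned values.

```python
# Local density estimation: KDE of event history, discretized per window.
import numpy as np
from scipy.stats import gaussian_kde

event_times = np.array([0.4, 0.5, 2.1, 2.2, 2.3, 7.9])  # sparse point process
windows = np.arange(0.0, 10.0, 0.5)                     # window start times

# Baseline input: 0/1 presence of an event in each window.
binary_series = (np.histogram(event_times,
                              bins=np.append(windows, 10.0))[0] > 0).astype(int)

# Proposed-style input: kernel density estimate evaluated per window,
# then discretized into a small number of levels.
kde = gaussian_kde(event_times, bw_method=0.3)
density = kde(windows)
levels = np.linspace(0.0, density.max(), 4)
density_series = np.digitize(density, bins=levels)

print(binary_series)
print(density_series)   # graded history: fewer windows can carry more signal
```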
Publication Date: 2024/11/01




auto

Aggregated to Pipelined Structure Based Streaming SSN for 1-ms Superpixel Segmentation System in Factory Automation

Yuan LI, Tingting HU, Ryuji FUCHIKAMI, Takeshi IKENAGA, Vol.E107-D, No.11, pp.1396-1407
1 millisecond (1-ms) vision systems are gaining increasing attention in diverse fields like factory automation and robotics, as their ultra-low delay ensures seamless and timely responses. Superpixel segmentation is a pivotal preprocessing step for reducing the number of image primitives for subsequent processing. Recently, there has been a growing emphasis on leveraging deep network-based algorithms in pursuit of superior performance and better integration with other deep network tasks. The Superpixel Sampling Network (SSN) employs a deep network for feature generation and differentiable SLIC for superpixel generation, achieving high performance with a small number of parameters. However, implementing SSN on FPGAs for ultra-low delay faces challenges because the final layer aggregates intermediate results. To address this limitation, this paper proposes an aggregated-to-pipelined structure for FPGA implementation. The final layer is decomposed into individual final layers for each intermediate result. This architectural adjustment eliminates the need for memory to store intermediate results. Concurrently, the proposed structure leverages the decomposed layers to facilitate a pipelined structure with pixel-streaming input to achieve ultra-low latency. To cooperate with the pipelined structure, a layer-partitioned memory architecture is proposed: each final layer has dedicated memory for storing superpixel center information, allowing values to be read and calculated from memory without conflicts. The calculation results of each final layer are accumulated, and the result of each pixel is obtained as the stream reaches the last layer. Evaluation results demonstrate that boundary recall and under-segmentation error remain comparable to SSN, with an average label consistency improvement of 0.035 over SSN. From a hardware performance perspective, the proposed system processes images at 1000 fps with a delay of 0.947 ms/frame.
Publication Date: 2024/11/01




auto

TALK: Automated Data Augmentation via Wikidata Relationships

Oyesh Singh, UMBC. 10:30-11:30 Monday, 21 October 2019, ITE 346. With the increase in complexity of machine learning models, there is more need for data than ever. In order to fill this gap in annotated data-scarce situations, we look towards the ocean of free data present in Wikipedia and other […]





auto

Reinforcement Quantum Annealing: A Quantum-Assisted Learning Automata Approach

We introduce the reinforcement quantum annealing (RQA) scheme, in which an intelligent agent interacts with a quantum annealer that plays the stochastic environment role of learning automata and tries to iteratively find better Ising Hamiltonians for the given problem of interest. As a proof-of-concept, we propose a […]





auto

Paper: Reinforcement Quantum Annealing: A Hybrid Quantum Learning Automata

Results using the reinforcement learning technique on two SAT benchmarks using a D-Wave 2000Q quantum processor showed significantly better solutions with fewer samples compared to the best-known quantum annealing techniques.






auto

KNOWLEDGE INHERITANCE, VERTICAL INTEGRATION AND ENTRANT SURVIVAL IN THE EARLY U.S. AUTO INDUSTRY

A key finding in the literature on industry evolution and strategy is that knowledge "inherited" from the founder's previous employer can be an important source of a new firm's capabilities. We analyze the conditions under which knowledge that is useful for carrying out a key value chain activity is inherited, and explore the mechanism through which such an inheritance shapes an entrant's strategies and, in the process, influences its performance. Evidence from the early U.S. auto industry indicates that employee spinoffs generated from incumbents that had integrated a key value chain activity were also more likely to integrate that activity than other entrants, which, we suggest, reflects the application of knowledge inheritance relative to that activity. Moreover, we find that the integration of this key activity, stimulated by knowledge inheritance, contributed to the establishment of defensible strategic positioning, thereby enhancing the survival duration of inheriting spinoffs. We thus link together the phenomena of knowledge inheritance, vertical integration, and strategic positioning to explain entrant performance. These three phenomena tend to be treated disparately in the literature, rather than in combination.




auto

Programming-based formal languages and automata theory: design, implement, validate, and prove

This rather difficult read introduces the programming language FSM and the programming platform DrRacket. The author asserts that it is a convenient platform for designing and proving automata-based software.




auto

Artificial intelligence to automate the systematic review of scientific literature from Computing

The study shows that artificial intelligence (AI) has become highly important in contemporary computing because of its capacity to efficiently tackle intricate jobs that were typically carried out by people. The authors provide scientific literature that analyzes and




auto

Autocount partners IAB LCCI to launch Asia’s first cloud accounting program

KUALA LUMPUR: AutoCount Dotcom Bhd (ADB), via its wholly-owned subsidiary Auto Count Sdn Bhd (ACSB), partnered with IAB LCCI Ltd, a collaboration formed following the Institute of Accountants and Bookkeepers’ (IAB) acquisition of the London Chamber of Commerce and Industry (LCCI) qualifications.

This agreement sets the stage for Asia’s first Cloud Accounting Certification Program, which will equip finance professionals with essential skills for the digital era.

The program will be launched on January 1, 2025, marking a significant step forward in modernising the region’s accounting landscape.

Under this collaboration, ADB will design the certification curriculum around its AutoCount Cloud Accounting software.

The syllabus will be submitted to IAB LCCI for accreditation.

IAB LCCI is regulated by the UK’s Office of Qualifications and Examinations Regulation (Ofqual), enhancing the certification’s credibility and alignment with global standards.

With LCCI’s extensive reach across Asia, the certification will be accessible through its network of educational centres and partner institutions, providing aspiring accountants with in-demand cloud accounting expertise.

ADB CEO Yan Tiee Choo said this collaboration with IAB LCCI allows the company to empower the next generation of accountants across Asia.

“Our goal is to provide a practical and accessible path to certification in cloud accounting, supporting not only recent SPM (Sijil Pelajaran Malaysia) graduates but also those seeking to upskill in a fast-changing industry.

“Together, we are paving the way for a more adaptable, technology-driven accounting workforce across the region,“ he said.

Bursa Malaysia-listed ADB is a leading provider of accounting and business software solutions.

IAB Group and IAB LCCI CEO Sarah Palmer said LCCI has been a leader in offering globally recognised qualifications for over 120 years.

“Our partnership with ADB reflects our shared commitment to advancing the accounting profession by equipping future finance professionals with relevant, high-quality skills.

“By collaborating with ADB, a pioneer in cloud accounting solutions, we ensure that this certification meets the industry’s evolving needs and helps individuals succeed in a digital-first finance sector,“ she said.

The certification offers a clear advantage for students and professionals looking to expand their accounting capabilities.

By learning on ADB’s cloud platform, candidates will gain hands-on experience in digital accounting practices, preparing them for careers in an increasingly automated finance landscape.

With the signing of this agreement, ADB solidifies its position as a leader in cloud accounting solutions and furthers its commitment to innovation in financial technology and education.

This partnership aligns with ADB’s vision to become Asia’s top business software provider, fostering a future-ready workforce and advancing the region’s digital transformation.




auto

Goodyear becomes official tyre sponsor for Tokyo Auto Salon Kuala Lumpur 2024

GOODYEAR is proud to be the official tyre sponsor of the Tokyo Auto Salon Kuala Lumpur 2024, happening from 8 – 10 November 2024 at MITEC, Kuala Lumpur. Known as the world’s premier customised car show, this event promises to showcase the latest in automotive technology, design, and more, drawing car enthusiasts from across the region.

Event Details

Date: 8 – 10 November 2024

Time: 10:00 am – 10:00 pm

Venue: MITEC, Kuala Lumpur

At the Goodyear booth, attendees can explore the latest in high-performance tyre technology and see how Goodyear is driving innovation in tyre performance and quality. This event offers automotive fans the perfect chance to engage with Goodyear and witness the exceptional standards that Goodyear tyres bring to every journey.

Don’t miss this exciting opportunity to connect with industry leaders and fellow car enthusiasts!




auto

Random Photo: Auto Cookie





auto

Should You Allow Your Auto Insurance To Monitor Your Driving?

The number of drivers who let their insurance monitor their driving has more than doubled in less than a decade! While many drivers were once skeptical of the practice, the benefits are becoming more and more appealing as people make the switch to usage-based insurance. And as car insurance rates continue to climb, even more […]





auto

Plugins compatibility with Autodesk 2012 products




auto

Electric taxi project: Sindh senior minister meets automaker representatives

Sharjeel Memon meets GAC and Dewan Motors to discuss electric taxi options, assuring full govt cooperation.





auto

Melania Trump's autobiography remains atop Amazon's list of bestsellers

It is of note that incoming first lady Melania Trump's autobiography remains No. 1 on Amazon's "most sold" bestseller list. Her book -- which was published Oct. 8 by Skyhorse -- has also reached No. 1 in the categories of memoirs, political leader biographies and -- interestingly enough -- in traveler and explorer biographies.




auto

Elliott takes more than $5B stake in Honeywell, advises separating automation, aerospace units

Activist investor Elliott Investment Management has taken a more than $5 billion stake in Honeywell International and is calling for the conglomerate to split into two separate companies.




auto

New EU BON article looks into incorporating spatial autocorrelation in rarefaction methods

A new EU BON acknowledged article looks at methods recently introduced in the scientific literature for constructing Spatially Explicit Rarefaction (SER) curves, and at their implications for ecologists and conservation biologists. The research was published in the journal Ecological Indicators.

Abstract: 

Recently, methods for constructing Spatially Explicit Rarefaction (SER) curves have been introduced in the scientific literature to describe the relation between recorded species richness and sampling effort while taking into account the spatial autocorrelation in the data. Despite these methodological advances, the use of SERs has not become routine, and ecologists continue to use rarefaction methods that are not spatially explicit. Using two study cases from Italian vegetation surveys, we demonstrate that classic rarefaction methods that do not account for spatial structure can produce inaccurate results. Furthermore, our goal in this paper is to demonstrate how SERs can overcome the problem of spatial autocorrelation in the analysis of plant or animal communities. Our analyses demonstrate that using a spatially explicit method for constructing rarefaction curves can substantially alter estimates of relative species richness. For both analyzed data sets, we found that the rank ordering of standardized species richness estimates was reversed between the two methods. We strongly advise the use of Spatially Explicit Rarefaction methods when analyzing biodiversity: the inclusion of spatial autocorrelation into rarefaction analyses can substantially alter conclusions and change the way we might prioritize or manage nature reserves.
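The difference between the two kinds of curve is easy to state in code. The toy sketch below builds a classic rarefaction curve by averaging random plot orderings, and a spatially explicit one by always accumulating the nearest unvisited plot, so that autocorrelated neighbours enter the curve together; the community matrix and coordinates are random placeholders, not the Italian survey data.

```python
# Classic vs. spatially explicit rarefaction on a toy community matrix.
import numpy as np

rng = np.random.default_rng(42)
plots = rng.integers(0, 2, size=(30, 12))    # 30 plots x 12 species (0/1)
coords = rng.uniform(0, 100, size=(30, 2))   # plot locations

def accumulation(order):
    """Cumulative species richness along a given plot ordering."""
    seen = np.zeros(plots.shape[1], dtype=bool)
    curve = []
    for i in order:
        seen |= plots[i].astype(bool)
        curve.append(seen.sum())
    return np.array(curve)

# Classic rarefaction: average over random permutations of the plots.
classic = np.mean([accumulation(rng.permutation(30)) for _ in range(200)], axis=0)

# Spatially explicit: from each start, always add the nearest unvisited plot.
def nearest_neighbour_order(start):
    remaining, order = set(range(30)) - {start}, [start]
    while remaining:
        last = coords[order[-1]]
        nxt = min(remaining, key=lambda j: np.sum((coords[j] - last) ** 2))
        order.append(nxt)
        remaining.remove(nxt)
    return order

ser = np.mean([accumulation(nearest_neighbour_order(s)) for s in range(30)], axis=0)
print(classic[:5].round(1), ser[:5].round(1))
```

On spatially structured communities the SER curve rises more slowly than the classic curve, which is the divergence behind the altered richness estimates reported above; on this random toy matrix the two stay close.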

Original Source: 

Bacaro, G., Altobelli, A., Camelletti, M., Ciccarelli, D., Martellos, S., Palmer, M.W., Ricotta, C., Rocchini, D., Scheiner, S.M., Tordoni, E., Chiarucci, A. (2016). Incorporating spatial autocorrelation in rarefaction methods: implications for ecologists and conservation biologists. Ecological Indicators, 69: 233-238. [5years-IF: 3.494] doi: http://dx.doi.org/10.1016/j.ecolind.2016.04.026





auto

EU BON research keeps flowing: Downscaling and the automation of species monitoring

Biodiversity data are sparse, biased, and collected at many resolutions, so techniques are needed to combine these data and provide some clarity. This is where downscaling comes in. Downscaling predicts the occupancy of a species in a given area, that is, the number of grid squares the species is predicted to occupy in a standard grid of equally sized squares. Downscaling uses the intrinsic patterns in the spatial organization of an organism's distribution to predict what the occupancy would be at a fine resolution, given the occupancy at a coarser resolution.

Groom et al. (2018) test different downscaling models on birds and plants in four countries and in different landscapes, and show which models work best. The results show that all models work similarly, irrespective of the type of organism and landscape. Some models were biased, however, either under- or overestimating occupancy, while a few models were both reliable and unbiased. This means we can automate the calculation of species occupancy: workflows can harvest data from many sources and calculate species metrics in a timely manner, potentially delivering warnings so that interventions can be made.
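One of the simplest downscaling models of the kind compared by Groom et al. is the power-law occupancy-area relationship: fit log occupancy against log grain area at the coarse grains that were actually surveyed, then extrapolate to the fine grain. The sketch below illustrates that idea with invented numbers; the paper evaluates a whole family of such models.

```python
# Power-law downscaling: extrapolate occupancy from coarse to fine grains.
import numpy as np

grain_area = np.array([64.0, 16.0, 4.0])   # km^2 per grid square, coarse -> fine
occupied = np.array([120, 310, 700])       # squares occupied at each grain (toy)

# Fit log(occupancy) = intercept + slope * log(area) on the coarse grains.
slope, intercept = np.polyfit(np.log(grain_area), np.log(occupied), 1)

fine_area = 1.0                            # predict for a 1 km^2 grid
predicted = np.exp(intercept + slope * np.log(fine_area))
print(f"predicted occupancy at {fine_area} km^2: {predicted:.0f} squares")
```

Automating this fit-and-extrapolate step across many species is what makes the monitoring workflows described above feasible, provided an unbiased model is chosen.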

Species invasions, habitat degradation and mass extinctions are not a future threat, they are happening now. Understanding how we should react, and what policies we need should be underpinned by solid evidence. Imagine if we had systems where we could monitor biodiversity just like we monitor the climate in easy to understand numbers that are both accurate and sensitive to change.

Original Source: 

Groom QJ, Marsh CJ, Gavish Y, Kunin WE. (2018) How to predict fine resolution occupancy from coarse occupancy data. Methods Ecol Evol.;00:1–10. https://doi.org/10.1111/2041-210X.13078

Figure 1: Comparison of the downscaling performance of different mathematical models, shown as the percentage error from the known distribution of breeding birds of Flanders. Points above the zero line are overestimates of occupancy and points under the line are underestimates. The x-axis is the prevalence of the species in Flanders.






auto

An Automatic Weighting System for Wild Animals Based in an Artificial Neural Network: How to Weigh Wild Animals without Causing Stress





auto

Incorporating spatial autocorrelation in rarefaction methods: implications for ecologists and conservation biologists




auto

Online direct import of specimen records into manuscripts and automatic creation of data papers from biological databases




auto

The Automated Edition

Bananas and foreign travel: What it means to be a computer hacker in North Korea.

In North Korea’s spy agency, operatives aren’t just trained to gather intel. They also hack banks. We hear from a couple of North Korean defectors about what it’s actually like to be a government hacker.

Also on the programme: we meet a robot assistant breaking down gender stereotypes; we get to the bottom of a robocall scam; we check our own voicemail box for messages from our listeners; and we visit a restaurant where the chefs are robots.

(Image: North Korea's leader Kim Jong-un waves from a car on April 27, 2018. Credit: AFP/Getty Images)




auto

It’s automatic

Farmers in the US face a labour shortage, so they’re turning to new technology to fill the gap. Also, meet “Pepper”, a robot that’s already replacing thousands of jobs around the world; a researcher from Silicon Valley finds a robot in his hotel room and discovers a potential security breach; how 3D printing could help the global housing crisis; and an instrument that sounds like it’s from outer space, but was invented on earth 100 years ago.

(Robots named “Pepper” work in banks across the US. They help answer basic questions and allow customers to skip the line for a cashier. Credit: Jason Margolis/The World)