
A Classification Schema for Designing Augmented Reality Experiences

Aim/Purpose: Designing augmented reality (AR) experiences for education, health, or entertainment involves multidisciplinary teams making design decisions across several areas. The goal of this paper is to present a classification schema that describes the design choices available when constructing an AR interactive experience. Background: Existing extended reality schemata often focus on single dimensions of an AR experience, with limited attention to design choices. These schemata, combined with an analysis of a diverse range of AR applications, form the basis for the schema synthesized in this paper. Methodology: An extensive literature review and scoring of existing classifications were completed to enable the definition of seven design dimensions. To validate the design dimensions, the literature was mapped to the seven design dimensions to represent the choices available when designing AR interactive experiences. Contribution: The classification schema of seven dimensions can be applied to communicating design considerations and alternative design scenarios where teams of domain specialists need to collaborate to build AR experiences for a defined purpose. Findings: The dimensions of nature of reality, location (setting), feedback, objects, concepts explored, participant presence and interactive agency, and style describe features common to most AR experiences. Classification within each dimension facilitates ideation for novel experiences, and proximity to neighbours suggests feasible implementation strategies. Recommendations for Practitioners: To support professionals, this paper presents a comprehensive classification schema and design rationale for AR. When designing an AR experience, the schema serves as a design template and is intended to ensure comprehensive discussion and decision making across the spectrum of design choices.
Recommendations for Researchers: The classification schema presents a standardized and complete framework for the review of literature and AR applications that other researchers will benefit from to more readily identify relevant related work. Impact on Society: The potential of AR has not been fully realized. The classification scheme presented in this paper provides opportunities to deliberately design and evaluate novel forms of AR experience. Future Research: The classification schema can be extended to include explicit support for the design of virtual and extended reality applications.





Adaptation of a Cluster Discovery Technique to a Decision Support System





Clickers in the Laboratory: Student Thoughts and Views





Text Classification Techniques: A Literature Review

Aim/Purpose: The aim of this paper is to analyze various text classification techniques employed in practice, their strengths and weaknesses, to provide an improved awareness regarding various knowledge extraction possibilities in the field of data mining. Background: Artificial Intelligence is reshaping text classification techniques to better acquire knowledge. However, in spite of the growth and spread of AI in all fields of research, its role with respect to text mining is not yet well understood. Methodology: For this study, various articles written between 2010 and 2017 on “text classification techniques in AI”, selected from leading journals of computer science, were analyzed. Each article was read in full. The research problems related to text classification techniques in the field of AI were identified, and the techniques were grouped according to the algorithms involved. These algorithms were divided based on the learning procedure used. Finally, the findings were plotted as a tree structure for visualizing the relationship between learning procedures and algorithms. Contribution: This paper identifies the strengths, limitations, and current research trends in text classification in an advanced field like AI. This knowledge is crucial for data scientists, who could utilize the findings of this study to devise customized data models. It also helps the industry to understand the operational efficiency of text mining techniques, contributes to reducing the cost of projects, and supports effective decision making. Findings: It is important to study and understand the nature of the data before proceeding to mining. With the increasing amount of data and the need for accuracy, automation of the text classification process is required. Another interesting research opportunity lies in building intricate text data models with deep learning systems.
Deep learning has the ability to execute complex Natural Language Processing (NLP) tasks with semantic requirements. Recommendations for Practitioners: Frame analysis, deception detection, narrative science where data expresses a story, healthcare applications to diagnose illnesses, and conversation analysis are some of the recommendations suggested for practitioners. Recommendation for Researchers: Developing simpler algorithms in terms of coding and implementation, better approaches for knowledge distillation, multilingual text refining, domain knowledge integration, subjectivity detection, and contrastive viewpoint summarization are some of the areas that could be explored by researchers. Impact on Society: Text classification forms the base of data analytics and acts as the engine behind knowledge discovery. It supports state-of-the-art decision making, for example, predicting an event before it actually occurs or classifying a transaction as ‘fraudulent’. The results of this study could be used for developing applications dedicated to assisting decision making processes. These informed decisions will help to optimize resources and maximize benefits to mankind. Future Research: In the future, better methods for parameter optimization will be identified by selecting parameters that better reflect effective knowledge discovery. The role of streaming data processing is still rarely explored when it comes to text classification.
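The supervised techniques surveyed can be made concrete with a small example. The sketch below implements a multinomial naive Bayes text classifier, one of the classic algorithms in this family, in plain Python with Laplace smoothing; the spam/ham training data and whitespace tokenization are illustrative assumptions, not drawn from the reviewed articles.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial naive Bayes model from (text, label) pairs."""
    class_docs = defaultdict(list)
    for text, label in docs:
        class_docs[label].append(text.lower().split())
    vocab = {w for texts in class_docs.values() for t in texts for w in t}
    n = len(docs)
    priors, counts, totals = {}, {}, {}
    for label, texts in class_docs.items():
        priors[label] = len(texts) / n
        counts[label] = Counter(w for t in texts for w in t)
        totals[label] = sum(counts[label].values())
    return priors, counts, totals, vocab

def classify(text, model):
    """Pick the label maximising log P(label) + sum of log P(word | label)."""
    priors, counts, totals, vocab = model
    best, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label])
        for w in text.lower().split():
            # Laplace (add-one) smoothing over the shared vocabulary
            lp += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Toy training corpus (invented for illustration)
model = train_nb([
    ("cheap pills buy now", "spam"),
    ("buy cheap watches", "spam"),
    ("meeting agenda attached", "ham"),
    ("project meeting notes", "ham"),
])
```

On this toy corpus, `classify("buy cheap now", model)` returns `"spam"`, illustrating how a learning procedure (probabilistic supervised learning) maps to a concrete algorithm in the tree structure described above.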





The Relationship between Ambidextrous Knowledge Sharing and Innovation within Industrial Clusters: Evidence from China

Aim/Purpose: This study examines the influence of ambidextrous knowledge sharing in industrial clusters on innovation performance from the perspective of knowledge-based dynamic capabilities. Background: The key factor to improving innovation performance in an enterprise is to share knowledge with other enterprises in the same cluster and use dynamic capabilities to absorb, integrate, and create knowledge. However, the relationships among these concepts remain unclear. Based on the dynamic capability theory, this study empirically reveals how enterprises drive innovation performance through knowledge sharing. Methodology: Survey data from 238 cluster enterprises were used in this study. The sample was collected from industrial clusters in China’s Fujian province that belong to the automobile, optoelectronic, and microwave communications industries. Through structural equation modeling, this study assessed the relationships among ambidextrous knowledge sharing, dynamic capabilities, and innovation performance. Contribution: This study contributes to the burgeoning literature on knowledge management in China, an important emerging economy. It also enriches the exploration of innovation performance in the cluster context and expands research on the dynamic mechanism from a knowledge perspective. Findings: Significant relationships are found between ambidextrous knowledge sharing and innovation performance. First, ambidextrous knowledge sharing positively influences the innovation performance of cluster enterprises. Further, knowledge absorption and knowledge generation capabilities play a mediating role in this relationship, which confirms that dynamic capabilities are a partial mediator in the relationship between ambidextrous knowledge sharing and innovation performance. Recommendations for Practitioners: The results highlight the crucial role of knowledge management in contributing to cluster innovation and management practices. 
They indicate that cluster enterprises should consider the importance of knowledge sharing and dynamic capabilities for improving innovation performance and establish a multi-agent knowledge sharing platform. Recommendation for Researchers: Researchers could further explore the role of other mediating variables (e.g., organizational agility, industry growth) as well as moderating variables (e.g., environmental uncertainty, learning orientation). Impact on Society: This study provides a reference for enterprises in industrial clusters to use knowledge-based capabilities to enhance their competitive advantage. Future Research: Future research could collect data from various countries and regions to test the research model and conduct a comparative analysis of industrial clusters.





A Multicluster Approach to Selecting Initial Sets for Clustering of Categorical Data

Aim/Purpose: This article proposes a methodology for selecting the initial sets for clustering categorical data. The main idea is to combine all the different values of every single criterion or attribute to form the first proposal of the so-called multiclusters, obtaining in this way the maximum number of clusters for the whole dataset. The multiclusters thus obtained are themselves clustered in a second step, according to the desired final number of clusters. Background: Popular cluster methods for categorical data, such as the well-known K-Modes, usually select the initial sets by means of some random process. This fact introduces some randomness into the final results of the algorithms. We explore a different application of the clustering methodology for categorical data that overcomes the instability problems and ultimately provides greater clustering efficiency. Methodology: For assessing the performance of the proposed algorithm and comparing it with K-Modes, we apply both of them to categorical databases where the response variable is known but not used in the analysis. In our examples, that response variable can be identified with the real clusters or classes to which the observations belong. With every data set, we perform a two-step analysis. In the first step we perform the clustering analysis on data where the response variable (the real clusters) has been omitted, and in the second step we use that omitted information to check the efficiency of the clustering algorithm (by comparing the real clusters to those given by the algorithm). Contribution: Simplicity, efficiency, and stability are the main advantages of the multicluster method. Findings: The experimental results attained with real databases show that the multicluster algorithm has greater precision and a better grouping effect than the classical K-Modes algorithm.
Recommendations for Practitioners: The method can be useful for researchers working with small and medium-sized datasets, allowing them to detect the underlying structure of the data in an intuitive and reasonable way. Recommendation for Researchers: The proposed algorithm is slower than K-Modes, since it devotes a lot of time to the calculation of the initial combinations of attributes. The reduction of the computing time is therefore an important research topic. Future Research: We are concerned with the scalability of the algorithm to large and complex data sets, as well as its application to mixed data sets with both quantitative and qualitative attributes.
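The two-step idea can be sketched in plain Python. This is a simplified reading of the approach: the grouping key (each distinct attribute-value combination), the simple matching distance, and the greedy deterministic merge are illustrative assumptions, not the paper's exact procedure.

```python
from itertools import combinations

def multiclusters(rows):
    """Step 1: every distinct combination of attribute values observed in
    the data forms one multicluster -- the maximum number of clusters."""
    groups = {}
    for row in rows:
        groups.setdefault(tuple(row), []).append(row)
    return groups

def mismatch(a, b):
    """Simple matching distance for categorical tuples."""
    return sum(x != y for x, y in zip(a, b))

def reduce_to_k(groups, k):
    """Step 2: deterministically merge the closest multiclusters until
    only k clusters remain (ties broken lexicographically), avoiding the
    randomness of K-Modes-style initialisation."""
    clusters = {key: list(members) for key, members in groups.items()}
    while len(clusters) > k:
        a, b = min(combinations(sorted(clusters), 2),
                   key=lambda pair: (mismatch(*pair), pair))
        clusters[a] = clusters[a] + clusters.pop(b)
    return clusters
```

Because there is no random seeding, repeated runs on the same data give the same partition, which is the stability property the abstract highlights.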





IDCUP Algorithm to Classifying Arbitrary Shapes and Densities for Center-based Clustering Performance Analysis

Aim/Purpose: Clustering techniques are normally used to determine the significant and meaningful subclasses present in datasets. Clustering is an unsupervised type of Machine Learning (ML) in which the objective is to form groups of objects based on their similarity, and it is used to determine the implicit relationships between the different features of the data. Cluster analysis is considered a significant problem area in data exploration when dealing with arbitrary-shape problems in different datasets. Clustering on large data sets has the following challenges: (1) clusters with arbitrary shapes; (2) limited knowledge discovery support for deciding the possible input features; (3) scalability for large data sizes. Density-based clustering is known as a dominant method for determining arbitrary-shape clusters. Background: Existing density-based clustering methods commonly cited in the literature have been examined in terms of their behavior with data sets that contain nested clusters of varying density. The existing methods are not adequate for such data sets, because they typically partition the data into clusters that cannot be nested. Methodology: A density-based approach to traditional center-based clustering is introduced that assigns a weight to each cluster. The weights are then utilized in calculating the distances from data vectors to centroids by multiplying the distance by the centroid weight. Contribution: In this paper, we have examined different density-based clustering methods for data sets with nested clusters of varying density. Two such data sets were used to evaluate some of the commonly cited algorithms found in the literature. Nested clusters were found to be challenging for the existing algorithms. In most cases, the targeted algorithms either did not detect the largest clusters or simply divided large clusters into non-overlapping regions.
However, it may be possible to detect all clusters by doing multiple runs of the algorithm with different inputs and then combining the results. This work considered three challenges of clustering methods. Findings: As a result, a center with a low weight will attract objects from further away than a centroid with a higher weight. This allows dense clusters inside larger clusters to be recognized. The methods are tested experimentally using the K-means, DBSCAN, TURN*, and IDCUP algorithms. The experimental results with different data sets showed that IDCUP is more robust and produces better clusters than DBSCAN, TURN*, and K-means. Finally, we compare K-means, DBSCAN, TURN*, and IDCUP on their ability to deal with arbitrary-shape problems in different datasets. IDCUP shows better scalability compared to TURN*. Future Research: As future recommendations of this research, we plan to explore further challenges of the knowledge discovery process in clustering, along with more complex data sets. A hybrid approach based on density-based and model-based clustering algorithms should be compared against these methods to achieve maximum accuracy and avoid the problems related to arbitrary shapes, including optimization. It is anticipated that such a process will attain improved performance with comparable precision in the identification of cluster shapes.
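The weighted-distance idea described above can be sketched as follows. This is a minimal illustration of the assignment step only; the weights here are supplied by hand, whereas the method described derives them from cluster density.

```python
import math

def assign(points, centroids, weights):
    """Assign each point to the centroid minimising weight * distance.
    A centroid with a low weight attracts points from further away,
    letting a dense inner cluster keep its own high-weight centre."""
    labels = []
    for p in points:
        labels.append(min(range(len(centroids)),
                          key=lambda i: weights[i] * math.dist(p, centroids[i])))
    return labels
```

For example, with centroids (0, 0) at weight 1.0 and (10, 0) at weight 0.2, the point (4, 0) is closer to the first centroid in raw distance (4 vs. 6) but is assigned to the second because its weighted distance is smaller (1.2 vs. 4).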





Software as a Service (SaaS) Cloud Computing: An Empirical Investigation on University Students’ Perception

Aim/Purpose: This study aims to propose and empirically validate a model investigating the factors influencing acceptance and use of Software as a Service (SaaS) cloud computing services from individuals’ perspectives, utilizing an integrative model of the Theory of Planned Behavior (TPB) and the Technology Acceptance Model (TAM) with modifications to suit the objective of the study. Background: Even though SaaS cloud computing services have gained acceptance in their educational and technical aspects, they are still expanding constantly with emerging cloud technologies. Moreover, the individual as an end-user of this technology has not been given ample attention pertaining to SaaS acceptance and adoption (AUSaaS). Additionally, the higher education sector needs to be probed regarding AUSaaS perception, not only from a managerial stance, but also from the individual’s. Hence, further investigation of all aspects, including the human factor, deserves deeper inspection. Methodology: A quantitative approach with a probability multi-stage sampling procedure was conducted, utilizing a survey instrument distributed among students from three public Malaysian universities. Valid responses were collected from 289 Bachelor’s degree students. The survey included a demographic part as well as items to measure the hypothesized relationships among the constructs. Contribution: The empirical results disclosed the appropriateness of the integrated model in explaining the individual’s attitude (R2 = 57%), behavior intention (R2 = 64%), and AUSaaS in university settings (R2 = 50%). Also, the study offers valuable findings and examines new relationships that are considered a theoretical contribution with proven empirical results. That is, the effect of subjective norms on attitude and AUSaaS adds empirical evidence to the hypothesized model.
Knowing the significance of the social effect is important in utilizing it to promote university products and SaaS applications – developed inside the university – through social media networks. Also, the direct effect of perceived usefulness on AUSaaS is another important theoretical contribution that SaaS service providers and higher education institutes should consider in promoting the usefulness of the products and services developed or offered to students and end-users. Additionally, the research contributes to the literature and is considered one of the leading studies on accepting SaaS services and applications, as the proliferation of studies focuses on the general and broad concept of cloud computing. Furthermore, by integrating two theories (i.e., TPB and TAM), the study employed different factors in studying the perceptions towards the acceptance of SaaS services and applications: social factors (i.e., subjective norms), personal capabilities and capacities (i.e., perceived behavioral control), technological factors (i.e., perceived usefulness and perceived ease of use), and attitudinal factors. These factors are the strength of both theories, and utilizing them helps unveil the salient factors affecting the acceptance of SaaS services and applications. Findings: A statistically significant positive influence of the main TPB constructs on AUSaaS was revealed. Furthermore, subjective norms (SN) and perceived usefulness (PU) demonstrated prediction ability on AUSaaS. Also, SN showed a statistically significant effect on attitude (ATT). Specifically, the main contributors to intention are PU, perceived ease of use, ATT, and perceived behavioral control. Also, the proposed framework is validated empirically and statistically. Recommendation for Researchers: The proposed model is highly recommended to be tested in different settings and cultures.
Also, recruiting different respondents with different roles, occupations, and cultures would likely draw more insights into the results obtained in the current research and their generalizability. Future Research: Participants from private universities or other educational institutes are suggested for future work, as the sample here focused only on public-sector universities. The model included a limited number of variables, suggesting that it can be extended in future works with other constructs such as trialability, compatibility, security, risk, privacy, and self-efficacy. Comparison of different ethnic groups, ages, genders, or fields of study in future research would be invaluable to enhance the findings or reveal new insights. Replication of the study in different settings is encouraged.





Customer Churn Prediction in the Banking Sector Using Machine Learning-Based Classification Models

Aim/Purpose: Previous research has generally concentrated on identifying the variables that most significantly influence customer churn or has used customer segmentation to identify a subset of potential consumers, without examining its effects on forecast accuracy. Consequently, there are two primary research goals in this work. The first goal is to examine the impact of customer segmentation on the accuracy of customer churn prediction in the banking sector using machine learning models. The second is to experiment with, contrast, and assess which machine learning approaches are most effective in predicting customer churn. Background: This paper reviews the theoretical basis of customer churn and customer segmentation, and suggests using supervised machine-learning techniques for customer attrition prediction. Methodology: In this study, we use k-means clustering to segment customers, and apply k-nearest neighbors, logistic regression, decision tree, random forest, and support vector machine models to the dataset to predict customer churn. Contribution: The results demonstrate that the dataset performs well with the random forest model, with an accuracy of about 97%, and that, following customer segmentation, the mean accuracy of each model performed well, with logistic regression having the lowest accuracy (87.27%) and random forest having the best (97.25%). Findings: Customer segmentation does not have much impact on the accuracy of predictions; the impact depends on the dataset and the models chosen. Recommendations for Practitioners: Practitioners can apply the proposed solutions to build a predictive system or apply them in other fields such as education, tourism, marketing, and human resources. Recommendation for Researchers: The research paradigm is also applicable in other areas such as artificial intelligence, machine learning, and churn prediction.
Impact on Society: Customer churn causes the value flowing from customers to enterprises to decrease. If customer churn continues to occur, the enterprise will gradually lose its competitive advantage. Future Research: Build a real-time or near-real-time application to provide timely information for good decision making. Furthermore, handle the imbalanced data using new techniques.
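As an illustration of one of the compared models, a minimal k-nearest-neighbours churn predictor can be sketched in plain Python; the toy customer features (e.g., tenure and balance) and labels are invented for the example and do not come from the study's banking dataset.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k training customers nearest to the query.
    train: list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy data: churners cluster at low tenure/balance, loyal customers at high.
train = [((1, 1), "churn"), ((1, 2), "churn"), ((2, 1), "churn"),
         ((9, 9), "stay"), ((10, 10), "stay"), ((10, 9), "stay")]
```

Running a segmentation step first (e.g., k-means on the same features) would simply restrict `train` to the query's segment, which is how the study's with/without-segmentation comparison can be framed.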





Improving the Accuracy of Facial Micro-Expression Recognition: Spatio-Temporal Deep Learning with Enhanced Data Augmentation and Class Balancing

Aim/Purpose: This study presents a novel deep learning-based framework designed to enhance spontaneous micro-expression recognition by effectively increasing the amount and variety of data and balancing the class distribution to improve recognition accuracy. Background: Micro-expression recognition using deep learning requires large amounts of data. Micro-expression datasets are relatively small, and their class distribution is not balanced. Methodology: This study developed a framework using a deep learning-based model to recognize spontaneous micro-expressions on a person’s face. The framework also includes several technical stages, including image and data preprocessing. In data preprocessing, data augmentation is carried out to increase the amount and variety of data, and class balancing is carried out to balance the distribution of sample classes in the dataset. Contribution: This study’s essential contribution lies in enhancing the accuracy of micro-expression recognition and overcoming the limited amount of data and imbalanced class distribution that typically lead to overfitting. Findings: The results indicate that the proposed framework, with its data preprocessing stages and deep learning model, significantly increases the accuracy of micro-expression recognition by overcoming dataset limitations and producing a balanced class distribution. This leads to improved micro-expression recognition accuracy using deep learning techniques. Recommendations for Practitioners: Practitioners can utilize the model produced by the proposed framework, which was developed to recognize spontaneous micro-expressions on a person’s face, by implementing it as an emotional analysis application based on facial micro-expressions.
Recommendation for Researchers: Researchers developing spontaneous micro-expression recognition frameworks for analyzing hidden emotions from a person’s face play an essential role in advancing this field. They should continue to search for innovative deep learning-based solutions, explore techniques to increase the amount and variety of data, and find solutions for balancing the number of sample classes in various micro-expression datasets. They can further develop deep learning model architectures that are more suitable and relevant to the needs of recognition tasks and the characteristics of different datasets. Impact on Society: The proposed framework could significantly impact society by providing a reliable model for recognizing spontaneous micro-expressions in real-world applications, ranging from security systems and criminal investigations to healthcare and emotional analysis. Future Research: Developing a spontaneous micro-expression recognition framework based on spatial and temporal flow requires the learning model to classify optimal features. Our future work will focus on exploring micro-expression features by developing various alternative learning models and increasing the weights of spatial and temporal features.
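The augmentation-plus-class-balancing preprocessing described above can be sketched generically as follows. The horizontal flip as the label-preserving augmentation and the oversample-to-majority policy are illustrative assumptions, not the paper's exact pipeline.

```python
import random

def hflip(img):
    """Horizontal flip: a label-preserving augmentation for face images
    (img is a 2-D list of pixel values)."""
    return [row[::-1] for row in img]

def balance_by_oversampling(dataset, seed=0):
    """dataset: list of (image, label) pairs. Oversample minority classes
    with flipped copies until every class matches the majority class size."""
    rng = random.Random(seed)
    by_class = {}
    for img, lab in dataset:
        by_class.setdefault(lab, []).append(img)
    target = max(len(imgs) for imgs in by_class.values())
    out = []
    for lab, imgs in by_class.items():
        out.extend((im, lab) for im in imgs)
        for _ in range(target - len(imgs)):
            out.append((hflip(rng.choice(imgs)), lab))
    return out
```

Because the synthetic samples are transformed copies rather than duplicates, the balanced set adds variety as well as volume, which is the combination the framework relies on to reduce overfitting.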





Automatic pectoral muscles and artefacts removal in mammogram images for improved breast cancer diagnosis

Breast cancer is the leading cause of cancer mortality among women. Hence, early breast cancer diagnosis is crucial to the success of treatment. Various pathological and imaging tests are available for the diagnosis of breast cancer. However, these tests may introduce errors during detection and interpretation, leading to false-negative and false-positive results, due to a lack of pre-processing. To overcome this issue, we propose an effective image pre-processing technique based on Otsu's thresholding and single-seeded region growing (SSRG) to remove artefacts and segment the pectoral muscle from breast mammograms. To validate the proposed method, the publicly available MIAS dataset was utilised. The experimental findings showed that the proposed technique improved breast cancer detection accuracy by 18% compared to existing methods. The proposed methodology works efficiently for artefact removal and pectoral segmentation across different shapes and nonlinear patterns.
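The first stage of the proposed pre-processing, Otsu's thresholding, can be sketched in plain Python: it picks the 8-bit grey-level threshold that maximises between-class variance over the image histogram. The SSRG stage is omitted here for brevity, and the synthetic two-level pixel data is only for illustration.

```python
def otsu_threshold(pixels):
    """Otsu's method: choose the grey-level threshold that maximises
    between-class variance over a 0-255 histogram.
    pixels: flat iterable of 8-bit grey values."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = 0      # running sum of grey levels in the background class
    w_b = 0        # running background pixel count
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b            # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Thresholding a mammogram at the returned value separates bright artefacts (labels, markers) from the breast region, after which region growing can isolate the pectoral muscle.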





Optimisation with deep learning for leukaemia classification in federated learning

The most common kind of blood cancer in people of all ages is leukaemia. A fractional mayfly optimisation (FMO) based DenseNet is proposed for the identification and classification of leukaemia in federated learning (FL). Initially, the input image is pre-processed by an adaptive median filter (AMF). Then, cell segmentation is done using Scribble2Label. After that, image augmentation is accomplished. Finally, leukaemia classification is accomplished utilising DenseNet, which is trained using the FMO. Here, the FMO is devised by merging the mayfly algorithm (MA) and the fractional concept (FC). Following local training, the server performs local updating and aggregation using a weighted average based on the RV coefficient. The results showed that FMO-DenseNet attained maximum accuracy, true negative rate (TNR), and true positive rate (TPR) of 94.3%, 96.5% and 95.3%. Moreover, FMO-DenseNet gained minimum mean squared error (MSE) and root mean squared error (RMSE) of 5.7%, 9.2% and 30.4%.
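The server-side aggregation step can be sketched as an element-wise weighted average of client parameter vectors. The hand-supplied weights below stand in for the RV coefficients computed in the described method, and the flat parameter lists are a simplification of real model weights.

```python
def aggregate(client_params, coeffs):
    """Weighted-average aggregation of client model parameters.
    client_params: one flat parameter list per client.
    coeffs: one non-negative weight per client (normalised below)."""
    total = sum(coeffs)
    norm = [c / total for c in coeffs]
    n_params = len(client_params[0])
    return [sum(w * params[i] for w, params in zip(norm, client_params))
            for i in range(n_params)]
```

With equal coefficients this reduces to plain federated averaging (FedAvg); unequal coefficients let the server favour clients whose local updates agree more strongly with the global model.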





Alzheimer's disease classification using hybrid Alex-ResNet-50 model

Alzheimer's disease (AD), a leading cause of dementia and mortality, presents a growing concern due to its irreversible progression and the rising costs of care. Early detection is crucial for managing AD, which begins with memory deterioration caused by damage to neurons involved in cognitive functions. Although incurable, its symptoms can be managed with treatment. This study introduces a hybrid AlexNet+ResNet-50 model for AD diagnosis, utilising a pre-trained convolutional neural network (CNN) through transfer learning to analyse MRI scans. This method classifies MRI images into AD, mild cognitive impairment (MCI), and normal control (NC), enhancing model efficiency without starting from scratch. Incorporating transfer learning allows for refining the CNN to categorise these conditions accurately. Our previous work also explored atlas-based segmentation combined with a U-Net model for segmentation, further supporting our findings. The hybrid model demonstrates superior performance, achieving 94.21% accuracy in identifying AD cases, indicating its potential as a highly effective tool for early AD diagnosis and contributing to efforts to manage the disease's impact.





Leading the diversity and inclusion narrative through continuing professional education

This conceptual research aims to connect aspects of learning activities in continuing professional education (CPE). The objective is to provide conclusions about modes of professional learning within diversity, equity, inclusion, and belonging (DEIB) training. This interpretation is placed in context relating to the process of professional learning objectives. A CPE DEIB training plan is presented as an example of how to provide continuing professional education to adult learners within a DEIB curriculum (El-Amin, 2020). Incorporating the foundations of CPE into DEIB training permits organisations to strengthen organisational development and productivity. By connecting the foundations of curriculum design, alignment, assessment and mapping, and research-informed innovation, CPE aims to enhance the effectiveness of organisational DEIB initiatives. A CPE DEIB training plan emphasises the importance of accountability, employee involvement, and effective training to drive DEIB initiatives.





The OSEL Taxonomy for the Classification of Learning Objects





Clicker Sets as Learning Objects





Meta-analysis of the Articles Published in SPDECE





A Taxonomy as a Vehicle for Learning





A CSCL Approach to Blended Learning in the Integration of Technology in Teaching





Examining the Effectiveness of Web-Based Learning Tools in Middle and Secondary School Science Classrooms





Teachers for "Smart Classrooms": The Extent of Implementation of an Interactive Whiteboard-based Professional Development Program on Elementary Teachers' Instructional Practices





Using the Interactive White Board in Teaching and Learning – An Evaluation of the SMART CLASSROOM Pilot Project





Teaching and Learning with Clickers: Are Clickers Good for Students?





Facilitation of Formative Assessments using Clickers in a University Physics Course





The Impact of Learning with Laptops in 1:1 Classes on the Development of Learning Skills and Information Literacy among Middle School Students





Does Use of ICT-Based Teaching Encourage Innovative Interactions in the Classroom? Presentation of the CLI-O: Class Learning Interactions – Observation Tool





ICT Use: Educational Technology and Library and Information Science Students' Perspectives – An Exploratory Study

This study seeks to explore what factors influence students’ ICT use and web technology competence. The research questions of this study are the following: (a) To what extent do certain elements of Rogers’ (2003) Diffusion of Innovations Theory (DOI) explain students’ ICT use? (b) To what extent do personality characteristics derived from the Big Five approach explain students’ ICT use? (c) To what extent does motivation explain students’ ICT use? The research was conducted in Israel during the second semester of the academic year 2013-14 and included two groups of participants: a group of Educational Technology students (ET) and a group of Library and Information Science students (LIS). Findings add another dimension to the importance of Rogers’ DOI theory in the fields of Educational Technology and Library and Information Science. Further, findings confirm that personality characteristics as well as motivation affect ICT use. If instructors would like to enhance students’ ICT use, they should be aware of individual differences between students, and they should present to students the advantages and usefulness of ICT, thus increasing students’ motivation to use ICT, in the hope that they will become innovators or early adopters.





The Voice of Teachers in a Paperless Classroom

Aim/Purpose: This study took place in a school with a “paperless classroom” policy, in which handwriting and reading on paper were restricted. The purpose of this study was to gain insights from the teachers teaching in a paperless classroom and to learn about the benefits and challenges of teaching and learning in such an environment. Background: In recent years, many schools have been moving towards a “paperless classroom” policy, in which teachers and students use computers (or other devices such as tablet PCs) as an alternative to notebooks and textbooks to exchange information and assignments electronically both in and out of class. This study took place in a school with such a policy, where handwriting and reading on paper were uncommon. Methodology: This qualitative study involved semi-structured interviews with 12 teachers teaching in a paperless school. The research questions dealt with the instructional model developed, the various ways in which the teachers incorporated the technology in their classrooms, and the challenges and difficulties they encountered. Contribution: This study provides important insights into the way teachers work in paperless classrooms. Findings: The findings pointed to three contributions for students: preparing students for the future, efficiency of learning, and empowerment of students. The teachers presented a variety of innovative methods of using the laptops in class and described a very similar lesson structure. The teachers described the difficulties involved in conducting instruction in a paperless classroom and emphasized that, despite the efficiency of the computer and its ability to support the teaching process, they used technology critically. The findings also indicate that some teachers were concerned that the transition from the regular classroom to a paperless one may negatively impact students’ reading and writing skills.
Recommendations for Practitioners: Teaching in a paperless school is challenging. On the one hand, going paperless contributes to active and adaptive learning, efficiency, and the acquisition of 21st-century skills, or, as the teachers described their main goal, preparing students for the future. On the other hand, computers in class cause problems such as distraction and disciplinary issues, information overload, and disorganized information, as well as technological concerns. Impact on Society: Teachers in the paperless school develop a solid rationale for teaching and learning in a paperless environment, use varied technologies, and develop innovative pedagogies. They are aware of the challenges of this environment and concerned about the disadvantages of using the technology; thus, they develop a realistic and critical view of the paperless classroom. Future Research: Future studies investigating the teachers’ voice as well as the pupils’ perspective could help guide schools in preparing teachers for the paperless classroom.





Beyond the Walls of the Classroom: Introduction to the IJELL Special Series of Chais Conference 2017 Best Papers

Aim/Purpose: This preface presents the papers included in the ninth issue of the Interdisciplinary Journal of e-Skills and Lifelong Learning (IJELL) special series of selected Chais Conference best papers. Background: The Chais Conference for the Study of Innovation and Learning Technologies: Learning in the Technological Era, is organized by the Research Center for Innovation in Learning Technologies, The Open University of Israel. The 12th Chais Conference was held at The Open University of Israel, Raanana, Israel, on February 14-15, 2017. Each year, selected papers of the Chais conference are expanded and published in IJELL. Methodology: A qualitative conceptual analysis of the themes and insights of the papers included in the ninth selection of IJELL special series of selected Chais Conference best papers. Contribution: The presentation of the papers of this selection emphasizes their novelty, as well as their main implications, describes current research issues, and chronicles the main themes within the discourse of learning technologies research, as reflected at the Chais 2017 conference. Findings: Contemporary research goes ‘beyond the walls of the classroom’ and investigates systemic and pedagogical aspects of integrating learning technologies in education on a large scale. Recommendation for Researchers: Researchers are encouraged to investigate broad aspects of seizing the opportunities and overcoming the challenges of integrating innovative technologies in education. Impact on Society: Effective application of learning technologies has a major potential to improve the well-being of individuals and societies. Future Research: The conceptual analysis of contemporary main themes of innovative learning technologies may provide researchers with novel directions for future research on various aspects of the effective utilization of learning technologies.





Closing the Digital Divide in Low-Income Urban Communities: A Domestication Approach

Aim/Purpose: A significant urban digital divide exists in Nairobi County, where low-income households lack digital literacy skills and do not have access to the internet. The study was undertaken as an intervention designed to close the digital divide among low-income households in Nairobi by introducing internet access using the domestication framework. Background: Information and Communication Technologies (ICTs) have the potential to help reduce social inequality and have been hailed as critical to the achievement of the Sustainable Development Goals (SDGs). Skills in the use of ICTs have also become a prerequisite for almost all forms of employment and for accessing government services; hence the need for digital inclusion for all. Methodology: In this research study, I employed a mixed methods approach to investigate the problem. This was achieved through a preliminary survey to collect data on the existence of an urban digital divide in Nairobi and a contextual analysis of the internet domestication process among the eighteen selected case studies. Contribution: While there have been many studies on the digital divide between Africa and the rest of the world, within the African continent, among genders, and between rural and urban areas at national levels, there are few studies exploring the urban digital divide, especially among the marginalized communities living in low-income urban areas. Findings: Successful domestication of the internet and related technologies was achieved among the selected households, and the households appreciated the benefits of having and using the internet for the first time. A number of factors that impede use of the internet among the marginalized communities in Nairobi were also identified. Recommendations for Practitioners: In the study, I found that offering differentiated-cost internet services targeting specific demographic groups is possible and that such a service could help marginalized urban communities access the internet.
Therefore, ISPs should offer special internet access packages for low-income households. Recommendation for Researchers: In this research study, I found that the urban digital divide in Nairobi is an indication of socio-economic development problems. Therefore, researchers should carry out studies involving multipronged strategies to address the growing digital divide among marginalized urban communities. Impact on Society: The absence of an Information and Communication Technology (ICT) inclusion policy is a huge setback to the achievement of the SDGs in Kenya. Digital inclusion policies should be developed that prioritize digital literacy training and universal internet access and that elucidate the socio-economic benefits of internet access for all Kenyans. Future Research: Future studies should explore ways of providing affordable mass internet access solutions among the residents of low-income communities and of eliminating the persistent urban digital divide in Kenya.





Informing Clientele through Networked Multimedia Information Systems: Introduction to the Special Issues





Building an Internet-Based Learning Environment in Higher Education: Learner Informing Systems and the Life Cycle Approach





Informing Clients through Multimedia Communications:





Issues in Informing Clients using Multimedia Communications





Reclassification of Electronic Product Catalogs: The “Apricot” Approach and Its Evaluation Results





The Informing Sciences at a Crossroads: The Role of the Client





Resonance within the Client-to-Client System: Criticality, Cascades, and Tipping Points





The Single Client Resonance Model: Beyond Rigor and Relevance





The Role of the Client in Informing Science:





An Informing Service Based on Models Defined by Its Clients





Towards an Information Sharing Pedagogy: A Case of Using Facebook in a Large First Year Class





Informing Science and Andragogy: A Conceptual Scheme of Client-Side Barriers to Informing University Students





The Knowledge Innovation Matrix (KIM): A Clarifying Lens for Innovation





Social Networks in which Users are not Small Circles

Understanding of social network structure and user behavior has important implications for site design, applications (e.g., ad placement policies), accurate modeling for social studies, and design of next-generation infrastructure and content distribution systems. Currently, characterizations of social networks have been dominated by topological studies in which graph representations are analyzed in terms of connectivity using techniques such as degree distribution, diameter, average degree, clustering coefficient, average path length, and cycles. The problem is that these parameters are not completely satisfactory in the sense that they cannot account for individual events and have only limited use, since one can produce a set of synthetic graphs that have the exact same metrics or statistics but exhibit fundamentally different connectivity structures. In such an approach, a node drawn as a small circle represents an individual. A small circle reflects a black box model in which the interior of the node is blocked from view. This paper focuses on the node level by considering the structural interiority of a node to provide a more fine-grained understanding of social networks. Node interiors are modeled by use of six generic stages: creation, release, transfer, arrival, acceptance, and processing of the artifacts that flow among and within nodes. The resulting description portrays nodes as comprising mostly creators (e.g., of data), receivers/senders (e.g., bus boys), and processors (re-formatters). Two sample online social networks are analyzed according to these features of nodes. This examination points to the viability of the representational method for characterization of social networks.
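The six-stage node-interior model described above lends itself to a small computational sketch. The Python below is a hypothetical illustration, assuming that each node logs events in the six generic stages and that its dominant stage determines a coarse role (creator, sender, receiver, or processor); the names and the stage-to-role mapping are illustrative, not the paper's implementation.

```python
from collections import Counter

# The six generic stages of artifact flow named in the paper.
STAGES = ("creation", "release", "transfer", "arrival", "acceptance", "processing")

# Illustrative mapping from dominant stage to coarse node role.
ROLE_BY_STAGE = {
    "creation": "creator",
    "release": "sender",
    "transfer": "sender",
    "arrival": "receiver",
    "acceptance": "receiver",
    "processing": "processor",
}

def classify_node(stage_events):
    """Map a node's logged stage events to a coarse role via the dominant stage."""
    counts = Counter({s: 0 for s in STAGES})
    counts.update(stage_events)
    dominant = max(STAGES, key=lambda s: counts[s])
    return ROLE_BY_STAGE[dominant]
```

Classifying every node this way yields the creator/sender-receiver/processor composition the paper uses to characterize its two sample networks.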





Ensemble Learning Approach for Clickbait Detection Using Article Headline Features

Aim/Purpose: The aim of this paper is to propose an ensemble-learner-based classification model for distinguishing clickbait from genuine article headlines. Background: Clickbaits are online articles with deliberately designed misleading titles for luring more and more readers to open the intended web page. Clickbaits are used to tempt visitors to click on a particular link, either to monetize the landing page or to spread false news for sensationalization. The presence of clickbaits on any news aggregator portal may lead to an unpleasant experience for readers. Therefore, it is essential to distinguish clickbaits from authentic headlines to mitigate their impact on readers’ perception. Methodology: A total of one hundred thousand article headlines, consisting of clickbait and authentic news headlines, are collected from news aggregator sites. The collected data samples are divided into five training sets of balanced and unbalanced data. Natural language processing techniques are used to extract 19 manually selected features from the article headlines. Contribution: Three ensemble learning techniques, including bagging, boosting, and random forests, are used to design a classifier model for classifying a given headline as clickbait or non-clickbait. The performances of the learners are evaluated using accuracy, precision, recall, and F-measures. Findings: It is observed that the random forest classifier detects clickbaits better than the other classifiers, with an accuracy of 91.16% and an overall precision, recall, and F-measure of 91%.
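As a rough illustration of the feature-extraction step this abstract describes, the sketch below derives a handful of headline features in plain Python. The four features and the phrase list are illustrative stand-ins, not the authors' actual 19 hand-picked features; in the study, such feature vectors would be fed to bagging, boosting, or random forest classifiers.

```python
import re

# Illustrative clickbait cue phrases (assumed, not from the paper's feature set).
CLICKBAIT_PHRASES = ("you won't believe", "this is why", "what happens next")

def headline_features(headline: str) -> dict:
    """Extract a few simple lexical features from a news headline."""
    text = headline.lower()
    return {
        "word_count": len(text.split()),                       # headline length
        "has_question": "?" in text,                           # question-style bait
        "starts_with_number": bool(re.match(r"^\d", text)),    # listicle marker
        "clickbait_phrase": any(p in text for p in CLICKBAIT_PHRASES),
    }
```

Each headline becomes a fixed-length feature vector, so any off-the-shelf ensemble learner can be trained on the resulting table.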





Trust in Google - A Textual Analysis of News Articles About Cyberbullying

Aim/Purpose: Cyberbullying (CB) is an ongoing phenomenon that affects youth in negative ways. Using online news articles to provide information to schools can help with the development of comprehensive cyberbullying prevention campaigns and with restoring faith in news reporting. The inclusion of online news also allows for increased awareness of cybersafety issues for youth. Background: CB is an inherent problem of information delivery and security. Textual analysis provides input into prevention and training efforts to combat the issue. Methodology: Text extraction and analysis methods, including term and concept extraction, text link analysis, and sentiment analysis, are performed on a body of news articles. Contribution: News articles are determined to be a major source of information for comprehensive cyberbullying prevention campaigns. Findings: Online news articles are relatively neutral in their sentiment; term and topic extraction provide fertile ground for information presentation and context. Recommendation for Researchers: Researchers should seek support for research projects that extract timely information from online news articles. Future Research: Refinement of the terms-and-topics analytic model, as well as a system development approach for information extraction of online CB news.
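The sentiment-analysis step described above can be caricatured with a minimal lexicon-based scorer. The word lists and the scoring rule below are toy assumptions for illustration only, not the study's actual analytic pipeline, which used richer term, concept, and link analysis.

```python
# Toy sentiment lexicons (illustrative, not the study's lexicon).
POSITIVE = {"trust", "support", "safe", "help", "prevent"}
NEGATIVE = {"bully", "harass", "victim", "threat", "abuse"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word share."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)
```

Scores near zero across a corpus would mirror the study's finding that news coverage of cyberbullying is relatively neutral in sentiment.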





Rationalizing Fiction Cues: Psychological Effects of Disclosing Ads and the Inaccuracy of the Human Mind When Being in Parasocial Relationships

Aim/Purpose: Parasocial relationships are established today on social media between influencers and their followers. While marketing effects are well-researched, little is known about the meaning of such relationships and the psychological mechanisms behind them. This study, therefore, explores the questions: “How do followers on Instagram interpret explicit fiction cues from influencers?” and “What does this reveal about the meaning of parasocial attachment?” Background: With a billion-dollar advertising industry and a leading role in influencing opinion, Instagram is a significant societal and economic player. One factor in the effective influence of consumers is the relationship between influencer and follower. Research shows that disclosing advertisements surprisingly does not harm credibility, and sometimes even leads to greater trustworthiness and, in turn, willingness to purchase. While such reverse dynamics are measurable, the mechanisms behind them remain largely unexplored. Methodology: The study follows an explorative approach with in-depth interviews, which are analyzed with Mayring’s content analysis under a reconstructive paradigm. The findings are discussed through the lens of critical psychology. Contribution: Firstly, this study contributes to the understanding of the communicative dynamics of influencer-follower communication alongside the reality-fiction-gap model, and, secondly, it contributes empirical insights through the analysis of 22 explorative interviews. Findings: The findings show (a) how followers rationalize fiction cues and justify compulsive decision-making, (b) how followers are vulnerable to influences, and (c) how parasocial attachment formation overshadows rational logic and agency. The findings are discussed with regard to mechanisms, vulnerabilities, rationalizations and cognitive bias, and the social self, as well as the ethics of influencer marketing and politics.
Recommendation for Researchers: The contribution is relevant to relationship research, group dynamics and societal organizing, well-being, identity, and health perspectives, spanning psychology, sociology, media studies, pedagogy, and management. Future Research: Future research might seek to understand more about (a) quantifiable vulnerabilities, such as attachment styles, dispositions, and demographics, (b) usage patterns and possible factors of prevention, (c) cognitive and emotional mechanisms involved with larger samples, (d) the impact on relationships and well-being, and (e) possible conditions for the potential of parasocial attachment.





Colleagues’ Support and Techno-Complexity: The Importance of a Positive Aging Climate

Aim/Purpose: With a focus on promoting sustainable career paths, this article investigates the intricate relationship between age diversity management and techno-complexity, emphasizing the pivotal role of a supportive work environment. Background: In the modern workplace, the dynamics of age diversity emerge as a crucial element influencing the well-being and productivity of employees, particularly amidst the swiftly evolving digital landscape. This becomes especially pertinent when considering the unique challenges workers face in adapting to technological advancements. Methodology: Utilizing a cross-sectional design, data were collected from 160 employees in an Italian multinational company within the metalworking sector. Contribution: This study provides valuable insights into the complex dynamics between the aging climate, colleagues’ support, and techno-complexity. It emphasizes the importance of considering both the direct effects of organizational factors and their indirect influences through social dynamics and support structures within the workplace. Findings: The results revealed the mediating role of colleagues’ support in the relationship between the aging climate and techno-complexity. These findings highlight the importance of a supportive work environment in the context of sustainable career development, contributing to a comprehensive understanding of diversity management within the modern digital era. Recommendation for Researchers: Our results open a series of implications and future directions. First, the unexpected finding regarding the direct relationship between the aging climate and technostress calls for a deeper exploration of the intricacies involved. Future studies could delve into specific organizational contexts, technological demands, and individual differences that may modulate this relationship.
Future Research: Future studies could delve into specific organizational contexts, technological demands, and individual differences that may modulate this relationship.





Information Technology and the Complexity Cycle

Aim/Purpose: In this paper, we propose a framework identifying many of the unintended consequences of information technology and posit that the increased complexity brought about by IT is a proximate cause for these negative effects. Background: Builds upon the three-world model that has been evolving within the informing science transdiscipline. Methodology: We separate complexity into three categories: experienced complexity, intrinsic complexity, and extrinsic complexity. With the complexity cycle in mind, we consider how increasing complexity of all three forms can lead to unintended consequences at the individual, task, and system levels. Examples of these consequences are discussed at the individual level (e.g., deskilling, barriers to advancement), the task level (e.g., perpetuation of past practices), as well as broader consequences that may result from the need to function in an environment that is more extrinsically complex (e.g., erosion of predictable causality, shortened time horizons, inequality, tribalism). We conclude by reflecting on the implications of attempting to manage or limit increases of complexity. Contribution: Shows how many unintended consequences of IT could be attributed to growing complexity. Findings: We find that these three forms of complexity feed into one another, resulting in a positive feedback loop that we term the Complexity Cycle. As examples, we analyze ChatGPT, blockchain, and quantum computing through the lens of the complexity cycle, speculating how experienced complexity can lead to greater intrinsic complexity in task performance through the incorporation of IT, which, in turn, increases the extrinsic complexity of the economic/technological environment. Recommendations for Practitioners: Consider treating increasing task complexity as an externality that should be considered as new systems are developed and deployed. Recommendation for Researchers: Provides opportunities for empirical investigation of the proposed model.
Impact on Society: Systemic risks of complexity are proposed along with some proposals regarding how they might be addressed. Future Research: Empirical investigation of the proposed model and the degree to which cognitive changes created by the proposed complexity cycle are necessarily problematic.





Critical Review of Stack Ensemble Classifier for the Prediction of Young Adults’ Voting Patterns Based on Parents’ Political Affiliations

Aim/Purpose: This review paper aims to unveil some of the underlying machine-learning classification algorithms used for political election prediction and how stack ensembles have been explored. Additionally, it examines the types of datasets available to researchers and presents the results they have achieved. Background: Predicting the outcomes of presidential elections has always been a significant aspect of political systems in numerous countries. Analysts and researchers examining political elections rely on existing datasets from various sources, including tweets, Facebook posts, and so forth, to forecast future elections. However, these data sources often struggle to establish a direct correlation between voters and their voting patterns, primarily due to the manual nature of the voting process. Numerous factors influence election outcomes, including ethnicity, voter incentives, and campaign messages. The voting patterns of successors in regions of countries remain uncertain, and the reasons behind such patterns remain ambiguous. Methodology: The study examined a collection of articles obtained through a Google Scholar search, focusing on the use of ensemble and machine learning classifiers and their application in predicting political elections. Specific keywords for the search included “ensemble classifier,” “political election prediction,” “machine learning,” and “stack ensemble.” Contribution: The study provides a broad and deep review of political election prediction through the use of machine learning algorithms and summarizes the major dataset sources used in such analyses. Findings: Single classifiers have featured prominently in political election prediction; ensemble classifiers have been used and have proven potent, but their use in the field is rather low.
Recommendation for Researchers: Stack classification algorithms can play a significant role in machine learning classification when modelled tactfully and are efficient in handling labelled datasets. However, runtime becomes a hindrance when the dataset grows larger and the number of base classifiers forming the stack increases. Future Research: There is a need for more comprehensive analysis, for alternative data sources rather than depending largely on tweets, and for exploring ensemble machine learning classifiers in predicting political elections. Also, ensemble classification algorithms have demonstrated superior performance when carefully chosen and combined.
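The stacking idea the review discusses can be sketched in a few lines: base classifiers each predict a label, and a meta-level learned from held-out data decides how much to trust each one. The version below is a bare-bones assumption-laden illustration (here the meta-level is simply a per-model accuracy weight), not a production stack ensemble or the reviewed authors' method.

```python
def accuracy(preds, labels):
    """Fraction of predictions matching the true labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def fit_stack(base_learners, X_val, y_val):
    """Meta-level: learn a weight per base learner from validation accuracy."""
    return [accuracy([m(x) for x in X_val], y_val) for m in base_learners]

def predict_stack(base_learners, weights, x):
    """Combine base predictions by a weighted vote and return the winning label."""
    scores = {}
    for m, w in zip(base_learners, weights):
        label = m(x)
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)
```

A real stack ensemble would train a full meta-classifier (e.g., logistic regression) on the base learners' out-of-fold predictions, which is also where the runtime cost the review mentions comes from as the number of base classifiers grows.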