Making Information Systems less Scrugged: Reflecting on the Processes of Change in Teaching and Learning

Improving Outcome Assessment in Information Technology Program Accreditation

Using Digital Logs to Reduce Academic Misdemeanour by Students in Digital Forensic Assessments

Effective Adoption of Tablets in Post-Secondary Education: Recommendations Based on a Trial of iPads in University Classes

An Exploratory Study on Using Wiki to Foster Student Teachers’ Learner-centered Learning and Self and Peer Assessment

Self-regulated Mobile Learning and Assessment: An Evaluation of Assessment Interfaces
Published: 2014-12-22
Experiences of Using Automated Assessment in Computer Science Courses
Published: 2015-10-08

In this paper we discuss the use of automated assessment in a variety of computer science courses that have been taught at Israel Academic College by the authors. The course assignments were assessed entirely automatically using Checkpoint, a web-based automated assessment framework. The assignments all used free-text questions (where the students type in their own answers). Students were allowed to correct errors based on feedback provided by the system and resubmit their answers. A total of 141 students were surveyed to assess their opinions of this approach, and we analysed their responses. Analysis of the questionnaire showed a low correlation between questions, indicating the statistical independence of the individual questions. As a whole, student feedback on using Checkpoint was very positive, emphasizing the benefits of multiple attempts, impartial marking, and a quick turnaround time for submissions. Many students said that Checkpoint gave them confidence in learning and motivation to practise. Students also said that the detailed feedback that Checkpoint generated when their programs failed helped them understand their mistakes and how to correct them.
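The feedback-and-resubmit loop described above can be sketched in a few lines. This is not Checkpoint's implementation (the abstract does not publish one); the normalisation rule, the accepted-answer set, and the feedback strings are illustrative assumptions.

```python
# A minimal sketch of automated free-text grading with resubmission,
# in the spirit of (but not taken from) the Checkpoint framework.
# The normalisation rule and feedback messages are illustrative assumptions.

def normalise(answer: str) -> str:
    """Lower-case and collapse whitespace so trivial differences don't fail a student."""
    return " ".join(answer.lower().split())

def grade(answer: str, accepted: set) -> tuple:
    """Return (correct?, feedback). Students may resubmit after reading the feedback."""
    if normalise(answer) in {normalise(a) for a in accepted}:
        return True, "Correct."
    return False, "Not accepted - check your syntax and resubmit."

accepted_answers = {"public static void main(String[] args)"}
ok, feedback = grade("Public static void  Main(String[] args)", accepted_answers)
```

A real grader would of course need far richer matching (e.g., compiling and running submitted code), but the core loop of grade, feedback, resubmit is the same.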
Effectiveness of Peer Assessment in a Professionalism Course Using an Online Workshop
Published: 2015-01-22

An online Moodle Workshop was evaluated for peer assessment effectiveness. A quasi-experiment was designed using a Seminar in Professionalism course taught in face-to-face mode to undergraduate students across two campuses. The first goal was to determine whether the Moodle Workshop awarded a fair peer grader grade. The second was to estimate whether students were consistent and reliable in performing their peer assessments. Statistical techniques were used to answer the research hypotheses. Although the Moodle Workshop did not have a built-in measure for peer assessment validity, t-tests and reliability estimates were calculated to demonstrate that the grades were consistent with what faculty expected. Implications for improving teaching were discussed, and recommendations were provided to enhance Moodle.
A Detailed Rubric for Assessing the Quality of Teacher Resource Apps
Published: 2016-06-25

Since the advent of the iPhone and the rise of mobile technologies, educational apps represent one of the fastest growing markets, and both the mobile technology and educational app markets are predicted to continue growing for the foreseeable future. The irony, however, is that even with a booming market for educational apps, very little research has been conducted on their quality. Though some instruments have been developed to evaluate apps geared towards student learning, no such instrument has been created for teacher resource apps, which are designed to assist teachers in completing common tasks (e.g., taking attendance, communicating with parents, and monitoring student learning and behavior). Moreover, when teachers visit the App Store or Google Play to learn about apps, the only ratings provided to them are generic, five-point evaluations, which do not include qualifiers that explain why an app earned three, two, or five points. To address that gap, previously conducted research on designing instructional technologies, coupled with best practices for supporting teachers, was first identified. That information was then used to construct a comprehensive rubric for assessing teacher resource apps. In this article, a discussion that explains the need for such a rubric is offered before describing the process used to create it. The article then presents the rubric, discusses its different components and potential limitations, and concludes with suggestions for future research based on the rubric.
Investigating the Feasibility of Automatic Assessment of Programming Tasks
Published: 2018-11-24

Aim/Purpose: The aims of this study were to investigate the feasibility of automatic assessment of programming tasks and to compare manual assessment with automatic assessment in terms of the effect of the different assessment methods on the marks of the students.
Background: Manual assessment of programs written by students can be tedious. Automatic assessment methods might assist in reducing the assessment burden, but there may be drawbacks diminishing the benefits of applying automatic assessment. The paper reports on the experience of a lecturer trying to introduce automated grading. Students’ solutions to a practical Java programming test were assessed both manually and automatically, and the lecturer tied the experience to the unified theory of acceptance and use of technology (UTAUT).
Methodology: The participants were 226 first-year students registered for a Java programming course. Of the tests the participants submitted, 214 were assessed both manually and automatically. Various statistical methods were used to compare the manual assessment of students’ solutions with the automatic assessment of the same solutions. A detailed investigation of the reasons for differences was also carried out. A further data collection method was the lecturer’s reflection on the feasibility of automatic assessment of programming tasks, based on the UTAUT.
Contribution: This study enhances the knowledge regarding the benefits and drawbacks of automatic assessment of students’ programming tasks. The research contributes to the UTAUT by applying it in a context where it has hardly been used. Furthermore, the study confirms previous work stating that automatic assessment may be less reliable for students with lower marks, but more trustworthy for high achieving students.
Findings: An automatic assessment tool verifying functional correctness might be feasible for assessing programs written during practical lab sessions but could be less useful for practical tests and exams, where functional, conceptual, and structural correctness should all be evaluated. In addition, the researchers found that automatic assessment seemed to be more suitable for assessing high achieving students.
Recommendations for Practitioners: Lecturers should know what assessment goals they want to achieve and choose the assessment method accordingly. In addition, practitioners should be aware of the drawbacks of automatic assessment before choosing it.
Recommendation for Researchers: This work serves as an example of how researchers can apply the UTAUT when conducting qualitative research in different contexts.
Impact on Society: The study would be of interest to lecturers considering automated assessment. The two assessments used in the study are typical of the way grading takes place in practice and may help lecturers understand what could happen if they switch from manual to automatic assessment.
Future Research: Investigate the feasibility of automatic assessment of students’ programming tasks in a practical lab environment while accounting for structural, functional, and conceptual assessment goals.
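The study above compares manual and automatic marks for the same submissions using statistical methods; a paired t-statistic is one standard choice for such same-student comparisons. The sketch below computes it by hand with the standard library only, using made-up marks (the paper's actual data and tests are not reproduced here).

```python
# Illustrative paired comparison of manual vs automatic marks for the same
# students. The mark values below are invented for the sketch.
import math
import statistics

def paired_t(manual, auto):
    """t = mean(d) / (stdev(d) / sqrt(n)) over the paired differences d."""
    diffs = [m - a for m, a in zip(manual, auto)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

manual_marks = [78, 65, 90, 55, 70, 82]
auto_marks   = [75, 60, 90, 48, 66, 80]
t = paired_t(manual_marks, auto_marks)  # positive t: manual marks tend to be higher
```

A large positive t here would suggest the automatic assessor grades systematically lower than the human marker, which is consistent with the finding that it is less reliable for weaker students.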
A Real-time Plagiarism Detection Tool for Computer-based Assessments
Published: 2018-02-21

Aim/Purpose: The aim of this article is to develop a tool that detects plagiarism in real time amongst students being evaluated in a computer-based assessment setting.
Background: Cheating on, or copying all or part of, the source code of a program is a serious concern for academic institutions. Many institutions apply a combination of policy-driven and plagiarism-detection approaches. These mechanisms are either proactive or reactive and focus on identifying, catching, and punishing those found to have cheated or plagiarized. To be more effective against plagiarism, mechanisms that detect cheating or colluding in real time are desirable.
Methodology: In the development of a tool for real-time plagiarism prevention, literature review and prototyping were used. The prototype was implemented in the Delphi programming language using Indy components.
Contribution: A real-time plagiarism detection tool suitable for use in a computer-based assessment setting is developed. This tool can be used to complement other existing mechanisms.
Findings: The developed tool was tested in an environment with 55 personal computers and found to be effective in detecting unauthorized access to the internet, the intranet, and USB ports on the personal computers.
Recommendations for Practitioners: The developed tool is suitable for use in any environment where computer-based evaluation may be conducted.
Recommendation for Researchers: This work provides a set of criteria for developing a real-time plagiarism prevention tool for use in computer-based assessment.
Impact on Society: The developed tool prevents academic dishonesty during an assessment process, thereby instilling confidence in assessment processes and respect for the education system in society.
Future Research: As future work, we propose comparing our tool with similar tools in terms of performance and features. In addition, we want to extend our work to test the scalability of the tool in larger settings.
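The tool described above works by monitoring network and USB access during the test; a complementary (and much simpler) ingredient of plagiarism detection, not part of that Delphi tool, is pairwise source-code similarity after submission. The sketch below uses the standard library's difflib; the 0.9 threshold is an illustrative assumption.

```python
# Flag suspiciously similar submission pairs with difflib. This illustrates a
# post-hoc similarity check, NOT the real-time monitoring tool from the paper.
# The similarity threshold of 0.9 is an assumed value for the sketch.
from difflib import SequenceMatcher
from itertools import combinations

def flag_similar(submissions, threshold=0.9):
    """Return student pairs whose code similarity ratio meets the threshold."""
    flagged = []
    for (a, code_a), (b, code_b) in combinations(submissions.items(), 2):
        if SequenceMatcher(None, code_a, code_b).ratio() >= threshold:
            flagged.append((a, b))
    return flagged

subs = {
    "alice": "for i in range(10): print(i)",
    "bob":   "for i in range(10): print(i)",   # identical copy
    "carol": "total = sum(range(10)); print(total)",
}
pairs = flag_similar(subs)  # only the alice/bob pair exceeds the threshold
```

In practice such similarity checks are run over tokenized or normalised code so that renaming variables does not defeat them; tools like MOSS and JPlag take that further.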
Constructed Response or Multiple-Choice Questions for Assessing Declarative Programming Knowledge? That is the Question!
Published: 2019-12-26

Aim/Purpose: This paper presents a data mining approach for analyzing responses to advanced declarative programming questions. The goal of this research is to find a model that can explain the results obtained by students when they take exams with constructed response (CR) questions and with equivalent multiple-choice questions (MCQ).
Background: The assessment of acquired knowledge plays a fundamental role in the teaching-learning process. It helps to identify the factors that can help the teacher develop pedagogical methods and evaluation tools, and it also contributes to the self-regulation of learning. However, the better question format for assessing declarative programming knowledge is still a subject of ongoing debate. While some research advocates the use of constructed responses, other work emphasizes the potential of multiple-choice questions.
Methodology: A sensitivity analysis was applied to extract useful knowledge from the relevance of the characteristics (i.e., the input variables) used in the data mining process to compute the score.
Contribution: Such knowledge helps teachers decide which format to use with respect to their objectives and the expected student results.
Findings: The results show a set of factors that influence the discrepancy between answers in the two formats.
Recommendations for Practitioners: Teachers can make an informed decision about whether to choose multiple-choice or constructed-response questions, taking into account the results of this study.
Recommendation for Researchers: In this study, a block of exams with CR questions was verified to complement the learning area, returning greater performance in the evaluation of students and improving the teaching-learning process.
Impact on Society: The results of this research confirm the findings of several other researchers that the use of ICT and the application of MCQ add value to the evaluation process. In most cases the student is more likely to succeed with MCQ; however, if the teacher prefers to evaluate with CR, other research approaches are needed.
Future Research: Future research should include other question formats.
Improving Workgroup Assessment with WebAVALIA: The Concept, Framework and First Results
Published: 2020-09-21

Aim/Purpose: The purpose of this study is to develop an efficient methodology that can assist evaluators in assessing a variable number of individuals working in groups, while guaranteeing that the assessment depends on the group members’ performance and contribution to the work developed.
Background: Collaborative work has been gaining popularity in academic settings. However, group assessment needs to reflect each individual’s performance. The problem rests on the need to distinguish each member of the group in order to provide fair and unbiased assessments.
Methodology: Design Science Research (DSR) methodology supported the design of a framework able to provide the evaluator with the means to distinguish individuals in a workgroup and deliver fair results. Hevner’s DSR guidelines were followed in describing WebAVALIA. To evaluate the framework, a quantitative study was performed, and the first results are presented.
Contribution: This paper provides a methodological solution for the fair evaluation of collaborative work through a tool that allows its users to perform self-assessment and peer assessment. These assessments reflect each user’s perspective on the performance of every group member throughout the development of the work.
Findings: A first analysis of the results indicates that the developed method provides fairness in the assessment of group members, distinguishing amongst individuals. Each group member therefore obtains a mark that corresponds to their specific contribution to the workgroup.
Recommendations for Practitioners: For those who intend to apply this workgroup assessment method, it is important to raise student awareness of the methodology that is going to be used. That is, all the functionalities and steps in WebAVALIA have to be thoroughly explained before the beginning of the project. The evaluators then have to decide about the students’ intermediate voting, namely whether or not to publish student results throughout the project’s development. If these intermediate results are displayed, the evaluator must try to encourage collaboration among workgroup members, rather than competition.
Recommendation for Researchers: This study explores the design and development of an e-assessment tool, WebAVALIA. In order to assess its feasibility, its use in other institutions or contexts is recommended, as is the gathering of user opinions. It would then be interesting to compare the findings of this study with the results from other experiments.
Impact on Society: Sometimes people come to reject collaborative work because they feel exploited by biased evaluation results. Distinguishing group members’ assessments according to each one’s performance may give each individual a sense of fairness and reward, leading to greater openness and willingness towards collaborative work.
Future Research: There are plans to implement the method in other group assessment contexts, such as sports and business environments, other higher education institutions, and technical training, in other cultures and countries, and to compare satisfaction results across this myriad of contexts. Other future plans are to further explore the mathematical formulations and the respective WebAVALIA supporting algorithms.
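One way to turn self and peer votes into differentiated individual marks, in the spirit of WebAVALIA, is to scale the group mark by each member's share of the votes. The weighting formula below (individual mark = group mark x member's mean vote / group-wide mean vote) is an assumed illustration, not the tool's published algorithm, and the votes are invented.

```python
# Sketch: differentiating individual marks within a shared group mark from
# self/peer votes. The weighting formula and vote values are assumptions.
def individual_marks(group_mark, votes):
    """votes maps each member to the scores received from all members (incl. self)."""
    means = {m: sum(v) / len(v) for m, v in votes.items()}
    overall = sum(means.values()) / len(means)
    # Scale the group mark by each member's vote share relative to the group mean.
    return {m: round(group_mark * means[m] / overall, 1) for m in votes}

marks = individual_marks(
    group_mark=15.0,
    votes={"ana": [9, 8, 9], "rui": [9, 9, 8], "eva": [5, 6, 5]},
)
# ana and rui, voted highly by peers, end up above the group mark; eva below it.
```

Note that with this kind of scheme the intermediate-results decision discussed above matters: publishing running vote totals can shift how members vote for the remainder of the project.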
E-Assessment with Multiple-Choice Questions: A 5 Year Study of Students’ Opinions and Experience
Published: 2020-01-24

Aim/Purpose: The aim of this study is to understand students’ opinions and perceptions of e-assessment when the assessment process was changed from a traditional computer assisted method to a multiple-choice Moodle based method.
Background: In order to implement continuous assessment for a large number of students, several shifts are necessary, which implies as many different tests as the number of shifts required. Consequently, it is difficult to ensure homogeneity across the different tests, and a huge amount of grading time is needed. These problems with traditional assessment based on computer assisted tests led to a re-design of the assessment, resulting in the use of multiple-choice Moodle tests.
Methodology: A longitudinal, concurrent, mixed method study was implemented over a five-year period. A survey was developed and completed by 815 undergraduate students who experienced the electronic multiple-choice question (eMCQ) assessment in the courses of the IS department. Qualitative analyses included open-ended survey responses and interviews with repeating students in the first year.
Contribution: This study provides a tool for reflecting on how to incorporate frequent assessment moments in courses with a high number of students without overloading teachers with a huge workload. The research analysed the efficiency of assessing non-theoretical topics using eMCQ while ensuring the homogeneity of assessment tests, which need to be complemented with other assessment methods in order to ensure that students develop and acquire the expected skills and competencies.
Findings: The students involved in the study appreciate the online multiple-choice quiz assessment method and perceive it as fair, but their preference for the assessment method varied throughout the years. These changes in perception may be related to the improvement of the question bank and the categorisation of questions by difficulty level, which led to the elimination of the ‘luck factor’. Another major finding is that although online multiple-choice quizzes are used successfully in the assessment of theoretical topics, the same is not in evidence for practical topics. This assessment therefore needs to be complemented with other methods in order to achieve the expected learning outcomes.
Recommendations for Practitioners: In order to evaluate the same expected learning outcomes in practical topics, particularly in technology and information systems subjects, the evaluator should complement online multiple-choice quiz assessment with other approaches, such as a PBL method, homework assignments, and/or other tasks performed during the semester.
Recommendation for Researchers: This study explores e-assessment with online multiple-choice quizzes in higher education. It provides a survey that can be applied in other institutions that also use online multiple-choice quizzes to assess non-theoretical topics. To better understand students’ opinions on the development of skills and competencies with online multiple-choice quizzes, on the one hand, and with classical computer assisted assessment, on the other, it would be necessary to add questions concerning these aspects. It would then be interesting to compare the findings of this study with results from other institutions.
Impact on Society: The increasing number of students in higher education has led to increased use of e-assessment activities, since they provide a fast and efficient way to assess a high number of students. This research therefore provides meaningful insight into stakeholders’ perceptions of online multiple-choice quizzes on practical topics.
Future Research: An interesting future study would be to obtain the opinions of a particular set of students on two tests, one using online multiple-choice quizzes and the other a classical computer assisted assessment method. A natural extension of the present study is a comparative analysis of the grades obtained by students who performed one or the other type of assessment (online multiple-choice quizzes vs. classical computer assisted assessment).
A Cognitive Approach to Assessing the Materials in Problem-Based Learning Environments
Published: 2021-07-12

Aim/Purpose: The purpose of this paper is to develop and evaluate a debiasing-based approach to assessing the learning materials in problem-based learning (PBL) environments.
Background: Research in cognitive debiasing suggests nine debiasing strategies that improve decision-making. Given the large number of decisions made in semester-long, problem-based learning projects, multiple tools and techniques help students make decisions. However, instructors may struggle to identify the specific tools or techniques that could be modified to best improve students’ decision-making in the project. Furthermore, a structured approach for identifying these modifications, one that matches the debiasing strategies with the tools and techniques, is lacking.
Methodology: This debiasing framework for the PBL environment was developed through a study of the debiasing literature and applied within an e-commerce course, using the Model for Improvement continuous improvement process as an illustrative case to show its potential. In addition, a survey of the students, archival information, and participant observation provided feedback on the debiasing framework and its ability to assess the tools and techniques within the PBL environment.
Contribution: This paper demonstrates how debiasing theory can be used within a continuous improvement process for PBL courses. By focusing on a cognitive debiasing-based approach, this framework helps instructors (1) identify which tools and techniques to change in a PBL environment, and (2) assess which tools and techniques failed to adequately debias the students, suggesting changes for future cycles.
Findings: Using the debiasing framework in an e-commerce course with significant PBL elements provides evidence that the framework can be used within IS courses and more broadly. In this particular case, the change identified in a prior cycle proved effective, and additional issues were identified for improvement.
Recommendations for Practitioners: With the growing use of semester-long PBL projects in business schools, instructors need to ensure that their project designs incorporate techniques that improve student learning and decision-making. This approach provides a means for assessing the quality of that design.
Recommendation for Researchers: This study uses debiasing theory to improve course techniques. Researchers interested in assessment, course improvement, and program improvement should incorporate debiasing theory within PBL environments or other types of decision-making scenarios.
Impact on Society: Increased awareness of cognitive biases can help instructors, students, and professionals make better decisions and recommendations. By developing a framework for evaluating cognitive debiasing strategies, we help instructors improve projects that prepare students for complex and multifaceted real-world projects.
Future Research: The approach could be applied in multiple contexts, within other courses, and more widely within information systems. The framework might also be refined to make it more concise, integrated with assessment, or usable in more contexts.
Formative Assessment Activities to Advance Education: A Case Study
Published: 2021-05-30

Aim/Purpose: During the education of future engineers and experts in the fields of computer science and information communication technology, achieving learning outcomes related to different levels of cognitive ability and knowledge dimensions can be a challenge.
Background: Teachers need to design an appropriate set of activities for students that combines theory-based knowledge acquisition with practical training in technical skills. Including various formative assessment activities during the course can positively affect students’ motivation for learning and ensure appropriate and timely feedback that guides students in further learning.
Methodology: The research presented in this paper proposes an approach for course delivery in the field of software engineering and determines whether using the approach increases students’ academic achievement. Using the proposed approach, the undergraduate course Process Modeling was redesigned and an experimental study was conducted. Course results of the students (N=82) who took the new version of the course (experimental group) were compared to the results of students from the control group (N=66).
Contribution: An approach for a blended learning course in the field of software engineering was developed. The approach is based on formative assessment activities that promote collaboration and the use of digital tools. The newly designed activities are used to encourage deeper acquisition of theoretical content and enhance the subject-specific skills needed for practical tasks.
Findings: The results showed that students who participated in the formative assessment activities achieved significantly better results. They had significantly higher scores on the main components of assessment compared to students from the control group. In addition, students from the experimental group expressed positive views about the effectiveness of the approach used.
Recommendations for Practitioners: The proposed approach has the potential to increase students’ motivation and academic achievement, so practitioners should consider applying it in their own context.
Recommendation for Researchers: Researchers are encouraged to conduct additional studies exploring the effectiveness of the approach with different courses and participants, as well as to provide further insights into its applicability and acceptance by students.
Impact on Society: The paper provides an approach and an example of good practice that may benefit university teachers in the fields of computer science, information communication technology, and engineering.
Future Research: In the future, face-to-face activities will be adapted for an online environment. Future work will also include research on the possibilities of personalizing activities in accordance with students’ characteristics.
Objective Assessment in Java Programming Language Using Rubrics
Published: 2022-12-12

Aim/Purpose: This paper focuses on designing and implementing a rubric for objective Java programming assessments. An unsupervised learning approach was used to group learners based on their performance on the results obtained with the rubric, reflecting their learning ability.
Background: Students’ learning outcomes have been evaluated subjectively using rubrics for years. Subjective assessments are simple to construct yet inconsistent and biased to evaluate. Objective assessments are stable, reliable, and easy to conduct, but they usually lack rubrics.
Methodology: In this study, a top-down assessment approach is followed: a rubric focused on the learning outcomes of the subject is designed, and the proficiency of learners is judged by their performance on the given task. A Java rubric is proposed based on learning outcomes such as syntactical, logical, conceptual, and advanced Java skills. A Java objective quiz (with multiple correct options) is prepared based on the rubric criteria, comprising five questions per criterion. The examination was conducted with 209 students (100 from the MCA course and 109 from the B.Tech. course). The suggested rubric was used to compute the results. K-means clustering was applied to the results to classify the students according to their learning preferences and abilities.
Contribution: This work contributes to the field of rubric design by creating an objective programming assessment and analyzing learners’ performance using machine learning techniques. It also facilitates a reliable feedback approach, offering various possibilities in student learning analytics.
Findings: The designed rubric, partial scoring, and cluster analysis of the results make it possible to provide individual feedback and also to group students by their learning skills. For example, on average, learners are good at remembering syntax and concepts, mediocre in logical and critical thinking, and need more practice in code optimization and designing applications.
Recommendations for Practitioners: The practical implications of this work include rubric design for objective assessments and building an informative feedback process. Faculty can use this approach as an alternative assessment measure. Such rubrics are strong pillars of e-assessment and virtual learning platforms.
Recommendation for Researchers: This research presents a novel approach to rubric-based objective assessment, providing a fresh perspective and promising opportunities for researchers in the current era of digital education.
Impact on Society: In order to accomplish the shared objective of reflective learning, the grading rubric and its accompanying analysis can be utilized by both instructors and students. As an instructional assessment tool, the rubric helps instructors align their pedagogies with students’ learning levels and assists students in updating their learning paths based on the informative topic-wise scores generated with the help of the rubric.
Future Research: The rubric designed in this study can be extended to other programming languages and subjects. Further, an adaptable weighted rubric can be created to support a flexible and reflective learning process. In addition, outcome-based learning can be achieved by measuring and analyzing student improvement after rubric evaluation.
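The clustering step described above can be re-created with a tiny k-means over per-criterion rubric scores. The score vectors, k=2, and the deterministic seeding below are assumptions for the sketch; the study clustered real quiz results and may have chosen k differently.

```python
# Illustrative k-means over rubric score vectors, grouping students by skill
# profile. Scores, k=2, and the seeding are assumed values for the sketch.
import math

def kmeans(points, k=2, iters=20):
    """Plain k-means on equal-length score vectors; returns a cluster label per point."""
    centroids = points[:k]  # deterministic seeding keeps the sketch reproducible
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # Recompute each centroid as the mean of its assigned points.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return labels

# Each row: [syntax, logic, concepts, advanced] rubric scores out of 5.
scores = [[5, 4, 5, 3], [4, 4, 5, 2], [2, 1, 2, 0], [1, 2, 1, 1], [5, 5, 4, 3]]
labels = kmeans(scores)  # high scorers land in one cluster, low scorers in the other
```

A production analysis would use a vetted implementation (e.g., scikit-learn's KMeans) with multiple random restarts, since k-means is sensitive to initialisation.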
sse Matching Authors and Reviewers in Peer Assessment Based on Authors’ Profiles By Published On :: 2022-12-06 Aim/Purpose: To encourage students’ engagement in peer assessments and provide students with better-quality feedback, this paper describes a technique for author-reviewer matching in peer assessment systems – a Balanced Allocation algorithm. Background: Peer assessment concerns evaluating the work of colleagues and providing feedback on their work. This process is widely applied as a learning method to involve students in the progress of their learning. However, as students have different ability levels, the efficacy of the peer feedback differs from case to case. Thus, peer assessment may not provide satisfactory results for students. In order to mitigate this issue, this paper explains and evaluates an algorithm that matches the author to a set of reviewers. The technique matches authors and reviewers based on how difficult the authors perceived the assignment to be, and the algorithm then matches the selected author to a group of reviewers who may meet the author’s needs in regard to the selected assignment. Methodology: This study used the Multiple Criteria Decision-Making methodology (MCDM) to determine a set of reviewers from among the many available options. The weighted sum method was used because the data that have been collected in user profiles are expressed in the same unit. This study produced an experimental result, examining the algorithm with a real collected dataset and mock-up dataset. In total, there were 240 students in the real dataset, and it contained self-assessment scores, peer scores, and instructor scores for the same assignment. The mock-up dataset created 1000 records for self-assessment scores. The algorithm was evaluated using focus group discussions with 29 programming students and interviews with seven programming instructors. Contribution: This paper contributes to the field in the following two ways. 
First, an algorithm using an MCDM methodology was proposed to match authors and reviewers in order to facilitate the peer assessment process. In addition, the algorithm used self-assessment as an initial data source to match users, rather than randomly creating reviewer-author pairs. Findings: The findings show that the algorithm accurately matched three reviewers to each author. Furthermore, the algorithm was evaluated from students’ and instructors’ perspectives. The results are very promising, showing a high level of satisfaction with the Balanced Allocation algorithm. Recommendations for Practitioners: We recommend that instructors consider using the Balanced Allocation algorithm to match students in peer assessments, and consequently benefit from personalizing peer assessment based on students’ needs. Recommendation for Researchers: Several MCDM methods could be explored, such as the analytic hierarchy process (AHP) if different attributes are collected, or an artificial neural network (ANN) if fuzzy data is available in the user profile. Each method suits particular cases depending on the data available for decision-making. Impact on Society: Suitable pairing in peer assessment would increase the credibility of the peer assessment process and encourage students’ engagement in peer assessments. Future Research: The Balanced Allocation algorithm could be applied to one group while a peer assessment with random matching is conducted with another group, followed by a t-test to determine the impact of matching on students’ performance in the peer assessment activity. Full Article
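The weighted sum method mentioned in the abstract can be sketched briefly. This is a minimal illustration of weighted-sum MCDM ranking, not the paper's Balanced Allocation algorithm itself: the profile criteria, weights, and the rule of taking the top three reviewers are assumptions for demonstration.

```python
# Hypothetical sketch: reviewers have same-unit profile scores on shared
# criteria, and the author's perceived assignment difficulty sets the weights.

def weighted_sum(scores, weights):
    """Aggregate same-unit criterion scores into one suitability value."""
    return sum(s * w for s, w in zip(scores, weights))

def match_reviewers(author_weights, reviewer_profiles, k=3):
    """Rank reviewers by weighted-sum score and return the top k IDs."""
    ranked = sorted(
        reviewer_profiles.items(),
        key=lambda item: weighted_sum(item[1], author_weights),
        reverse=True,
    )
    return [reviewer_id for reviewer_id, _ in ranked[:k]]

# Illustrative profiles: [self-assessment score, past feedback quality].
reviewers = {
    "r1": [0.9, 0.4],
    "r2": [0.6, 0.9],
    "r3": [0.8, 0.8],
    "r4": [0.3, 0.5],
}
print(match_reviewers([0.5, 0.5], reviewers, k=3))
```

Because all criteria are in the same unit, a plain weighted sum suffices here; with mixed units or hierarchical criteria, methods such as AHP (also noted in the abstract) would be more appropriate.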
sse Unveiling Learner Emotions: Sentiment Analysis of Moodle-Based Online Assessments Using Machine Learning By Published On :: 2023-07-24 Aim/Purpose: The study focused on learner sentiments and experiences after using the Moodle assessment module and trained a machine learning classifier for future sentiment predictions. Background: Learner assessment is one of the standard methods instructors use to measure students’ performance and ascertain whether teaching objectives have been met. In pedagogical design, assessment planning is vital in lesson content planning, to the extent that curriculum designers and instructors primarily think like assessors. Assessment aids students in redefining their understanding of a subject and serves as the basis for deeper study of that subject. Positive assessment results also motivate learners and suggest employment directions to students. Assessment results guide not just the students but also the instructor. Methodology: A modified methodology was used to carry out the study. The revised methodology is divided into two major parts: a text-processing phase and a classification-model phase. The text-processing phase consists of cleaning, tokenization, and stop-word removal, while the classification-model phase consists of dataset training using a sentiment analyser, a polarity classification model, and a prediction validation model. The text-processing phase of the referenced methodology did not include tokenization or stop-word removal. In addition, its classification model did not include a sentiment analyser. Contribution: The reviewed literature reveals two major omissions: sentiment responses to using Moodle for online assessment, particularly in developing countries with unstable internet connectivity, have not been investigated, and variations of the k-fold cross-validation technique for detecting overfitting and developing a reliable classifier have been largely neglected. 
In this study we built a Sentiment Analyser for Learner Emotion Management using Moodle for assessment, with data collected from a Ghanaian tertiary institution, and developed a classification model for future sentiment predictions by evaluating the 10-fold and 5-fold techniques on prediction accuracy. Findings: After training and testing, the Random Forest (RF) algorithm emerged as the best classifier using the 5-fold cross-validation technique, with an accuracy of 64.9%. Recommendations for Practitioners: Instead of a closed-ended questionnaire for learner feedback assessment, an open-ended mechanism should be used, since learners can then freely express their emotions without restriction. Recommendation for Researchers: Feature selection for sentiment analysis does not always improve the overall accuracy of the classification model. Traditional machine learning algorithms should always be compared to either ensemble or deep learning algorithms. Impact on Society: Understanding learners’ emotions without restriction is important in the educational process. The pedagogical implementation of lessons and assessment should focus on machine learning integration. Future Research: To compare ensemble and deep learning algorithms. Full Article
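The text-processing phase described in the methodology (cleaning, tokenization, stop-word removal) followed by a polarity step can be sketched as follows. This is a toy illustration under stated assumptions, not the study's pipeline: the stop-word set and sentiment lexicon are tiny hypothetical samples, and the study trained an RF classifier rather than using a fixed lexicon.

```python
import re

# Hypothetical minimal stop-word list and sentiment lexicon (illustrative only).
STOP_WORDS = {"the", "is", "a", "an", "to", "of", "and", "was", "it"}
LEXICON = {"easy": 1, "helpful": 1, "great": 1, "slow": -1, "confusing": -1}

def preprocess(text):
    """Text-processing phase: cleaning, tokenization, stop-word removal."""
    cleaned = re.sub(r"[^a-z\s]", "", text.lower())    # cleaning
    tokens = cleaned.split()                           # tokenization
    return [t for t in tokens if t not in STOP_WORDS]  # stop-word removal

def polarity(text):
    """Toy lexicon-based polarity label: positive / negative / neutral."""
    score = sum(LEXICON.get(t, 0) for t in preprocess(text))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("The Moodle quiz was easy and helpful!"))
print(polarity("The upload was slow and confusing."))
```

In the study itself, the labels produced by the sentiment analyser feed the classification-model phase, where 5-fold and 10-fold cross-validation are compared to pick a classifier that generalizes without overfitting.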
sse Development and validation of a scale to measure minimalism: a study analysing psychometric assessment of minimalistic behaviour from a consumer perspective By www.inderscience.com Published On :: 2024-11-11T23:20:50-05:00 This research aims to establish a valid and accurate measurement scale and identify consumer-driven characteristics of minimalism. The study employed a hybrid approach to generate items for minimalism. Expert interviews were conducted to identify the items in the first phase, followed by a consumer survey to obtain responses in the second phase. A five-point Likert scale was used to collect the data, which was then subjected to reliability and validity checks. Structural equation modelling was used to test the model. The findings demonstrated five dimensions by which consumers perceive minimalism: decluttering, mindful consumption, aesthetic choices, financial freedom, and a sustainable lifestyle. The outcome also revealed a high correlation between simplicity and well-being. This study is the first to provide a reliable and valid instrument for measuring minimalism. The results have several theoretical and practical ramifications for society and policymakers. They will support policymakers in gauging and encouraging minimalistic practices, which enhance environmental performance and lower carbon footprints. Full Article
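Reliability checks on Likert-scale items of this kind are commonly reported as Cronbach's alpha. The following pure-Python sketch shows that computation; it is a generic illustration with made-up sample data, not the paper's analysis (which the abstract does not detail), and assumes each row holds one item's scores across the same respondents.

```python
def sample_variance(xs):
    """Unbiased (n-1) sample variance."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of rows, one row per scale item,
    each row holding that item's scores for the same respondents."""
    k = len(items)
    n = len(items[0])
    sum_item_vars = sum(sample_variance(row) for row in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / sample_variance(totals))

# Illustrative data: 3 items on a five-point Likert scale, 4 respondents.
likert = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]
print(round(cronbach_alpha(likert), 3))  # ≈ 0.818
```

Values around 0.7 or above are conventionally taken as acceptable internal consistency for a new scale, though the threshold depends on the research context.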
sse Assessing supply chain risk management capabilities and its impact on supply chain performance: moderation of AI-embedded technologies By www.inderscience.com Published On :: 2024-10-10T23:20:50-05:00 This research investigates the relationship between risk management and supply chain performance (SCP), along with the moderation of AI-embedded technologies such as big data analytics, the Internet of Things (IoT), virtual reality, and blockchain. The study analysed 644 questionnaires using the structural equation modelling (SEM) method. Using SmartPLS, it was revealed that financial risk management (FRM) is positively linked with SCP. Second, AI was observed to significantly moderate the connection between FRM and SCP. In addition, the study presents insights into supply chain and AI-enabled technologies and how these capabilities can advance SCP. Finally, managerial and theoretical implications are described for supply chain managers, along with limitations for future scholars. Full Article
sse Intellectual property protection for virtual assets and brands in the Metaverse: issues and challenges By www.inderscience.com Published On :: 2024-10-30T23:20:50-05:00 Intellectual property rights face new obstacles and possibilities as a result of the emergence of the Metaverse, a simulation of the actual world. This paper explores the current status of intellectual property rights in the Metaverse and examines the challenges and opportunities for enforcement. The article describes virtual assets and investigates their copyright and trademark protection. It also examines the protection of user-generated content in the Metaverse and the potential liability for copyright infringement. The article concludes with a consideration of the technological and jurisdictional obstacles to enforcing intellectual property rights in the Metaverse, as well as possible solutions for stakeholders. This paper will appeal to lawyers, policymakers, developers of virtual assets, platform owners, and anyone interested in the convergence of technology and intellectual property rights. Full Article
sse Risk assessment method of power grid construction project investment based on grey relational analysis By www.inderscience.com Published On :: 2024-07-04T23:20:50-05:00 In view of the low accuracy, long evaluation time, and low efficiency of existing project-investment risk assessment methods, this paper proposes an investment risk assessment method for power grid construction projects based on grey relational analysis. First, the risks of power grid construction projects are classified; second, the primary and secondary indices for investment risk assessment are determined; then, a correlation coefficient matrix of project investment risk is constructed to calculate the relational degree and weight of each investment risk index; finally, the grey relational analysis method is used to construct an investment risk assessment function and realise the assessment. The experimental results show that the average accuracy of the method in evaluating the investment risk of power grid construction projects is 95.08% and the maximum evaluation time is 49 s, demonstrating that the method offers high accuracy, short evaluation time, and high evaluation efficiency. Full Article
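The core grey relational computation can be sketched compactly. This is a simplified, generic version under stated assumptions, not the paper's method: it scores a single comparison sequence against a reference, assumes both sequences are already normalised, weights all indices equally, and uses the conventional distinguishing coefficient ρ = 0.5 (the full method takes the min/max deltas over all comparison sequences and applies the index weights derived from the correlation matrix).

```python
def grey_relational_degree(reference, comparison, rho=0.5):
    """Grey relational degree of one comparison sequence against a reference.
    Sequences are assumed normalised; rho is the distinguishing coefficient."""
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0:          # identical sequences: perfect relation
        return 1.0
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)   # equal index weights for simplicity

# Reference = ideal (lowest-risk) index profile; comparison = one project.
print(grey_relational_degree([1.0, 1.0, 1.0], [0.8, 0.6, 0.9]))  # ≈ 0.75
```

Ranking candidate projects by this degree (higher means closer to the ideal profile) is what turns the relational analysis into a risk assessment function.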
sse Assessing the Impact of Instructional Methods and Information Technology on Student Learning Styles By Published On :: Full Article
sse Development and Validation of an Instrument for Assessing Users’ Views about the Usability of Digital Libraries By Published On :: Full Article
sse An Assessment of Software Project Management Maturity in Mauritius By Published On :: Full Article
sse Towards an Information System Making Transparent Teaching Processes and Applying Informing Science to Education By Published On :: Full Article
sse Development of Scoring Rubrics for Projects as an Assessment Tool across an IS Program By Published On :: Full Article
sse Processes for Ex-ante Evaluation of IT Projects - Case Studies in Brazilian Companies By Published On :: Full Article
sse Will It Work? An Initial Examination of the Processes and Outcomes of Converting Course Materials to CD-ROMs By Published On :: Full Article
sse The Importance of Partnerships: The Relationship between Small Businesses, ICT and Local Communities By Published On :: Full Article
sse Experimenting with eXtreme Teaching Method – Assessing Students’ and Teachers’ Experiences By Published On :: Full Article
sse Blended Proposal of Orientation Scientific Works by Comparison Face-to-Face and Online Processes By Published On :: Full Article
sse Assessment of School Information System Utilization in the UAE Primary Schools By Published On :: Full Article
sse Interweaving Rubrics in Information Systems Program Assessments- Experiences from Action Research at Two Universities By Published On :: Full Article
sse The Development of Students’ Geometrical Thinking through Transformational Processes and Interaction Techniques in a Dynamic Geometry Environment By Published On :: Full Article
sse Highs and Lows of Implementing a Management Strategy Eliminating ‘Free Passengers’ in Group Projects By Published On :: Full Article
sse Managing Information Systems Textbooks: Assessing their Orientation toward Potential General Managers By Published On :: Full Article
sse Name-display Feature for Self-disclosure in an Instant Messenger Program: A Qualitative Study in Taiwan By Published On :: Full Article