
A method for evaluating the quality of college curriculum teaching reform based on data mining

To improve on current approaches to evaluating university teaching reform, a new method for evaluating the quality of university course teaching reform is proposed based on data mining algorithms. First, an optimal data clustering criterion is used to select evaluation indicators and establish a quality evaluation system for university curriculum teaching reform. Next, a reform quality evaluation model is constructed using a BP neural network, whose training process is improved with a genetic algorithm to obtain the optimal model weights and thresholds. Finally, the calculated parameters are substituted into the model to evaluate the quality of university curriculum teaching reform accurately. With evaluation accuracy and evaluation efficiency as metrics, the practicality of the proposed method was verified experimentally. The results show that the method can mine teaching reform data and evaluate the quality of teaching reform: its evaluation accuracy exceeds 96.3% and its evaluation time is under 10 ms, substantially outperforming the comparison methods and demonstrating its practicality.
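The GA-BP combination described above can be sketched as a genetic algorithm searching the weight space of a small feed-forward network. The network size, the synthetic indicator data, and the GA settings below are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic indicator data: 30 samples, 4 evaluation indicators, and a
# quality score that is a weighted mix of the indicators (values assumed)
X = rng.random((30, 4))
y = (X @ np.array([0.4, 0.3, 0.2, 0.1])).reshape(-1, 1)

def forward(w, X):
    """Tiny 4-3-1 feed-forward network; w is a flat genome of all 15 weights."""
    W1, W2 = w[:12].reshape(4, 3), w[12:].reshape(3, 1)
    return np.tanh(X @ W1) @ W2

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)   # higher is better

# Genetic algorithm: elitist selection plus Gaussian mutation
pop = rng.normal(size=(40, 15))
initial_mse = -max(fitness(w) for w in pop)
for _ in range(60):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]            # keep the 10 fittest genomes
    children = elite[rng.integers(0, 10, 30)] + rng.normal(scale=0.1, size=(30, 15))
    pop = np.vstack([elite, children])               # elitism: best never lost

best = max(pop, key=fitness)
final_mse = -fitness(best)
```

In the full method, the fittest genome would seed the BP network's weights and thresholds before backpropagation fine-tuning.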





Evaluation method of teaching reform quality in colleges and universities based on big data analysis

Research on the quality evaluation of teaching reforms plays an important role in promoting improvements in teaching quality. Therefore, an evaluation method of teaching reform quality in colleges and universities based on big data analysis is proposed. A multivariate logistic model is used to select the indicators for evaluating the quality of teaching reforms in universities, and the indicator data are clustered and cleaned through big data analysis. The indicator data serve as input vectors and the teaching reform quality evaluation results as output vectors for a support vector machine model optimised by the whale algorithm, which produces the evaluation results. Experimental results show that the proposed method achieves a minimum recall rate of 98.7% on the indicator data, a minimum data processing time of 96.3 ms, and an accuracy rate consistently above 97.1%.
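The whale optimisation step can be sketched as below. The abstract does not give the authors' SVM setup, so a smooth surrogate `cv_error` (a hypothetical stand-in for SVM cross-validation error over the penalty C and kernel width gamma) is minimised by a minimal whale optimisation algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def cv_error(p):
    """Stand-in objective: in the paper this would be SVM cross-validation
    error over (C, gamma); here a smooth surrogate with minimum at (2.0, 0.5)."""
    C, g = p
    return (C - 2.0) ** 2 + (g - 0.5) ** 2

# Minimal whale optimisation algorithm (WOA): encircling + spiral updates
n, dim, iters = 20, 2, 80
whales = rng.uniform(0, 4, (n, dim))
best = min(whales, key=cv_error).copy()

for t in range(iters):
    a = 2 - 2 * t / iters                       # linearly decreasing coefficient
    for i in range(n):
        r = rng.random(dim)
        A, C = 2 * a * r - a, 2 * rng.random(dim)
        if rng.random() < 0.5:
            if np.all(np.abs(A) < 1):           # exploit: encircle the best whale
                whales[i] = best - A * np.abs(C * best - whales[i])
            else:                               # explore: move toward a random whale
                rand = whales[rng.integers(n)]
                whales[i] = rand - A * np.abs(C * rand - whales[i])
        else:                                   # spiral update toward the best
            D = np.abs(best - whales[i])
            l = rng.uniform(-1, 1)
            whales[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
    cand = min(whales, key=cv_error)
    if cv_error(cand) < cv_error(best):
        best = cand.copy()                      # keep best-so-far (monotone)
```

The best position found would then supply the SVM hyperparameters for the final evaluation model.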





Integrating MOOC online and offline English teaching resources based on convolutional neural network

To shorten the time needed to integrate and share English teaching resources, a MOOC English online and offline blended teaching resource integration model based on convolutional neural networks is proposed. An intelligent integration model for MOOC English online and offline blended teaching resources is constructed on this basis. The intelligent integration unit uses an Arduino device recognition program based on a convolutional neural network to classify the blended teaching resources. Based on the classification results, an online and offline blended English teaching resource library for Arduino device MOOCs is built, achieving intelligent integration of teaching resources. The experimental results show that with a regularisation coefficient of 0.00002, the convolutional neural network model trains best and converges fastest, and the method's resource integration time does not exceed 2 s.





Prediction method of college students' achievements based on learning behaviour data mining

This paper proposes a method for predicting college students' performance based on learning behaviour data mining, addressing the issue of limited sample sizes degrading prediction accuracy. The method uses the K-means clustering algorithm to mine learning behaviour data, employing a density-based approach to determine optimal cluster centres, which are output as the clustering results. These results feed an attention-based encoder-decoder model that extracts features from the learning behaviour sequence through an attention mechanism, a sequence feature generator, and a decoder. The extracted sequence features are then used to build a prediction model for college students' performance using support vector regression. Experimental results demonstrate that, by leveraging the learning behaviour data mining results, the method predicts students' performance with a relative error below 4%.
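The clustering stage can be sketched as follows, assuming toy two-dimensional behaviour features and a simple density heuristic (count of neighbours within a radius) standing in for the paper's density-based centre selection.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy learning-behaviour features: two well-separated groups of students
X = np.vstack([rng.normal(0, 0.2, (40, 2)), rng.normal(5, 0.2, (40, 2))])

def density_init(X, k, radius=1.5):
    """Pick initial centres as high-density points that are mutually distant,
    a simple stand-in for the paper's density-based centre selection."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    density = (d < radius).sum(axis=1)
    centres = [X[np.argmax(density)]]
    for i in np.argsort(-density):          # scan points from densest down
        if all(np.linalg.norm(X[i] - c) > radius for c in centres):
            centres.append(X[i])
        if len(centres) == k:
            break
    return np.array(centres)

def kmeans(X, k, iters=20):
    centres = density_init(X, k)
    for _ in range(iters):                  # standard Lloyd iterations
        labels = np.argmin(np.linalg.norm(X[:, None] - centres[None], axis=2), axis=1)
        centres = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centres

labels, centres = kmeans(X, 2)
```

The cluster assignments would then be fed to the encoder-decoder feature extractor.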





A method for evaluating the quality of teaching reform based on fuzzy comprehensive evaluation

To improve the comprehensiveness of evaluation results and reduce errors, a teaching reform quality evaluation method based on fuzzy comprehensive evaluation is proposed. First, subject to the principles of indicator selection, factor analysis is used to construct an evaluation indicator system. Then, the weights of the evaluation indicators are calculated through fuzzy entropy, a fuzzy evaluation matrix is established, and the weight vector of the evaluation indicators is computed. Finally, the fuzzy cognitive map method is introduced to improve the fuzzy comprehensive evaluation method and obtain the final indicator weights, which are multiplied by the fuzzy evaluation matrix to produce the comprehensive evaluation result. The experimental results show that the maximum relative error of the method's evaluation results is about 2.0, the average comprehensive evaluation result is 92.3, and the coefficient of determination is close to 1, verifying the method's effectiveness.
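The final synthesis step, weight vector times fuzzy evaluation matrix, can be illustrated with assumed numbers; the membership values and entropy-derived weights below are invented for the example.

```python
import numpy as np

# Membership matrix R: 4 indicators x 3 grades (good / fair / poor),
# illustrative values with each row summing to 1
R = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
    [0.4, 0.4, 0.2],
])

# Indicator weights (assumed here; the paper derives them via fuzzy entropy)
w = np.array([0.35, 0.25, 0.25, 0.15])

B = w @ R                    # weighted synthesis operator M(., +)
B = B / B.sum()              # normalise the grade membership vector
grade = ["good", "fair", "poor"][int(np.argmax(B))]
```

The grade with the largest membership gives the overall evaluation result.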





Constitutional and international legal framework for the protection of genetic resources and associated traditional knowledge: a South African perspective

The value and utility of traditional knowledge in conserving and commercialising genetic resources are increasingly becoming apparent due to advances in biotechnology and bioprospecting. However, in the absence of an international legally binding instrument within the WIPO system, traditional knowledge associated with genetic resources does not enjoy the same level of protection as other forms of intellectual property. As a result, indigenous peoples and local communities (IPLCs) do not benefit from the commercial exploitation of these resources. The efficacy of domestic tools in protecting traditional knowledge and balancing the rights of IPLCs against intellectual property rights (IPRs) is still debated. This paper employs a doctrinal research methodology based on desktop research of international and regional law instruments and the Constitution of the Republic of South Africa, 1996, to determine the basis for balancing the protection of genetic resources and associated traditional knowledge with the competing interests of IPLCs and IPRs in South Africa.





Multiplication complexity in education activities with fair use principle of copyright in Indonesia

Copying and duplicating papers for educational purposes is a form of copyright violation in Indonesia, and the fair use principle in education has become a form of structured violation. Copying and duplicating authors' papers for educational purposes has provided commercial (business) benefits for libraries and universities. The research uses the observation method in libraries and universities that duplicate papers, together with the normative juridical method, which relates the duplication of papers in libraries and universities to the fair use principle. The results explain the losses authors suffer from the copying and duplication of papers in libraries and universities. Therefore, copying and duplication should only be carried out under a system of responsibility, and copying and duplicating authors' papers in libraries and universities can be allowed only if the elements of copyright protection under the new concept are fulfilled.





Intellectual property management in technology management: a comprehensive bibliometric analysis during 2000-2022

There are many existing academic studies on the development, protection and operation of intellectual property management (IPM). This study therefore provides a comprehensive bibliometric analysis to give scholars a clearer understanding of the evolution and development of IP management research from 2000 to 2022, helping them discern the expanding IPM research field from a multidimensional perspective. The database used is the Web of Science Core Collection. After keyword retrieval and the use of tools such as CiteSpace, VOSviewer, Bibliometrix and HistCite, 1,033 documents were refined for the bibliometric analysis and the production of graphs. The findings indicate that the US is a highly active country/region in the field of IP management research, and its expanding IP management research is branching out into other disciplines. The study also presents future directions and possible challenges for IPM in technology management.





Intellectual property protection for virtual assets and brands in the Metaverse: issues and challenges

Intellectual property rights face new obstacles and possibilities as a result of the emergence of the Metaverse, a simulation of the actual world. This paper explores the current status of intellectual property rights in the Metaverse and examines the challenges and opportunities for enforcement. The article describes virtual assets and investigates their copyright and trademark protection. It also examines the protection of user-generated content in the Metaverse and the potential liability for copyright infringement. The article concludes with a consideration of the technological and jurisdictional obstacles to enforcing intellectual property rights in the Metaverse, as well as possible solutions for stakeholders. This paper will appeal to lawyers, policymakers, developers of virtual assets, platform owners, and anyone interested in the convergence of technology and intellectual property rights.





Emotional intelligence and managerial leadership in the fast moving consumer durable goods industry in India's perspective

The dynamic nature of the FMCG sector perpetually poses a tricky challenge for organisational leaders in nurturing their employees. High demand for products, short shelf lives and tough competitors constantly challenge leaders to sustain their products in the market. With technology and e-commerce, many new competitors have entered the market, vying with the industry's veterans; because their business models closely match client needs, these firms are expected to boost FMCG industry income in the future. Managers' leadership styles depend substantially on emotional intelligence. This quantitative study examines how emotional intelligence influences the leadership styles of senior FMCG managers in West Bengal. A sample of 500 FMCG managers was selected and analysed using PLS-SEM. Emotionally competent leaders adopt transactional or transformational leadership styles depending on the occasion. Managers' transactional leadership style is strongly influenced by their sympathetic awareness, as shown by a path coefficient of 0.755, while transformational leadership has a path coefficient of 0.693, indicating that managers' empathy affects their organisational management. Thus, sympathetic awareness and emotion regulation predict good managerial leadership.





An evaluation of English distance information teaching quality based on decision tree classification algorithm

To overcome the low accuracy and long evaluation times of traditional teaching quality evaluation methods, an English distance information teaching quality evaluation method based on a decision tree classification algorithm is proposed. First, teaching quality evaluation indicators are constructed for different roles. Second, the information gain measure of the decision tree classification algorithm is used to partition the attributes of teaching resources. Finally, rough set theory is used to calculate the indicator weights and establish the evaluation index factor set, and the teaching quality evaluation result is obtained through the fuzzy comprehensive evaluation method. The experimental results show that the method's teaching quality evaluation accuracy reaches 99.2%, its recall rate for English information teaching quality evaluation is 99%, and its evaluation time is only 8.9 seconds.
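The information gain computation used for attribute partitioning can be sketched directly; the toy teaching-resource records are invented for illustration.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(attribute, labels):
    """Entropy reduction from splitting `labels` by `attribute` values."""
    gain = entropy(labels)
    n = len(labels)
    for v in set(attribute):
        idx = [i for i, a in enumerate(attribute) if a == v]
        gain -= len(idx) / n * entropy([labels[i] for i in idx])
    return gain

# Toy teaching-resource records: the attribute perfectly predicts the class,
# so the gain equals the full class entropy (1 bit)
attr = ["video", "video", "text", "text"]
cls  = ["high",  "high",  "low",  "low"]
gain = information_gain(attr, cls)
```

A decision tree splits on the attribute with the largest gain at each node.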





Quantitative evaluation method of ideological and political teaching achievements based on collaborative filtering algorithm

To overcome the large errors, low accuracy and long evaluation times of traditional methods for evaluating ideological and political education, this paper designs a quantitative evaluation method of ideological and political teaching achievements based on a collaborative filtering algorithm. First, an evaluation index system is constructed to partition the teaching achievement indicator data at a small scale; then the quantised dataset is determined and the quantised indicator weights are calculated; finally, the collaborative filtering algorithm generates a high-similarity set, builds a target indicator recommendation list, constructs a quantitative evaluation function, and solves for the function value to complete the quantitative evaluation of teaching achievements. The results show that the method's evaluation error is only 1.75%, its accuracy reaches 98%, and its time consumption is only 2.0 s, demonstrating an improved quantitative evaluation effect.
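The collaborative filtering core, scoring similarity between indicator vectors and building a recommendation list, might look like the following sketch with invented score data.

```python
import numpy as np

# Rows: teaching-achievement indicators; columns: quantised scores per cohort
# (illustrative data, not the paper's)
ratings = np.array([
    [5.0, 4.0, 1.0, 0.0],
    [4.0, 5.0, 0.0, 1.0],
    [1.0, 0.0, 5.0, 4.0],
])

def cosine_sim(a, b):
    """Cosine similarity between two score vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Similarity of every indicator to indicator 0
sims = np.array([cosine_sim(ratings[0], r) for r in ratings])

# Recommendation list: most similar indicators first, excluding indicator 0
recommend = list(np.argsort(-sims)[1:])
```

The high-similarity set feeds the quantitative evaluation function in the final step.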





The performance evaluation of teaching reform based on hierarchical multi-task deep learning

To solve the low accuracy and long running times of traditional teaching reform performance evaluation methods, a performance evaluation method for teaching reform based on hierarchical multi-task deep learning is proposed. An evaluation indicator system is first constructed according to established indicator-selection principles. The weight of each evaluation indicator is calculated through the analytic hierarchy process, and the resulting weights are taken as the model's input samples. A hierarchical multi-task deep learning model for teaching reform performance evaluation is then built to produce the final teaching reform performance score. Experiments show that, compared with the baseline methods, this method offers higher evaluation accuracy and shorter evaluation times and can be applied further in related fields.
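The analytic hierarchy process step can be illustrated as below; the pairwise comparison judgements are assumed, not taken from the paper.

```python
import numpy as np

# Pairwise comparison matrix for three evaluation indicators (assumed
# judgements: indicator 1 is 3x as important as 2, 5x as important as 3, ...)
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# The principal eigenvector of A gives the AHP weight vector
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w = w / w.sum()

# Consistency ratio CR = (lambda_max - n) / (n - 1) / RI, with RI(n=3) = 0.58;
# CR < 0.1 means the judgements are acceptably consistent
lam = vals.real[k]
CR = (lam - 3) / 2 / 0.58
```

These weights would then form the input samples of the deep learning model.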





Research on evaluation method of e-commerce platform customer relationship based on decision tree algorithm

To overcome the poor accuracy and long evaluation times of traditional customer relationship evaluation methods, this study proposes a new customer relationship evaluation method for e-commerce platforms based on a decision tree algorithm. First, the connotation and characteristics of customer relationships are analysed. Second, the importance of customer relationships on an e-commerce platform is determined with a decision tree algorithm, selecting and partitioning attributes according to their information gain. Finally, the decision tree algorithm is used to design the classifier, weighted sampling is used to obtain training samples for the base classifier, and the multi-period excess income method is used to construct the customer relationship evaluation function, achieving customer relationship evaluation. The experimental results show that the accuracy of the method's customer relationship evaluation is 99.8% and its evaluation time is only 51 minutes.





Online allocation of teaching resources for ideological and political courses in colleges and universities based on differential search algorithm

To improve the classification and online allocation accuracy of teaching resources and shorten allocation time, this paper proposes a new online allocation method for the teaching resources of college ideological and political courses based on the differential search algorithm. First, a feedback parameter model for cleaning the teaching resources is constructed to complete the cleaning. Second, taking anti-interference requirements into account, linear features of the course teaching resources are extracted. Finally, an objective function for the online allocation of the teaching resources is constructed and optimised with the differential search algorithm to complete the allocation. The experimental results show that the method classifies the teaching resources of ideological and political courses accurately and shortens the allocation time, with a peak allocation accuracy of 97%.





Evaluation method of cross-border e-commerce supply chain innovation mode based on blockchain technology

Given the low accuracy with which the effectiveness of cross-border e-commerce supply chain innovation modes is evaluated and the low correlation coefficients of their influencing factors, an evaluation method for cross-border e-commerce supply chain innovation modes based on blockchain technology is studied. First, the operation mode of the cross-border e-commerce supply chain is analysed and the key factors affecting the innovation mode are identified. Then, the comprehensive integrated weighting method is used to analyse the influencing factors and calculate their weights. Finally, blockchain technology is introduced to build an evaluation model for the supply chain innovation mode and realise its evaluation. The experimental results show that the proposed method evaluates with high accuracy, and the highest correlation coefficient among the influencing factors of the innovation mode is about 0.99, demonstrating feasibility.





Risk assessment method of power grid construction project investment based on grey relational analysis

To address the low accuracy, long running times and low efficiency of existing engineering investment risk assessment methods, this paper puts forward an investment risk assessment method for power grid construction projects based on grey relational analysis. First, the risks of power grid construction projects are classified. Second, the primary and secondary indices for investment risk assessment are determined. Next, the correlation coefficient matrix of project investment risk is constructed to calculate the correlation degree and weight of each investment risk index. Finally, grey relational analysis is used to construct the investment risk assessment function and carry out the assessment. The experimental results show that the method's average accuracy in evaluating the investment risk of power grid construction projects is 95.08% and its maximum running time is 49 s, demonstrating high accuracy, short running time and high evaluation efficiency.
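The grey relational coefficients and grades can be computed as in this sketch, with invented normalised indicator values and the conventional distinguishing coefficient rho = 0.5.

```python
import numpy as np

# Reference series: ideal (best) value of each risk indicator
ref = np.array([1.0, 1.0, 1.0, 1.0])

# Comparison series: three projects' normalised indicator values (assumed)
X = np.array([
    [0.9, 0.8, 0.7, 0.9],
    [0.5, 0.6, 0.4, 0.5],
    [0.8, 0.9, 0.6, 0.7],
])

rho = 0.5                                        # distinguishing coefficient
diff = np.abs(X - ref)                           # absolute differences
dmin, dmax = diff.min(), diff.max()
xi = (dmin + rho * dmax) / (diff + rho * dmax)   # grey relational coefficients
grades = xi.mean(axis=1)                         # grey relational grade per project
ranking = list(np.argsort(-grades))              # closest to the ideal first
```

Higher grades indicate projects closer to the ideal, i.e. lower investment risk.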





Student's classroom behaviour recognition method based on abstract hidden Markov model

To improve the normalised mutual information, accuracy and recall of student classroom behaviour recognition, this paper proposes a student classroom behaviour recognition method based on an abstract hidden Markov model (HMM). After the students' classroom behaviour data are cleaned, data quality is improved through interpolation and standardisation, and the types of classroom behaviour are categorised. An abstract HMM is then used to calculate the output probability density within a support vector machine. Finally, behaviour categories are judged according to the characteristic intervals of classroom behaviour. Experiments show that the method's normalised mutual information (NMI) index approaches one and its maximum AUC-PR reaches 0.82, indicating that it identifies students' classroom behaviour more effectively and reliably.





A data mining method based on label mapping for long-term and short-term browsing behaviour of network users

To improve the speedup and recognition accuracy of the recognition process, this paper designs a data mining method based on label mapping for the long- and short-term browsing behaviour of network users. First, after noise is removed from the behaviour sequences, the similarity of behaviour characteristics is calculated. Then, multi-source behaviour data are mapped to the same dimension, and a behaviour label mapping layer and a behaviour data mining layer are established. Finally, the similarity of the label matrix is computed from the similarity results, and the mining results are output through an SVM binary classification process. Experimental results show that the method's speedup exceeds 0.9 and its area under the receiver operating characteristic curve (AUC-ROC) rises rapidly, reaching a maximum of 0.95, indicating high mining precision.





Research on fast mining of enterprise marketing investment data based on improved association rules

Because traditional methods for mining enterprise marketing investment data suffer from low precision and slow speed, a fast mining method for enterprise marketing investment data based on improved association rules is proposed. First, the enterprise marketing investment data are collected through a crawler framework and cleaned. Then, features are extracted from the cleaned data and the correlation between features is calculated. Finally, based on these results, all data items are used as constraints to reduce the number of frequent itemsets, and a pruning strategy is designed in advance. Combining the constraints, the Apriori association rule algorithm is improved, and the improved algorithm computes all frequent itemsets to obtain fast mining results for enterprise marketing investment data. The experimental results show that the proposed method mines enterprise marketing investment data quickly and accurately.
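Before the constraint and pruning improvements, the baseline Apriori pass over frequent itemsets works as in this sketch; the transactions are invented marketing-channel labels, not the paper's data.

```python
from itertools import combinations

# Toy marketing-investment transactions (illustrative)
transactions = [
    {"search_ads", "social", "email"},
    {"search_ads", "social"},
    {"search_ads", "email"},
    {"social", "email"},
    {"search_ads", "social", "email"},
]
min_support = 3  # absolute count threshold

def apriori(transactions, min_support):
    """Classic Apriori: extend frequent k-itemsets to (k+1)-candidates and
    prune every candidate whose support falls below the threshold."""
    items = {i for t in transactions for i in t}
    freq, k_sets = {}, [frozenset([i]) for i in items]
    while k_sets:
        counts = {s: sum(s <= t for t in transactions) for s in k_sets}
        current = {s: c for s, c in counts.items() if c >= min_support}
        freq.update(current)
        # candidate generation: union pairs of frequent k-itemsets
        keys = list(current)
        k_sets = list({a | b for a, b in combinations(keys, 2)
                       if len(a | b) == len(a) + 1})
    return freq

freq = apriori(transactions, min_support)
```

The paper's improvement would add item constraints and an up-front pruning strategy to shrink the candidate sets before counting.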





An evaluation of customer trust in e-commerce market based on entropy weight analytic hierarchy process

To address the large generalisation error, low recall and low retrieval accuracy of customer evaluation information in traditional trust evaluation methods, an evaluation method for customer trust in the e-commerce market based on the entropy weight analytic hierarchy process was designed. First, an evaluation index system for customer trust in the e-commerce market is built. Second, the customer trust matrix is established, and the indicator weights are calculated using the analytic hierarchy process and the entropy weight method. Finally, a five-point Likert scale is used to analyse the indicator factors and establish a comment set, and the trust evaluation value is obtained by combining the indicator memberships. Experiments show that the method's maximum generalisation error is only 0.029, its recall rate is 97.5%, and its retrieval accuracy for customer evaluation information is close to 1.
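The entropy weight part of the method can be sketched as follows, with an assumed normalised decision matrix: indicators whose values vary more across customers carry more information and receive larger weights.

```python
import numpy as np

# Decision matrix: 4 customers x 3 trust indicators, normalised to (0, 1]
# (values assumed for illustration)
X = np.array([
    [0.9, 0.4, 0.8],
    [0.7, 0.9, 0.6],
    [0.4, 0.5, 0.9],
    [0.8, 0.6, 0.7],
])

P = X / X.sum(axis=0)                            # proportion per indicator
n = X.shape[0]
E = -(P * np.log(P)).sum(axis=0) / np.log(n)     # entropy of each indicator
w = (1 - E) / (1 - E).sum()                      # entropy weights (sum to 1)
scores = X @ w                                   # weighted trust score per customer
```

In the full method these entropy weights would be combined with the AHP weights before computing the trust evaluation value.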





Study on marketing strategy innovation of mobile payment service under internet environment

To overcome the low efficiency, low user satisfaction and poor customer growth of traditional marketing strategies, this paper studies innovative marketing strategies for mobile payment services in the internet environment. First, the current state of mobile payment marketing in the internet environment is studied: mobile payment business data are obtained through a questionnaire survey, and the problems in mobile payment marketing are analysed. Second, user profiles for mobile payment marketing are built, and user attributes, consumption characteristics and user activity are classified with the K-means clustering method. Finally, the marketing strategy is innovated in three respects: product marketing, pricing marketing and channel marketing. The results show that after applying this strategy, marketing revenue reached 19.52 million yuan, user satisfaction reached 98.9%, and the customer growth rate reached 21.3%, improving the marketing performance of the mobile payment business.





Auditing the Performing Rights Society - investigating a new European Union Collective Management Organization member audit method

The European Union Rights Management Directive 2014/26/EU provides regulatory oversight of European Union (EU) Collective Management Organizations (CMOs). However, the Directive contains no provision indicating how members of EU CMOs may conduct non-financial audits of their CMO income and reporting. This paper addresses the lack of an audit method through a case study of the five writer members of the music group Duran Duran, who have been members of the UK's CMO for performing rights, the Performing Rights Society (PRS), for over 35 years. The paper argues for a new CMO member audit method that can address the lacunae regarding both the absence of a CMO member's right to audit a CMO and the absence of an applicable CMO audit method.





National ICT policy challenges for developing countries: a grounded theory informed literature review

This paper presents a review of the literature on the challenges of national information and communication technology (ICT) policies in the context of African countries. National ICT policies have been aligned with the socio-development agendas of African countries. However, the policies have not delivered the expected outcomes due to many challenges. Studies have been conducted in isolation to highlight the challenges in the policy process. This study used a grounded theory informed literature review to holistically analyse the problems in the context of African countries. The results were categorised in the typology of the policy process to understand the challenges from a broad perspective. The problems were categorised into agenda setting, policy formulation, legal frameworks, implementation and evaluation. In addition, there were constraints related to policy monitoring across the policy phases and an imbalance of power among the policy stakeholders. The review suggests areas of further research.





A survey on predicting at-risk students through learning analytics

This paper analyses the adoption of learning analytics to predict at-risk students. A total of 233 research articles between 2004 and 2023 were collected from Scopus for this study. They were analysed in terms of the relevant types and sources of data, targets of prediction, learning analytics methods, and performance metrics. The results show that data related to students' academic performance, socio-demographics, and learning behaviours have been commonly collected. Most studies have addressed the identification of students who have a higher chance of poor academic performance or dropping out of their courses. Decision trees, random forests, and artificial neural networks are the most frequently used techniques for prediction, with ensemble methods gaining popularity in recent years. Classification accuracy, recall, sensitivity, and true positive rate are commonly used as performance metrics for evaluation. The results reveal the potential of learning analytics for informing timely and evidence-based support for at-risk students.





International Journal of Innovation and Learning





Transformative advances in volatility prediction: unveiling an innovative model selection method using exponentially weighted information criteria

Using information criteria is a common way to decide which model to use for forecasting. Many measures exist for evaluating forecasting models, such as MAE, RMSE, MAPE, and Theil's U, among others. Of the criteria developed to date (AIC, AICc, HQ, BIC, and BICc), the two that have become the most popular and commonly utilised are Akaike's IC and the Bayesian IC. In this investigation, we innovate by applying exponential weighting to the log-likelihood underlying the information criteria used for model selection: we propose assigning greater weight to more recent data to reflect their greater relevance. All research data are daily observations from major stock markets in the USA (GSPC, DJI), Europe (FTSE 100, AEX, and FCHI), and Asia (Nikkei).
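The proposed exponential weighting of the log-likelihood can be sketched for a simple Gaussian model; the decay factor, the constant-volatility "models", and the synthetic return series are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily returns standing in for an index series
r = rng.normal(0, 1, 500)

def ew_log_likelihood(resid, sigma2, lam=0.99):
    """Gaussian log-likelihood with exponential weights: observation t
    (t = 0 oldest) gets weight lam**(T-1-t), so recent data count more."""
    T = len(resid)
    w = lam ** np.arange(T - 1, -1, -1)
    w = w * T / w.sum()                      # rescale so weights sum to T
    ll = -0.5 * w * (np.log(2 * np.pi * sigma2) + resid ** 2 / sigma2)
    return ll.sum()

def ew_aic(resid, sigma2, k, lam=0.99):
    """AIC built from the exponentially weighted log-likelihood."""
    return 2 * k - 2 * ew_log_likelihood(resid, sigma2, lam)

# Compare two constant-volatility models: the correctly scaled variance
# should achieve the lower (better) weighted AIC
aic_good = ew_aic(r, np.var(r), k=1)
aic_bad  = ew_aic(r, 4 * np.var(r), k=1)
```

Setting lam = 1 recovers the ordinary unweighted log-likelihood, so the weighted criterion nests the classical AIC.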





A study on value chain of mushroom for value addition: challenges, opportunities and prospects of cultivation of mushroom

This research was carried out with the objectives of studying the existing mushroom value chain, identifying the demand-supply gap, carrying out a SWOT analysis to explore challenges, proposing an action plan, and finally presenting a standard operating procedure for enhancing value chain effectiveness. Data were collected from 71 actors identified in the oyster mushroom value chain in Tumakuru Taluk, Karnataka State, India, and analysed. The analysis showed five different value chain models, of which the shortest value chain was the most profitable. Based on the respondents' perceptions, mushroom cultivation offers many opportunities, such as creating employment and improving economic conditions and diet. Meanwhile, growers face challenges such as pest attacks, rising input material prices, a lack of technical guidance during farming, limited financial support, and an inefficient marketing system. There is a need to address the demand-supply gap, invest more in facilities and related research, and integrate all the actors in the value chain to enhance productivity.





Visualizing Research Data Records for their Better Management

As academia in general, and research funders in particular, place ever greater importance on data as an output of research, so the value of good research data management practices becomes ever more apparent. In response to this, the Innovative Design and Manufacturing Research Centre (IdMRC) at the University of Bath, UK, with funding from the JISC, ran a project to draw up a data management planning regime. In carrying out this task, the ERIM (Engineering Research Information Management) Project devised a visual method of mapping out the data records produced in the course of research, along with the associations between them. This method, called Research Activity Information Development (RAID) Modelling, is based on the Unified Modelling Language (UML) for portability. It is offered to the wider research community as an intuitive way for researchers both to keep track of their own data and to communicate this understanding to others who may wish to validate the findings or re-use the data.





Preserving and delivering audiovisual content integrating Fedora Commons and MediaMosa

The article describes the integrated adoption of Fedora Commons and MediaMosa for managing a digital repository. The integration was trialled during the development of a cooperative project, the Sapienza Digital Library (SDL). The functionalities of the two applications were exploited to build a weaving factory useful for archiving, preserving and disseminating multi-format, multi-protocol audio and video content in different fruition contexts. The integration was achieved by means of both repository-to-repository interaction and the mapping of the video Content Model's disseminators to MediaMosa's RESTful services. The outcomes of this integration will allow more flexible management of the dissemination services and reduce the overproduction of different dissemination formats.




v

Sheer Curation of Experiments: Data, Process, Provenance

This paper describes an environment for the “sheer curation” of the experimental data of a group of researchers in the fields of biophysics and structural biology. The approach involves embedding data capture and interpretation within researchers' working practices, so that it is automatic and invisible to the researcher. The environment does not capture just the individual datasets generated by an experiment, but the entire workflow that represents the “story” of the experiment, including intermediate files and provenance metadata, so as to support the verification and reproduction of published results. As the curation environment is decoupled from the researchers’ processing environment, the provenance is inferred from a variety of domain-specific contextual information, using software that implements the knowledge and expertise of the researchers. We also present an approach to publishing the data files and their provenance according to linked data principles by using OAI-ORE (Open Archives Initiative Object Reuse and Exchange) and OPMV.




v

Building the Hydra Together: Enhancing Repository Provision through Multi-Institution Collaboration

In 2008 the University of Hull, Stanford University and the University of Virginia decided to collaborate with Fedora Commons (now DuraSpace) on the Hydra project. This project has sought to define and develop repository-enabled solutions for multiple digital content management needs that are multi-purpose and multi-functional, in such a way as to allow their use across multiple institutions. This article describes the evolution of Hydra as a project, but most importantly as a community that can sustain the outcomes from Hydra and develop them further. The data modelling and technical implementation are touched on in this context, and examples of the Hydra heads in development or production are highlighted. Finally, the benefits of working together, and having worked together, are explored as a key element in establishing a sustainable open source solution.




v

Beyond The Low Hanging Fruit: Data Services and Archiving at the University of New Mexico

Open data is becoming increasingly important in research. While individual researchers are slowly becoming aware of its value, funding agencies are taking the lead by requiring that data be made available, and also by requiring data management plans to ensure the data is available in a usable form. Some journals also require that data be made available. However, in most cases, “available upon request” is considered sufficient. We describe a number of historical examples of data use and discovery, then describe two current test cases at the University of New Mexico. The lessons learned suggest that an institutional data services program needs not only to facilitate fulfilling the mandates of granting agencies but also to realize the true value of open data. Librarians and institutional archives should actively collaborate with their researchers. We should also work to find ways to make open data enhance a researcher's career. In the long run, better quality data and metadata will result if researchers are engaged and willing participants in the dissemination of their data.




v

Kindura: Repository services for researchers based on hybrid clouds

The paper describes the investigations and outcomes of the JISC-funded Kindura project, which is piloting the use of hybrid cloud infrastructure to provide repository-focused services to researchers. The hybrid cloud services integrate external commercial cloud services with internal IT infrastructure, which has been adapted to provide cloud-like interfaces. The system provides services to manage and process research outputs, primarily focusing on research data. These services include both repository services, based on use of the Fedora Commons repository, and common services such as preservation operations that are provided by cloud compute services. Kindura is piloting the use of DuraCloud, open source software developed by DuraSpace, to provide a common interface for interacting with cloud storage and compute providers. A storage broker integrates with DuraCloud to optimise the usage of available resources, taking into account such factors as cost, reliability, security and performance. The development is focused on the requirements of target groups of researchers.




v

CLIF: Moving repositories upstream in the content lifecycle

The UK JISC-funded Content Lifecycle Integration Framework (CLIF) project has explored the management of digital content throughout its lifecycle from creation through to preservation or disposal. Whilst many individual systems offer the capability of carrying out lifecycle stages to varying degrees, CLIF recognised that only by facilitating the movement of content between systems could the full lifecycle take advantage of systems specifically geared towards different stages of the digital lifecycle. The project has also placed the digital repository at the heart of this movement and has explored this through carrying out integrations between Fedora and Sakai, and Fedora and SharePoint. This article will describe these integrations in the context of lifecycle management and highlight the issues discovered in enabling the smooth movement of content as required.




v

REDDNET and Digital Preservation in the Open Cloud: Research at Texas Tech University Libraries on Long-Term Archival Storage

In the realm of digital data, vendor-supplied cloud systems still leave the user with responsibility for the curation of digital data. Some of the very tasks users thought they were delegating to the cloud vendor may be a requirement for users after all; for example, cloud vendors most often require that users maintain archival copies. Beyond the better known vendor cloud model, we examine curation in two other models: in-house clouds, and what we call "open" clouds, which are neither in-house nor vendor clouds. In open clouds, users come aboard as participants or partners, for example by invitation. In open cloud systems users can develop their own software and data management, control access, and purchase their own hardware while running securely in the cloud environment. Doing so still requires working within the rules of the cloud system, but in some open cloud systems those restrictions and limitations can be worked around easily, with surprisingly little loss of freedom. It is in this context that REDDnet (Research and Education Data Depot network) is presented as the place where the Texas Tech University (TTU) Libraries have been conducting research on long-term digital archival storage. By year's end the REDDnet network will be at 1.2 petabytes (PB), with an additional 1.4 PB for a related project (Compact Muon Solenoid Heavy Ion [CMS-HI]); additionally, there are over 200 TB of tape storage. These numbers exclude any disk space which TTU will be purchasing during the year. National Science Foundation (NSF) funding covering REDDnet and CMS-HI was in excess of $850,000, with $850,000 earmarked for REDDnet. In the terminology used above, REDDnet is an open cloud system that invited TTU Libraries to participate; this means that we run software which fits the REDDnet structure. We are beginning to complete the final design of our system and starting to move into the first stages of construction. We have also decided to move forward and purchase one-half petabyte of disk storage in the initial phase. The concerns, deliberations and testing are presented here, along with our initial approach.




v

Repository as a Service (RaaS)

In his oft-quoted seminal paper ‘Institutional Repositories: Essential Infrastructure For Scholarship In The Digital Age’, Clifford Lynch (2003) described the institutional repository as “a set of services that a university offers to the members of its community for the management and dissemination of digital materials created by the institution and its community members.” This paper seeks instead to define the repository service at a more primitive level, without the specialism of being an ‘Institutional Repository’, and looks at how it can be viewed as providing a service within appropriate boundaries, and at what that could mean for the future development of repositories, our expectations of what repositories should be, and how they could fit into the set of services required to deliver an institutional repository service as described by Lynch.




v

Document Viewers for Non-Born-Digital Files in DSpace

As more institutions continue to work with large and diverse types of content for their digital repositories, there is an inherent need to evaluate, prototype, and implement user-friendly websites, regardless of the digital files' size, format, location or the content management system in use. This article provides an overview of the need for, and current development of, document viewers for digitized objects in DSpace repositories, including a local viewer developed for a newspaper collection and four other viewers currently implemented in DSpace repositories. According to the DSpace Registry, 22% of institutions are currently storing "Images" in their repositories and 21% are using DSpace for non-traditional IR content such as: Image Repository, Subject Repository, Museum Cultural, or Learning Resources. The combination of current technologies such as the Djatoka Image Server, IIPImage Server, DjVu Libre, and the Internet Archive BookReader, as well as the growing number of digital repositories hosting digitized content, suggests that the DSpace community would benefit from an "out-of-the-box" document viewer, especially one for large, high-resolution, multi-page objects.




v

Mobile wallet payments - a systematic literature review with bibliometric and network visualisation analysis over two decades

The study aims to review the literature on mobile wallet payments and to map research trends using a systematic literature review with bibliometric and network visualisation analysis over two decades. It uses bibliometric analysis of the research literature retrieved from the Web of Science database. The study period was from 2001 to 2021, covering 1,134 research papers. The study also provides indicators such as citation trends, cited reference patterns, authorship patterns, subject areas publishing on the mobile wallet, top contributing authors, and highly cited research articles. Furthermore, network visualisation analyses, such as the co-occurrence of author keywords and Keywords Plus terms, have been examined using the VOSviewer software. The bibliometric analysis shows that the Republic of China dominates mobile wallet payment research, and that India is a significant contributor. Furthermore, the construction of network maps using co-citation analysis and bibliographic coupling shows an interesting pattern in mobile wallet payment research.




v

Agricultural informatics: emphasising potentiality and proposed model on innovative and emerging Doctor of Education in Agricultural Informatics program for smart agricultural systems

Universities internationally are changing their style of operation and their modes of teaching and learning. This change is rapidly noticeable in India and in international contexts, owing to healthy and innovative methods, educational strategies, and nomenclature throughout the world. Technologies, including ICT, are changing rapidly. Different subjects have developed in the fields of IT and computing through interaction with, or application to, other fields, viz. health informatics, bioinformatics, agricultural informatics, and so on. Agricultural informatics is an interdisciplinary subject dedicated to applying information technology and information science in the agricultural sciences; digital agriculture is powered by agricultural informatics practice. Educational methods are important for the teaching, research and development of any subject, and various educational programmes exist in this regard, viz. Bachelor of Education, Master of Education, PhD in Education, etc. Degrees are also available for particular subjects, and agricultural informatics should be no exception. In this context, the Doctor of Education (EdD or DEd) is an emerging degree combining skill sets, coursework and research work. This paper proposes an EdD programme with an agricultural informatics specialisation for improving healthy agricultural systems. A proposed model core curriculum is also presented.




v

Cognitive biases in decision making during the pandemic: insights and viewpoint from people's behaviour

In this article, we have attempted to study the ways in which the COVID-19 pandemic has progressively grown and impacted the world. The authors integrate knowledge from the cognitive psychology literature to illustrate how the limitations of the human mind might have played a critical role in the decisions taken during the COVID-19 pandemic. The authors show the correlation between different biases in the various contexts involved in the COVID-19 pandemic and highlight the ways in which we can nudge ourselves and the various stakeholders involved in the decision-making process. This study uses a typology of biases to examine how different patterns of biases affect people's decision-making behaviour during the pandemic. The presented model investigates the potential interrelations among environmental transformations, cognitive biases, and strategic decisions. By referring to cognitive biases, our model also helps to explain why the same performance improvement practices might incite different opinions among decision-makers.




v

Performance improvement in inventory classification using the expectation-maximisation algorithm

Multi-criteria inventory classification (MCIC) is popularly used to aid managers in categorising inventory. Researchers have used numerous mathematical models and approaches, but few have resorted to unsupervised machine-learning techniques to address MCIC. This study uses the expectation-maximisation (EM) algorithm to estimate the parameters of the Gaussian mixture model (GMM), a popular unsupervised machine-learning method, for ABC inventory classification. The EM-GMM algorithm is sensitive to initialisation, which in turn affects the results. To address this issue, two different initialisation procedures are proposed for the EM-GMM algorithm. Inventory classification outcomes from 14 existing MCIC models are given as inputs to study the significance of the two proposed initialisation procedures. The effectiveness of these initialisation procedures across the various inputs is analysed with respect to inventory management performance measures, i.e., fill rate, total relevant cost, and inventory turnover ratio.
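The EM-GMM fitting step described in this abstract can be sketched with a minimal one-dimensional EM routine. The quantile-based initialisation and the demand scores below are illustrative assumptions only; they do not reproduce the paper's two proposed initialisation procedures or its data.

```python
import math

def em_gmm_1d(x, k=3, iters=100):
    """Minimal EM for a 1-D Gaussian mixture (illustrative sketch only)."""
    xs, n = sorted(x), len(x)
    # Quantile-based initialisation: one of many possible schemes; the two
    # procedures proposed in the paper are not reproduced here.
    mu = [xs[(2 * j + 1) * n // (2 * k)] for j in range(k)]
    mean = sum(x) / n
    var = [max(1e-6, sum((v - mean) ** 2 for v in x) / n)] * k
    pi = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for v in x:
            w = [pi[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(v - mu[j]) ** 2 / (2 * var[j])) for j in range(k)]
            s = sum(w) or 1e-12
            resp.append([wj / s for wj in w])
        # M-step: re-estimate mixing weights, means and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-12
            pi[j] = nj / n
            mu[j] = sum(r[j] * v for r, v in zip(resp, x)) / nj
            var[j] = max(1e-6, sum(r[j] * (v - mu[j]) ** 2
                                   for r, v in zip(resp, x)) / nj)
    # Hard-assign each item to its most responsible component.
    labels = [max(range(k), key=lambda j: r[j]) for r in resp]
    return mu, labels

# Hypothetical item scores; with k=3 the components, ordered by mean,
# can be read as ABC classes C, B and A respectively.
scores = [1.0, 1.2, 0.9, 5.0, 5.1, 4.9, 10.0, 10.2, 9.8]
means, labels = em_gmm_1d(scores)
```

Because EM is sensitive to where the means start, a poor initialisation can merge two classes into one component, which is the motivation for the paper's attention to initialisation procedures.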




v

Unveiling learner experience in MOOC reviews

The surge of learner enrolment in massive open online courses (MOOCs) has led to a wealth of learner-generated data, such as online course reviews that document learner experience. To unveil learner experience with MOOCs, this research uses machine learning methods to extract prominent topics from MOOC reviews and assess the sentiments expressed by learners within them. Furthermore, this research investigates the co-occurrence of the topics using association rule mining. The findings reveal six central topics discussed in MOOC reviews, namely "instructor", "design", "material", "assignment", "platform", and "experience". Notably, most learners express positive sentiments in their reviews. The sentiment indicated in reviews of skill-seeking MOOCs is higher than that in reviews of knowledge-seeking MOOCs. Furthermore, the association rule mining identifies four meaningful association rules. The findings offer valuable insights for MOOC instructors to enhance course design and for platform operators to ensure the long-term viability and success of MOOC platforms.
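The association-rule step over topic co-occurrence can be illustrated with a minimal support/confidence miner for one-to-one rules. The review tag sets and thresholds below are hypothetical, using only the six topic names from the abstract; they are not the study's corpus or its four mined rules.

```python
from itertools import combinations

# Hypothetical topic-tag sets per review, using the six topics named above.
reviews = [
    {"instructor", "design"}, {"instructor", "material"},
    {"design", "material"}, {"instructor", "design", "material"},
    {"assignment", "platform"}, {"platform", "experience"},
    {"instructor", "design"}, {"assignment", "experience"},
]

def support(itemset):
    """Fraction of reviews that contain every topic in the itemset."""
    return sum(itemset <= r for r in reviews) / len(reviews)

def rules(min_support=0.25, min_confidence=0.6):
    """Enumerate one-to-one association rules A -> B over topic co-occurrence."""
    topics = set().union(*reviews)
    found = []
    for a, b in combinations(sorted(topics), 2):
        for ant, con in ((a, b), (b, a)):
            s = support({ant, con})
            if s >= min_support and s / support({ant}) >= min_confidence:
                found.append((ant, con, round(s, 3),
                              round(s / support({ant}), 3)))
    return found
```

A rule such as instructor → design would indicate that reviews mentioning the instructor also tend to discuss course design, which is the kind of co-occurrence pattern the study reports.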




v

LDSAE: LeNet deep stacked autoencoder for secure systems to mitigate the errors of jamming attacks in cognitive radio networks

A hybrid network system, named the LeNet deep stacked autoencoder (LDSAE), is developed for mitigating errors due to jamming attacks in cognitive radio networks (CRNs). This exploration considers the sensing stage and the decision-making stage. The sensing unit is composed of four steps. First, the detected signal is forwarded to a filtering process, where a BPF is utilised to filter the detected signal. Second, the filtered signal is squared. Third, the signal samples are combined, and jamming attacks occur through the injection of false energy levels. Fourth, the attack maliciously affects the FC decision. In the decision-making stage, the FC performs the decision-making and also recognises jamming attacks that affect the link between the PU and SN; this is accomplished by employing an LDSAE-based trust model in which the proposed module differentiates malicious and selfish users. The analytic measures of LDSAE reached 79.40%, 79.90%, and 78.40%.




v

The role of shopping apps and their impact on the online purchasing behaviour patterns of working women in Bangalore

The study aims to analyse the impact of shopping applications on the shopping behaviour of the working women community in Bangalore, a city known as the IT hub. The research uses a quantitative analysis with SPSS version 23 software and a structured questionnaire survey technique to gather data from the working women community. The study uses descriptive statistics, ANOVA, regression, and Pearson correlation analysis to evaluate the perception of working women regarding the significance of online shopping applications. The results show that digital shopping applications are more prevalent among the working women community in Bangalore. The study also evaluates the socio-economic and psychological factors that influence their purchasing behaviour. The findings suggest that online marketers should enhance their strategies to improve their business on digital platforms. The research provides valuable insights into the shopping habits of the working women community in Bangalore.




v

Cognitively-inspired intelligent decision-making framework in cognitive IoT network

Numerous Internet of Things (IoT) applications require brain-empowered intelligence. This necessity has led to the emergence of a new area called the cognitive IoT (CIoT). Decision-making within the network bandwidth limit typically involves reasoning, planning, and selection; consequently, data minimisation is needed. Therefore, this research proposes a novel technique to extract conscious data from a massive dataset. First, it groups the data using k-means clustering, and the entropy is computed for each cluster. The most prominent cluster is then determined by selecting the cluster with the highest entropy. Subsequently, each cluster element is transformed into an informative element. The most informative data is chosen from the most prominent cluster to represent the whole massive dataset, and is further used for intelligent decision-making. The experimental evaluation is conducted on a 21.25-year environmental dataset, revealing that the proposed method is more efficient than competing approaches.
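The selection pipeline described in this abstract (k-means clustering, per-cluster entropy, then picking the highest-entropy cluster) can be sketched as follows. Treating cluster entropy as the Shannon entropy of a histogram over each cluster's values is an assumption, since the abstract does not specify the feature space, and the data are hypothetical.

```python
import math
import random

def kmeans_1d(data, k=3, iters=50, seed=1):
    """Plain 1-D k-means (sketch; the paper's feature space is not given)."""
    centres = random.Random(seed).sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in data:
            # Assign each point to its nearest centre.
            clusters[min(range(k), key=lambda j: abs(v - centres[j]))].append(v)
        centres = [sum(c) / len(c) if c else centres[j]
                   for j, c in enumerate(clusters)]
    return clusters

def entropy(cluster, bins=5):
    """Shannon entropy of a histogram over the cluster's values (an assumed
    reading of 'the entropy is computed for each cluster')."""
    lo, hi = min(cluster), max(cluster)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in cluster:
        counts[min(bins - 1, int((v - lo) / width))] += 1
    n = len(cluster)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def most_prominent(data, k=3):
    """Return the highest-entropy cluster as the most informative subset."""
    clusters = [c for c in kmeans_1d(data, k) if c]
    return max(clusters, key=entropy)
```

The intuition is that a widely spread (high-entropy) cluster carries more information about the massive dataset than a tight, redundant one, so only its elements need be forwarded for decision-making.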




v

International Journal of Networking and Virtual Organisations




v

Location-Oriented Knowledge Management in a Tourism Context: Connecting Virtual Communities to Physical Locations




v

Towards Egocentric Way-Finding Appliances Supporting Navigation in Unfamiliar Terrain




v

Manufacturing Organizational Memory: Logged Conversation Thread Analysis