model

Scale modelling tutorial for beginners

Category: Arts: Miscellaneous
Learn to build and paint a 1/72-scale Mirage 2000 using basic tools.




model

Amanda Holden's daughter joins Kate Moss' modelling agency




model

Russian girls of model quality - ELENAS MODELS -

Russian girls of model quality seeking love and marriage to western men: Russian, Ukrainian, Belarus and Eastern European girls. Every week we add 50-100 new Russian girls to our database.



  • Society & Culture -- Love & Romance


model

Africa: Four Out of Five People in Africa Use Wood for Cooking - a Transition to Clean Fuels Would Cut Emissions and Save Lives, a Model for Nigeria Shows

[The Conversation Africa] Four in every five people in Africa cook using wood, charcoal and other polluting fuels in open fires or inefficient stoves. This releases harmful pollutants and leads to respiratory illnesses and heart disease, particularly among children.




model

Africa: Misinformation Really Does Spread Like a Virus, Suggest Mathematical Models Drawn From Epidemiology

[The Conversation Africa] We're increasingly aware of how misinformation can influence elections. About 73% of Americans report seeing misleading election news, and about half struggle to discern what is true or false.




model

Two new Harley-Davidson models showcased




model

Jawa 42 models, Perak and Yezdi line-up receive OBD-2 update




model

Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval

Kyra Wilson, Aylin Caliskan, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, Nov 13, 2024

The topic of AI-based recruitment and hiring has been discussed here before, and research continues apace. This item (13-page PDF), despite the characterization in GeekWire, is a fairly narrow study. It looks at three text-embedding models based on Mistral-7B-v0.1 and tests for gender and racial bias on applications containing name and position only, and name and position plus some content (the paper discusses removing the name but does not do it). The interesting bit is that intersectional bias (i.e., combining gender and race) is not merely a combination of the separate biases; while the separate biases exaggerated the discrimination, "intersectional results, on the other hand, do correspond more strongly to real-world discrimination in resume screening." Via Lisa Marie Blaschke, who in turn credits Audrey Watters.

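A minimal sketch of the kind of bias probe the paper describes: score otherwise-identical resumes against a job description with a text-embedding model and compare scores across name groups. The model name, resume text, and names below are illustrative placeholders, not the paper's Mistral-7B-based embedders or dataset.

from sentence_transformers import SentenceTransformer, util

# Hedged sketch: any sentence-embedding model stands in for the paper's embedders.
model = SentenceTransformer("all-MiniLM-L6-v2")

job_description = "Seeking a software engineer with Python and ML experience."
resume_template = "{name}. Software engineer, 5 years of Python and ML experience."
names = ["Emily Walsh", "Lakisha Washington", "Gregory Baker", "Darnell Jefferson"]

job_vec = model.encode(job_description, convert_to_tensor=True)
for name in names:
    resume_vec = model.encode(resume_template.format(name=name), convert_to_tensor=True)
    score = util.cos_sim(job_vec, resume_vec).item()
    print(f"{name:25s} similarity = {score:.4f}")
# Systematic score gaps across name groups on identical resumes would indicate
# retrieval bias of the kind the study measures.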






model

Using Google Ads’ Data-Driven Attribution Model

Data-driven attribution is the default attribution model in Google Ads. Understand how and why to use it in your Google Ads campaign.





model

The Samsung phone I recommend to most people is not a flagship model (and now it's $100 off)

The Samsung Galaxy A35 isn't perfect, but with a two-day battery life and a vibrant OLED display, it's hard to deny its value -- especially at its new Black Friday price.




model

Best iPhones 2024: Which iPhone model should you buy?

We've tested every iPhone model on the market, including the iPhone 16 Pro Max. Here are your best options.




model

Bishops - Part 2: The New Testament and Early Church Model

In the second in his series on bishops, Fr. Tom reflects on the New Testament and early Church formulations of the offices and functions of clergy.




model

Bishops - Part 3: The Post Apostolic Model

Fr. Thomas looks at the writings of three of the earliest Church Fathers to see the structure of the Church in the second and third centuries.




model

Modeling Healthy Fasting

We all must embrace fasting with a willing heart. In addition, we need to nourish our bodies during the fast. Rita provides information about the nutritional value of various fasting foods, whether for children or adults. 




model

The Transfiguration as Model for Ministry (Sermon Aug. 6, 2017)

Celebrating the great feast of the Transfiguration of Christ, Fr. Andrew discusses how what we learn from it about Who Jesus is also teaches us about how to do ministry. And he gives one suggestion for applying what we learn.




model

Oasis, a playable real-time AI model trained on Minecraft video footage

anything out of frame is immediately forgotten, making it very dream-like and surreal to explore




model

Role Models

Fr. Apostolos encourages us to let the light of Christ shine through us.




model

The Apostolic Model

In 1 Corinthians 4:16, Paul urges the Corinthian Christians to be imitators of him. In what ways should they, and we, do this?




model

Models for Lent




model

Caribbean disturbance has potential path toward Florida, models show | Tracking the Tropics




model

Strictly Blackout dancer 'an amazing role model'

Chris McCausland is described as an "amazing role model" for blind people after wowing Strictly judges.





model

Color Image Restoration Using Neural Network Model

This paper discusses a neural network learning approach for color image restoration and presents one possible solution for restoring images. Here, the neural network weights are treated as regularization parameter values rather than being specified explicitly. The weights are modified during training through the supply of training-set data. The desired response of the network is the estimated value of the current pixel, and this estimate is used to modify the network weights so that the restored value the network produces for a pixel is as close as possible to the desired response. One advantage of the proposed approach is that, once the neural network is trained, images can be restored without prior information about the noise or blurring model with which the image was corrupted.
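A minimal sketch of the supervised training loop the abstract describes: a small network learns to map a noisy neighbourhood to the clean centre pixel (its desired response). The layer sizes and toy data are illustrative assumptions, not the paper's architecture or dataset.

import torch
import torch.nn as nn

# Hedged sketch: predict the clean centre pixel from its noisy 3x3 neighbourhood.
net = nn.Sequential(nn.Linear(9, 32), nn.ReLU(), nn.Linear(32, 1))
optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy training data: noisy 3x3 patches (flattened) and the true centre pixels.
clean = torch.rand(1024, 9)
noisy = clean + 0.1 * torch.randn_like(clean)
target = clean[:, 4:5]                      # desired response: clean centre pixel

for epoch in range(20):
    pred = net(noisy)                       # restored value produced by the network
    loss = loss_fn(pred, target)            # push it towards the desired response
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
# Once trained, the network restores pixels without an explicit noise/blur model.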




model

Let Me Tell You a Story - On How to Build Process Models

Process modeling has been a very active research topic over the last few decades. One of its main issues is the externalization of knowledge and its acquisition for further use, as this remains deeply related to the quality of the process models produced by this task. This paper presents a method and a graphical supporting tool for process elicitation and modeling, combining the Group Storytelling technique with advances in Text Mining and Natural Language Processing. The implemented tool extends its previous versions with several functionalities that facilitate group storytelling by the users and improve the process model acquired from the stories.




model

Modeling Quality Attributes with Aspect-Oriented Architectural Templates

The quality attributes of a software system are, to a large extent, determined by the decisions taken early in the development process. Best practices in software engineering recommend the identification of important quality attributes during the requirements elicitation process, and the specification of software architectures to satisfy these requirements. Over the years the software engineering community has studied the relationship between quality attributes and the use of particular architectural styles and patterns. In this paper we study the relationship between quality attributes and Aspect-Oriented Software Architectures - which apply the principles of Aspect-Oriented Software Development (AOSD) at the architectural level. AOSD focuses on identifying, modeling and composing crosscutting concerns - i.e. concerns that are tangled and/or scattered with other concerns of the application. In this paper we propose to use AO-ADL, an aspect-oriented architectural description language, to specify quality attributes by means of parameterizable, and thus reusable, architectural patterns. We particularly focus on quality attributes that: (1) have major implications on software functionality, requiring the incorporation of explicit functionality at the architectural level; (2) are complex enough as to be modeled by a set of related concerns and the compositions among them, and (3) crosscut domain specific functionality and are related to more than one component in the architecture. We illustrate our approach for usability, a critical quality attribute that satisfies the previous constraints and that requires special attention at the requirements and the architecture design stages.




model

Automatically Checking Feature Model Refactorings

A feature model (FM) defines the valid combinations of features, each of which corresponds to a program in a Software Product Line (SPL). FMs may evolve, for instance, during refactoring activities. Developers may use a catalog of refactorings as support. However, such a catalog is incomplete in principle, and it is non-trivial to propose correct refactorings. To our knowledge, no previous analysis technique for FMs has been used to check properties of general FM refactorings (transformations that can be applied to a number of FMs) containing a representative number of features. We propose an efficient encoding of FMs in the Alloy formal specification language. Based on this encoding, we show how the Alloy Analyzer tool, which performs analysis on Alloy models, can be used to automatically check whether encoded general and specific FM refactorings are correct. Our approach can analyze general transformations automatically to a significant scale in a few seconds. To evaluate the analysis performance of our encoding, we evaluated it on automatically generated FMs ranging from 500 to 2,000 features. Furthermore, we analyze the soundness of general transformations.
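The paper does this checking symbolically in Alloy; the property itself can be illustrated with a brute-force sketch: a refactoring of a specific FM is (at least) behaviour-preserving if every product valid under the original FM is still valid under the refactored one. The feature names and constraints below are made up for illustration, and this enumeration checks one concrete FM pair, not general transformations.

from itertools import product

# Hedged sketch: brute-force check that a refactored feature model accepts every
# product the original accepts (a necessary condition for a correct refactoring).
FEATURES = ["base", "gui", "cli"]

def original_fm(cfg):
    # base is mandatory; exactly one of gui/cli (alternative group)
    return cfg["base"] and (cfg["gui"] != cfg["cli"])

def refactored_fm(cfg):
    # the refactoring relaxed the group to an or-group: at least one of gui/cli
    return cfg["base"] and (cfg["gui"] or cfg["cli"])

def products(fm):
    return {
        frozenset(f for f, on in zip(FEATURES, bits) if on)
        for bits in product([False, True], repeat=len(FEATURES))
        if fm(dict(zip(FEATURES, bits)))
    }

assert products(original_fm) <= products(refactored_fm), "refactoring loses products"
print("refactoring preserves all original products")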




model

Context-Aware Composition and Adaptation based on Model Transformation

Using pre-existing software components (COTS) to develop software systems requires the composition and adaptation of the component interfaces to solve mismatch problems. These mismatches may appear at different interoperability levels (signature, behavioural, quality of service and semantic). In this article, we define an approach which supports composition and adaptation of software components based on model transformation, taking all four levels into account. The signature and behavioural levels are addressed by means of transition systems. Context-awareness and semantic-based techniques are used to tackle the quality-of-service and semantic levels, respectively, while both also consider the signature level. We have implemented and validated our proposal for the design and application of realistic and complex systems. Here, we illustrate the need to support the variability of the adaptation process in a context-aware pervasive system through a real-world case study, where software components are implemented using Windows Workflow Foundation (WF). We apply our model transformation process to extract transition systems (CA-STS specifications) from WF components. These CA-STSs are used to tackle the composition and adaptation. We then generate a CA-STS adaptor specification, which is transformed into its corresponding WF adaptor component with the purpose of interacting with all the WF components of the system, thereby avoiding mismatch problems.




model

An Approach for Feature Modeling of Context-Aware Software Product Line

Feature modeling is an approach to represent commonalities and variabilities among products of a product line. Context-aware applications use context information to provide relevant services and information for their users. One of the challenges to build a context-aware product line is to develop mechanisms to incorporate context information and adaptation knowledge in a feature model. This paper presents UbiFEX, an approach to support feature analysis for context-aware software product lines, which incorporates a modeling notation and a mechanism to verify the consistency of product configuration regarding context variations. Moreover, an experimental study was performed as a preliminary evaluation, and a prototype was developed to enable the application of the proposed approach.




model

Hierarchical Graph-Grammar Model for Secure and Efficient Handwritten Signatures Classification

One important subject associated with personal authentication capabilities is the analysis of handwritten signatures. Among the many known techniques, algorithms based on linguistic formalisms are also possible. However, such techniques require a number of algorithms for intelligent image analysis to be applied, allowing the development of new solutions in the field of personal authentication and the building of modern security systems based on the advanced recognition of such patterns. The article presents an approach based on the use of syntactic methods for the static analysis of handwritten signatures. The graph linguistic formalisms applied, such as the IE graph and ETPL(k) grammar, are characterised by considerable descriptive strength and a polynomial membership problem for the syntactic analysis. For the purposes of representing the analysed handwritten signatures, new hierarchical (two-layer) HIE graph structures based on IE graphs have been defined. The two-layer graph description makes it possible to take into consideration both local and global features of the signature. The use of attributed graphs enables the storage of additional semantic information describing the properties of individual signature strokes. The verification and recognition of a signature consist in analysing whether its graph description belongs to the language describing the specimen database. Initial assessments show a precision of the method at an average level of just under 75%.
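A minimal sketch of the two-layer attributed-graph idea: an upper layer whose nodes stand for whole strokes and carry semantic attributes, each pointing to a lower-layer graph of stroke points. The attributes, coordinates and relations below are illustrative assumptions, not the paper's formal IE/HIE graph or ETPL(k) grammar machinery.

from dataclasses import dataclass, field

# Hedged sketch: a two-layer attributed graph for a handwritten signature.
@dataclass
class LocalGraph:
    points: list                 # (x, y) samples along one stroke
    edges: list                  # index pairs connecting successive points

@dataclass
class StrokeNode:
    stroke_id: int
    attributes: dict             # e.g. length, curvature (semantic attributes)
    local: LocalGraph            # lower-layer description of the stroke

@dataclass
class SignatureGraph:
    strokes: list = field(default_factory=list)    # upper-layer nodes
    relations: list = field(default_factory=list)  # (stroke_id, stroke_id, relation)

sig = SignatureGraph()
s0 = StrokeNode(0, {"length": 41.2, "curvature": 0.8},
                LocalGraph([(0, 0), (5, 3), (9, 8)], [(0, 1), (1, 2)]))
s1 = StrokeNode(1, {"length": 17.5, "curvature": 0.1},
                LocalGraph([(10, 8), (14, 8)], [(0, 1)]))
sig.strokes += [s0, s1]
sig.relations.append((0, 1, "right_of"))   # global spatial relation between strokes
# Verification would then test whether such a graph belongs to the language
# generated by the grammar inferred from a user's specimen signatures.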




model

A feature-based model selection approach using web traffic for tourism data

The increased volume of accessible internet data creates an opportunity for researchers and practitioners to improve time series forecasting for many indicators. In our study, we assess the value of web traffic data in forecasting the number of short-term visitors travelling to Australia. We propose a feature-based model selection framework which combines random forest with a feature-ranking process to select the best-performing model using a limited and informative number of features extracted from web traffic data. The data were obtained for several tourist attraction and tourism information websites that could be visited by potential tourists to find out more about their destinations. The results of the random forest models were evaluated over 3- and 12-month forecasting horizons. Features from web traffic data appear in the final model for short-term forecasting. Further, the model with additional data performs better on unseen data after the COVID-19 pandemic. Our study shows that web traffic data adds value to tourism forecasting and can assist tourist destination site managers and decision makers in forming timely decisions to prepare for changes in tourism demand.
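A minimal sketch of the feature-ranking step using scikit-learn's random forest importances: rank the candidate web-traffic features, keep a small informative subset, and fit the final model on it. The synthetic data and column names stand in for the study's web-traffic series, and the exact ranking procedure used in the paper may differ.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hedged sketch: rank web-traffic features with a random forest, keep the top ones.
rng = np.random.default_rng(0)
X = pd.DataFrame(
    rng.normal(size=(120, 4)),
    columns=["site_a_visits", "site_b_visits", "site_c_visits", "noise_feature"],
)
y = 2.0 * X["site_a_visits"] + 0.5 * X["site_b_visits"] + rng.normal(size=120)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(X.columns, rf.feature_importances_), key=lambda p: -p[1])
print(ranking)

top_features = [name for name, _ in ranking[:2]]   # limited, informative subset
final_model = RandomForestRegressor(n_estimators=200, random_state=0)
final_model.fit(X[top_features], y)                # reduced forecasting model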




model

An architectural view of VANETs cloud: its models, services, applications and challenges

This research explores vehicular ad hoc networks (VANETs) and their extensive applications, such as enhancing traffic efficiency, infotainment, and passenger safety. Despite significant study, widespread deployment of VANETs has been hindered by security and privacy concerns. Challenges in implementation, including scalability, flexibility, poor connection, and insufficient intelligence, have further complicated VANETs. This study proposes leveraging cloud computing to address these challenges, marking a paradigm shift. Cloud computing, recognised for its cost-efficiency and virtualisation, is integrated with VANETs. The paper details the nomenclature, architecture, models, services, applications, and challenges of VANET-based cloud computing. Three architectures for VANET clouds - vehicular clouds (VCs), vehicles utilising clouds (VuCs), and hybrid vehicular clouds (HVCs) - are discussed in detail. The research provides an overview, delves into related work, and explores VANET cloud computing's architectural frameworks, models, and cloud services. It concludes with insights into future work and a comprehensive conclusion.




model

E-commerce growth prediction model based on grey Markov chain

To address the long prediction times and large number of iterations of traditional prediction models, an e-commerce growth prediction model based on a grey Markov chain is proposed. The Scrapy crawler framework is used to collect a variety of e-commerce data from e-commerce websites, and a feedforward neural network model is used to clean the collected data. With the cleaned e-commerce data as the input vector and the e-commerce growth prediction results as the output vector, an e-commerce growth prediction model based on the grey Markov chain is built. The prediction model is improved using a background-value optimisation method. After training the model with an improved particle swarm optimisation algorithm, accurate e-commerce growth prediction results are obtained. The experimental results show that the maximum prediction time of this model is only 0.032 and that the number of iterations is small.
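A minimal sketch of the plain GM(1,1) grey forecast, the building block underneath a grey Markov model. The Markov state correction, background-value optimisation and particle swarm training described in the abstract are omitted, and the sales series is an illustrative assumption.

import numpy as np

# Hedged sketch: basic GM(1,1) grey forecasting on a short series.
x0 = np.array([112.0, 120.0, 131.0, 145.0, 158.0])   # observed e-commerce series
x1 = np.cumsum(x0)                                    # accumulated series
z1 = 0.5 * (x1[1:] + x1[:-1])                         # background values

B = np.column_stack([-z1, np.ones_like(z1)])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]      # grey development/control coefficients

def predict(k):                                        # k = 1-based time index
    x1_hat = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 2)) + b / a
    return x1_hat - x1_prev                           # restored (inverse-accumulated) value

print([round(predict(k), 1) for k in range(2, 7)])    # fitted values plus a one-step forecast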




model

Enabling smart city technologies: impact of smart city-ICTs on e-Govt. services and society welfare using UTAUT model

Smart cities research is growing all over the world, seeking to understand the effect of smart cities from different angles, domains and countries. The aim of this study is to analyse how smart city ICTs (e.g., big data analytics, AI, IoT, cloud computing, smart grids, wireless communication, intelligent transportation systems, smart buildings, e-governance, smart health, smart education and cyber security) relate to government services and society welfare from the perspective of China. This research confirmed a positive correlation of smart city ICTs with e-Govt. services (e-GS). The research also showed a positive influence of smart city ICTs on society's welfare. These findings about smart cities and ICTs show how a shift in thinking towards smart technologies can improve e-GS through economic development, job creation and social welfare. The study offers applications of the theoretical and management perspectives that are significant for building society in the current technologised era.




model

The role of mediator variable in digital payments: a structural equation model analysis

The proliferation of technology and communication has resulted in increased digitalisation, including digital payments. This study aims at unravelling the relationship between individuals' awareness of the digital payment system and customer satisfaction with digital payments. Two models were developed in this study. The first model considers awareness → usage pattern → customer satisfaction. The second model considers usage pattern → customer satisfaction → perception of digital payments. These two alternative models were tested on data collected from 507 respondents in southern India and analysed using structural equation modelling. The results indicate that usage pattern acted as a mediator between awareness and satisfaction, and satisfaction acted as a mediator between usage pattern and consumers' perception of digital payments. The implications for theory and practice are discussed.
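A minimal sketch of the mediator idea in the first model (awareness → usage pattern → satisfaction), using plain regressions rather than the authors' structural equation modelling. The simulated data, effect sizes and decomposition into indirect (a·b) and direct (c') effects are illustrative assumptions only.

import numpy as np
import statsmodels.api as sm

# Hedged sketch: a simple regression-based mediation check, not the study's SEM.
rng = np.random.default_rng(1)
n = 507
awareness = rng.normal(size=n)
usage = 0.6 * awareness + rng.normal(scale=0.8, size=n)                     # path a
satisfaction = 0.5 * usage + 0.1 * awareness + rng.normal(scale=0.8, size=n)  # paths b, c'

a = sm.OLS(usage, sm.add_constant(awareness)).fit().params[1]
model_b = sm.OLS(satisfaction,
                 sm.add_constant(np.column_stack([usage, awareness]))).fit()
b, c_prime = model_b.params[1], model_b.params[2]

print(f"indirect (mediated) effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
# A sizeable a*b relative to c' is consistent with usage pattern mediating the
# awareness -> satisfaction relationship.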




model

Integrating big data collaboration models: advancements in health security and infectious disease early warning systems

To further improve the public health assurance system and the infectious disease early warning system, so that they can play their positive roles and enhance their collaborative capacity, this paper designs a 'rolling-type' data synergy model based on big and thick data analytics technology. The model covers districts and counties, municipalities, provinces, and the country. It forms a data blockchain for the public health assurance system and enables extensive sharing of data from existing system platforms such as the infectious disease early warning system, the hospital medical record management system, the public health data management system, and the health big and thick data management system. Additionally, it realises prevention, control and early warning by utilising data mining and synergy technologies, and helps solve problems of traditional public health assurance system platforms such as excessive pressure on the 'central node', poor data tamper-proofing capacity, low transmission efficiency for big and thick data, and poor timeliness of emergency response. The realisation of this technology can greatly improve the application and analytics of big and thick data and further enhance public health assurance capacity.




model

Digital transformation in universities: models, frameworks and road map

Digital Transformation seeks to improve an organisation's processes by integrating digital technology in all its areas. This is inevitable given a technological evolution that generates new demands, new habits and greater expectations from customers and users; Digital Transformation is therefore important for organisations to maintain competitiveness. In this context, universities are no strangers to this reality, but they encounter serious problems in its execution: it is not clear how to approach an implementation of this type. This work seeks to identify tools that can be used in the implementation of Digital Transformation in universities. To this end, a systematic literature review is carried out using a three-stage method, identifying 23 models, 13 frameworks and 8 roadmaps. The elements found are analysed, yielding eight main components with their relationships and dependencies, which can be used to generate more suitable models for universities.




model

Loan delinquency analysis using predictive model

The research uses a machine learning approach to assess customers' suitability for a loan. Banks and non-banking financial companies (NBFCs) face significant non-performing asset (NPA) threats because of the non-payment of loans. In this study, the data were collected from Kaggle and tested using various machine learning models to determine whether a borrower can repay a loan. In addition, we analysed the performance of the models: K-nearest neighbours (K-NN), logistic regression, support vector machines (SVM), decision tree, naive Bayes and neural networks. The purpose is to support decisions that are based not on subjective aspects but on objective data analysis. This work aims to analyse how objective factors influence borrowers to default on loans and to identify the leading causes contributing to a borrower defaulting. The results show that the decision tree classifier gives the best result, with a recall rate of 0.0885 and a false-negative rate of 5.4%.
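A minimal sketch of the comparison the study describes: fit each of the listed classifiers on a loan-style dataset and report recall for the default class. The synthetic, imbalanced data stands in for the Kaggle set, and default hyperparameters are an assumption.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

# Hedged sketch: compare the paper's classifier families on synthetic loan data.
X, y = make_classification(n_samples=2000, n_features=12, weights=[0.8, 0.2],
                           random_state=0)          # imbalanced: few defaulters
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "k-NN": KNeighborsClassifier(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "neural network": MLPClassifier(max_iter=1000, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(f"{name:20s} recall = {recall_score(y_te, clf.predict(X_te)):.3f}")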




model

The Pentagonal E-Portfolio Model for Selecting, Adopting, Building, and Implementing an E-Portfolio




model

Advancing Creative Visual Thinking with Constructive Function-based Modelling




model

Professional Development in Higher Education: A Model for Meaningful Technology Integration

While many institutions provide centralized technology support for faculty, there is a lack of centralized professional development opportunities that focus on simultaneously developing instructors’ technological, pedagogical, and content knowledge (TPACK) in higher education. Additionally, there are few professional development opportunities for faculty that continue throughout the practice of teaching with technology. We propose a model of continuing professional development that provides instructors with the ability to meaningfully integrate technology into their teaching practices through centralized support for developing TPACK. In doing so, we draw upon several theoretical frameworks and evidence-based practices.




model

Creating Infographics Based on the Bridge21 Model for Team-based and Technology-mediated Learning

Aim/Purpose: The main aim of this study was to model a collaborative process for knowledge visualization via the creation of infographics.

Background: As an effective method for visualizing complex information, creating infographics requires learners to generate and cultivate a deep knowledge of content and enables them to concisely visualize and share this knowledge. This study investigates creating infographics as a knowledge visualization process for collaborative learning situations by integrating the infographic design model into the team-based and technology-mediated Bridge21 learning model.

Methodology: This study was carried out from an educational design perspective by conducting three main cycles, each comprised of three micro cycles: analysis and exploration; design and construction; evaluation and reflection. The process and the scaffolding were developed and enhanced from cycle to cycle based on both qualitative and quantitative methods, using the infographic design rubric and researcher observations acquired during implementation. Respectively, twenty-three, twenty-four, and twenty-four secondary school students participated in the infographic creation process cycles.

Contribution: This research proposes an extensive step-by-step process model for creating infographics as a method of visualization for learning. It is particularly relevant for working with complex information, in that it enables collaborative knowledge construction and the sharing of condensed knowledge.

Findings: Creating infographics can be an effective method for collaborative learning situations, enabling knowledge construction, visualization and sharing. The Bridge21 activity model constituted the spine of the infographic creation process. The content generation, draft generation, and visual and digital design generation components of the infographic design model matched the investigate, plan and create phases of the Bridge21 activity model, respectively. Improvements in infographic design results from cycle to cycle suggest that the revisions to the process model succeeded in their aims. The rise in each category was found to be significant, but the advance in visual design generation was particularly large.

Recommendations for Practitioners: The effectiveness of the creation process and the quality of the results can be boosted by using relevant activities based on learners' prior knowledge and skills. While infographic creation can lead to a focus on visual elements, the importance of wording must be emphasized. Because it is a multidimensional process, groups need guidance to ensure effective collaboration.

Recommendation for Researchers: The proposed collaborative infographic creation process could be structured and evaluated for online learning environments, which would improve interaction and achievement by enhancing collaborative knowledge creation.

Impact on Society: In order to be knowledge constructors, innovative designers, creative communicators and global collaborators, learners need adequate learning environments. The infographic creation process offers them a multidimensional learning situation: they must understand the problem, find an effective way to collect information, investigate their data, develop creative and innovative perspectives for visual design, and be comfortable using digital creation tools.

Future Research: The infographic creation process could be investigated in terms of the learner prior knowledge and skills required, and could be enhanced by developing pre-practices and scaffolding.




model

A Deep Learning Based Model to Assist Blind People in Their Navigation

Aim/Purpose: This paper proposes a new approach to developing a deep learning-based prototype wearable model that can assist blind and visually disabled people in recognizing their environments and navigating through them. As a result, visually impaired people will be able to manage day-to-day activities and navigate the world around them more easily.

Background: In recent decades, the development of navigational devices has posed challenges for researchers designing smart guidance systems for visually impaired and blind individuals navigating known or unknown environments. Efforts need to be made to analyze the existing research from a historical perspective. Early studies of electronic travel aids should be integrated with the use of assistive-technology-based artificial vision models for visually impaired persons.

Methodology: This paper is an advancement of our previous research work, in which we developed a sensor-based navigation system. In this research, navigation of the visually disabled person is carried out with a vision-based, 3D-designed wearable model and a vision-based smart stick. The wearable model uses a neural-network-based You Only Look Once (YOLO) algorithm to detect the course of the navigational path, augmented by a GPS-based smart stick. Over 100 images of each of the three classes, namely straight path, left path and right path, are trained using supervised learning. The model accurately predicts a straight path with 79% mean average precision (mAP), the right path with 83% mAP, and the left path with 85% mAP. The average accuracy of the wearable model is 82.33% and that of the smart stick is 96.14%, which combined give an overall accuracy of 89.24%.

Contribution: This research contributes to the design of a low-cost, standalone navigational system that is handy to use and helps people navigate safely in real-time scenarios. A challenging self-built dataset of various paths is generated, and transfer learning is performed on the YOLOv5 model after augmentation and manual annotation. To analyze and evaluate the model, various metrics, such as model losses, recall, precision, and mAP, are used.

Findings: These were the main findings of the study:
• To detect objects, the deep learning model uses a higher version of YOLO, i.e., a YOLOv5 detector, which may help those with visual impairments improve the quality of their navigational mobility in known or unknown environments.
• The developed standalone model has the option to be integrated into other assistive applications such as Electronic Travel Aids (ETAs).
• A single neural network allows the model to achieve a detection accuracy of around 0.823 mAP on the custom dataset, compared with 0.895 on the COCO dataset. Thanks to its 45 FPS object-detection speed, it has become popular.

Recommendations for Practitioners: Practitioners can improve the model’s efficiency by increasing the sample size and the number of classes used in training the model.

Recommendation for Researchers: To detect objects in an image or live camera feed, there are various algorithms, e.g., R-CNN, RetinaNet, Single Shot Detector (SSD), and YOLO. Researchers can choose the YOLO family owing to its superior performance. Moreover, YOLOv5 outperforms earlier versions such as YOLOv3 and YOLOv4 in terms of speed and accuracy.

Impact on Society: We discuss new low-cost technologies that enable visually impaired people to navigate effectively in indoor environments.

Future Research: Future work could incorporate recurrent neural networks on a larger dataset with special AI-based processors to avoid latency.
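A minimal sketch of the inference step the wearable model performs: load a custom-trained YOLOv5 detector and turn its highest-confidence detection on a camera frame into a navigation cue. The weights file, image path and fallback behaviour are illustrative placeholders, not the authors' released artefacts.

import torch

# Hedged sketch: run a custom-trained YOLOv5 detector on one camera frame.
model = torch.hub.load("ultralytics/yolov5", "custom", path="path_detector.pt")

frame = "frame.jpg"                         # image captured by the wearable camera
results = model(frame)
detections = results.pandas().xyxy[0]       # one row per detected object

if not detections.empty:
    best = detections.sort_values("confidence", ascending=False).iloc[0]
    print(f"navigate: {best['name']} (confidence {best['confidence']:.2f})")
else:
    print("no path detected; fall back to the GPS-based smart stick")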




model

Measurement of Doctoral Students’ Intention to Use Online Learning: A SEM Approach Using the TRAM Model

Aim/Purpose: The study aims to supplement existing knowledge of information systems by presenting empirical data on the factors influencing the intentions of doctoral students to learn through online platforms.

Background: E-learning platforms have become popular among students and professionals over the past decade. However, the intentions of doctoral students are not yet known. They are an important source of knowledge production in academia by way of teaching and research.

Methodology: The researchers collected data from doctoral students at universities in the Delhi National Capital Region (NCR) using a survey method with convenience sampling. The model studied was the Technology Readiness and Acceptance Model (TRAM), an integration of the Technology Readiness Index (TRI) and the Technology Acceptance Model (TAM).

Contribution: The study provides empirical evidence that the TRAM positively predicts behavioral intentions to learn from online platforms, and it validates the model among doctoral students from the perspective of a developing nation.

Findings: The model variables predicted 49% of the variance in doctoral students’ intent. The TRAM identified motivating constructs such as optimism and innovativeness as influencing TAM predictors. Finally, doctoral students have positive opinions about the usefulness and ease of use of online learning platforms.

Recommendations for Practitioners: Academic leaders should motivate scholars to use online platforms, and application developers should incorporate features that facilitate ease of use.

Recommendation for Researchers: Researchers can explore the applicability of the TRAM in other developing countries and examine the role of cultural and social factors in the intent to adopt online learning.

Future Research: The influence of demographic variables on intentions can lead to additional insights.




model

Unravelling e-governance adoption drivers: insights from the UTAUT 3 model

The study aims to unveil the various determinants that drive the adoption of e-governance services (EGS). Using the UTAUT 3 model, the research investigated these factors within the Indian context. A purposive sampling technique was utilised to collect the samples from 680 respondents through the online survey method. Furthermore, the study employs structural equation modelling (SEM) to examine the structural relationships between the UTAUT3 model's dimensions in the context of e-governance. Findings revealed that the UTAUT3 model adequately predicts the intention to adopt EGS. The present study addressed a significant gap in the literature on EGS and technology adoption by establishing a relationship between different dimensions of the UTAUT3 model and actual usage of EGS. The findings have implications for practitioners and policymakers as they throw light on the effective implementation of e-governance programs, which are essential for providing the citizens with high-quality services.




model

Student's classroom behaviour recognition method based on abstract hidden Markov model

To improve the normalised mutual information index, accuracy rate and recall rate of student classroom behaviour recognition methods, this paper proposes a student classroom behaviour recognition method based on an abstract hidden Markov model (HMM). After the students' classroom behaviour data are cleaned, data quality is improved through interpolation and standardisation, and the types of students' classroom behaviour are then divided. Next, an abstract HMM is used to calculate the output probability density for a support vector machine. Finally, the category of behaviour characteristics is judged according to the characteristic interval of classroom behaviour. The experiments show that the normalised mutual information (NMI) index of this method is closer to one and that the maximum AUC-PR index can reach 0.82, which indicates that this method can identify students' classroom behaviour more effectively and reliably.
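A minimal sketch of HMM-based behaviour classification: train one Gaussian HMM per behaviour class and label a new feature sequence with the class whose model gives the highest likelihood. This is a simplified stand-in for the paper's abstract-HMM-plus-SVM combination, and the feature sequences and class names are toy assumptions.

import numpy as np
from hmmlearn.hmm import GaussianHMM

# Hedged sketch: one HMM per behaviour class, classify by maximum log-likelihood.
rng = np.random.default_rng(0)
train = {
    "listening":    rng.normal(0.0, 1.0, size=(200, 3)),   # toy feature sequences
    "hand_raising": rng.normal(2.0, 1.0, size=(200, 3)),
}

models = {}
for label, seq in train.items():
    m = GaussianHMM(n_components=3, covariance_type="diag", random_state=0)
    m.fit(seq)                        # one observation sequence per class (toy setup)
    models[label] = m

test_seq = rng.normal(1.9, 1.0, size=(40, 3))
scores = {label: m.score(test_seq) for label, m in models.items()}
print(max(scores, key=scores.get), scores)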




model

Transformative advances in volatility prediction: unveiling an innovative model selection method using exponentially weighted information criteria

Using information criteria is a common method for deciding which model to use for forecasting. There are many different methods for evaluating forecasting models, such as MAE, RMSE, MAPE, and Theil-U, among others. After the creation of AIC, AICc, HQ, BIC, and BICc, the two criteria that have become the most popular and commonly utilised are the Bayesian IC and Akaike's IC. In this investigation, we innovate by using exponential weighting to obtain the log-likelihood of the information criteria for model selection; that is, we propose assigning greater weight to more recent data in order to reflect their increased precision. All research data are daily observations from major stock markets, including the USA (GSPC, DJI), Europe (FTSE 100, AEX, and FCHI), and Asia (Nikkei).
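A minimal sketch of an exponentially weighted log-likelihood plugged into AIC and BIC: recent observations receive larger weights before the criteria are computed. This is one plausible form of the weighting under an assumed decay factor, not necessarily the exact scheme used in the paper.

import numpy as np

# Hedged sketch: exponentially weighted information criteria for model selection.
def weighted_information_criteria(loglik_per_obs, n_params, decay=0.99):
    """loglik_per_obs: per-observation log-likelihood contributions, oldest first."""
    n = len(loglik_per_obs)
    w = decay ** np.arange(n - 1, -1, -1)          # newest observation gets weight 1
    w = w / w.sum()
    loglik = n * np.sum(w * loglik_per_obs)        # weighted total log-likelihood
    aic = 2 * n_params - 2 * loglik
    bic = n_params * np.log(n) - 2 * loglik
    return aic, bic

# Example: compare two fitted volatility models by their weighted criteria.
ll_model_a = np.random.default_rng(0).normal(-1.20, 0.1, size=500)
ll_model_b = np.random.default_rng(1).normal(-1.25, 0.1, size=500)
print(weighted_information_criteria(ll_model_a, n_params=3))
print(weighted_information_criteria(ll_model_b, n_params=5))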




model

Agricultural informatics: emphasising potentiality and proposed model on innovative and emerging Doctor of Education in Agricultural Informatics program for smart agricultural systems

International universities are changing with their style of operation, mode of teaching and learning operations. This change is noticeable rapidly in India and also in international contexts due to healthy and innovative methods, educational strategies, and nomenclature throughout the world. Technologies are changing rapidly, including ICT. Different subjects are developed in the fields of IT and computing with the interaction or applications to other fields, viz. health informatics, bio informatics, agriculture informatics, and so on. Agricultural informatics is an interdisciplinary subject dedicated to combining information technology and information science utilisation in agricultural sciences. The digital agriculture is powered by agriculture informatics practice. For teaching, research and development of any subject educational methods is considered as important and various educational programs are there in this regard viz. Bachelor of Education, Master of Education, PhD in Education, etc. Degrees are also available to deal with the subjects and agricultural informatics should not be an exception of this. In this context, Doctor of Education (EdD or DEd) is an emerging degree having features of skill sets, courses and research work. This paper proposed on EdD program with agricultural informatics specialisation for improving healthy agriculture system. Here, a proposed model core curriculum is also presented.