algorithm Researchers detail RoboPAIR, an algorithm designed to induce robots that rely on LLMs for their inputs to ignore the models' safeguards without exception By biztoc.com Published On :: Thu, 14 Nov 2024 07:02:23 GMT AI chatbots such as ChatGPT and other applications powered by large language models (LLMs) have exploded in popularity, leading a number of companies to explore LLM-driven robots. However, a new study reveals an automated way to hack into such machines with a 100 percent success rate. By… Full Article
algorithm Tech of the Day: Microsoft Has Developed an Algorithm to Turn First Person GoPro Videos Into Awesome Hyperlapses By cheezburger.com Published On :: Mon, 11 Aug 2014 12:59:00 -0700 Full Article microsoft Video GoPro
algorithm Digital Baroque: History Meets Algorithm, a future-looking exhibition that channels history, opens 31 January-7 February, 2022 on the newly launched 4ART NFT+ marketplace By www.prleap.com Published On :: Wed, 26 Jan 2022 08:00:00 PST 4ARTechnologies, a pioneer in art digitization and security, is proud to launch its inaugural exhibition, Digital Baroque: History Meets Algorithm, which brings together 11 individual artists and 3 collectives. Full Article
algorithm Google Mobile Friendly Algorithm Gives Small Businesses The Advantage By www.small-business-software.net Published On :: Mon, 18 May 2015 09:00:00 -0400 Despite the warning, a number of extremely well-known brands are still scrambling to minimise the impact of Google's mobile algorithm update, leaving smaller businesses with the advantage as long as they do not panic, according to leading UK and US search experts. complete article Full Article
algorithm Learn About Social Media Algorithms, Augmented Reality and More Changing Business Trends By www.small-business-software.net Published On :: Fri, 27 Oct 2017 21:09:00 -0400 The tools, platforms and methods you use to grow your business are constantly changing. So you need to keep up with all the latest updates and trends. Members of the online small business community have some great insights about these changing tools and trends, ranging from social media algorithms to augmented reality. Read on for some helpful tips. complete article Full Article
algorithm A New Approach to Water Flow Algorithm for Text Line Segmentation By www.jucs.org Published On :: 2011-04-07T14:38:30+02:00 This paper proposes a new approach to the water flow algorithm for text line segmentation. The original method assumes hypothetical water flows under a few specified angles to the document image frame, from left to right and vice versa; as a result, unwetted areas of the image are extracted, and these areas are of major importance for text line segmentation. The proposed modifications extend the range of water flow angles and enlarge the unwetted areas. Results are encouraging, as they improve text line segmentation, the most challenging stage in document image processing. Full Article
algorithm Nondeterministic Query Algorithms By www.jucs.org Published On :: 2011-07-04T16:04:43+02:00 Query algorithms are used to compute Boolean functions. The definition of the function is known, but input is hidden in a black box. The aim is to compute the function value using as few queries to the black box as possible. As in other computational models, different kinds of query algorithms are possible: deterministic, probabilistic, as well as nondeterministic. In this paper, we present a new alternative definition of nondeterministic query algorithms and study algorithm complexity in this model. We demonstrate the power of our model with an example of computing the Fano plane Boolean function. We show that for this function the difference between deterministic and nondeterministic query complexity is 7^N versus O(3^N). Full Article
algorithm Least Slack Time Rate first: an Efficient Scheduling Algorithm for Pervasive Computing Environment By www.jucs.org Published On :: 2011-07-04T16:04:46+02:00 Real-time systems such as pervasive computing environments must finish executing a task within a predetermined time while ensuring that the execution results are logically correct. Such systems require intelligent scheduling methods that can promptly and adequately distribute the given tasks to one or more processors. In this paper, we propose LSTR (Least Slack Time Rate first), a new and simple scheduling algorithm for a multi-processor environment, and demonstrate its efficient performance through various tests. Full Article
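The abstract does not give the LSTR priority formula, so the sketch below encodes one plausible reading: a task's priority is its slack expressed as a rate of the time remaining until its deadline, and the ready task with the least rate is dispatched first. The formula and all names are illustrative assumptions, not the paper's code.

    # Hedged sketch of a least-slack-time-rate scheduler (Python).
    # Assumption: slack rate = (deadline - now - remaining) / (deadline - now).
    def slack_rate(task, now):
        time_to_deadline = task["deadline"] - now
        slack = time_to_deadline - task["remaining"]
        return slack / time_to_deadline if time_to_deadline > 0 else float("-inf")

    def pick_next(tasks, now):
        # Dispatch the ready task with the least slack rate first.
        return min(tasks, key=lambda t: slack_rate(t, now))

    tasks = [
        {"name": "A", "deadline": 10.0, "remaining": 4.0},
        {"name": "B", "deadline": 6.0, "remaining": 2.0},
    ]
    print(pick_next(tasks, now=0.0)["name"])  # "A": rate 6/10 < 4/6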
algorithm Algorithms for the Evaluation of Ontologies for Extended Error Taxonomy and their Application on Large Ontologies By www.jucs.org Published On :: 2011-07-20T10:20:31+02:00 Ontology evaluation is an integral and important part of the ontology development process. Errors in ontologies can be catastrophic for the information systems built on those ontologies. In our experiments, existing ontology evaluation systems were unable to detect many errors defined in the error taxonomy, such as circularity errors in the class and property hierarchies, common classes and properties in disjoint decompositions, redundant subclasses and subproperties, redundant disjoint relations, and disjoint knowledge omission. We have formulated efficient algorithms for the evaluation of these and other errors in the extended error taxonomy. These algorithms are implemented in a system named OntEval, which we used to evaluate well-known ontologies including the Gene Ontology (GO), the WordNet ontology and OntoSem. The ontologies are indexed using a variant of the previously proposed Ontrel scheme. A number of errors and warnings in these ontologies have been discovered using OntEval. We also report the performance of our implementation, OntEval. Full Article
algorithm From the Drive-Based Society to the Society of the Algorithm By marc-vasseur.over-blog.com Published On :: Fri, 15 Mar 2019 07:04:57 +0100 The short twentieth century that the world experienced has been described by some historians as the age of extremes, culminating in wars and governments that decimated tens of millions of people, while at the same time... Full Article
algorithm Multi-agent Q-learning algorithm-based relay and jammer selection for physical layer security improvement By www.inderscience.com Published On :: 2024-10-07T23:20:50-05:00 Physical Layer Security (PLS) and relay technology have emerged as viable methods for enhancing the security of wireless networks. Relay technology adoption enhances the extent of coverage and enhances dependability. Moreover, it can improve the PLS. Choosing relay and jammer nodes from the group of intermediate nodes effectively mitigates the presence of powerful eavesdroppers. Current methods for Joint Relay and Jammer Selection (JRJS) address the optimisation problem of achieving near-optimal secrecy. However, most of these techniques are not scalable for large networks due to their computational cost. Secrecy will decrease if eavesdroppers are aware of the relay and jammer intermediary nodes because beamforming can be used to counter the jammer. Consequently, this study introduces a multi-agent Q-learning-based PLS-enhanced secured joint relay and jammer in dual-hop wireless cooperative networks, considering the existence of several eavesdroppers. The performance of the suggested algorithm is evaluated in comparison to the current algorithms for secure node selection. The simulation results verified the superiority of the proposed algorithm. Full Article
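For orientation, the tabular Q-learning update at the heart of such selection schemes is standard; below is a minimal single-agent sketch in which the action is a candidate (relay, jammer) pair. The reward model, the stateless simplification and all names are illustrative assumptions, not the paper's formulation.

    import random

    # Hedged sketch: bandit-style Q-learning over candidate (relay, jammer) pairs.
    # The secrecy reward below is a random stand-in; the paper defines its own.
    n_pairs = 6
    Q = [0.0] * n_pairs
    alpha, gamma, eps = 0.1, 0.9, 0.1

    def secrecy_reward(a):
        return random.gauss(a * 0.1, 0.05)   # placeholder secrecy-rate model

    for episode in range(1000):
        # Epsilon-greedy selection of a (relay, jammer) pair.
        a = random.randrange(n_pairs) if random.random() < eps \
            else max(range(n_pairs), key=Q.__getitem__)
        r = secrecy_reward(a)
        # Stateless update; the multi-agent version uses Q(s, a) and next states.
        Q[a] += alpha * (r + gamma * max(Q) - Q[a])

    print("best pair index:", max(range(n_pairs), key=Q.__getitem__))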
algorithm BEFA: bald eagle firefly algorithm enabled deep recurrent neural network-based food quality prediction using dairy products By www.inderscience.com Published On :: 2024-10-07T23:20:50-05:00 Food quality is defined as the collection of properties that differentiate individual units and influence the degree to which the food is acceptable to users or consumers. Owing to the nature of food, predicting its quality after specific periods of storage, or before use by consumers, is highly significant. However, accuracy is the major weakness of existing methods. Hence, this paper presents a BEFA_DRNN approach for accurate food quality prediction using dairy products. First, input data is fed to a data normalisation phase, performed with min-max normalisation. The normalised data then passes to a feature fusion phase, conducted using a DNN with the Canberra distance. The fused data is subjected to a data augmentation stage carried out with an oversampling technique. Finally, food quality prediction is performed, with milk graded by a DRNN. The DRNN is trained with the proposed BEFA, a combination of the bald eagle search (BES) and firefly (FA) algorithms. Additionally, BEFA_DRNN obtained maximum accuracy, TPR and TNR values of 93.6%, 92.5% and 90.7%. Full Article
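Two building blocks named in this pipeline, min-max normalisation and the Canberra distance, are standard and easy to state. A minimal sketch follows; the surrounding BEFA_DRNN model is not reproduced, and the sample data are invented.

    import numpy as np

    def min_max_normalise(X):
        # Scale each feature column into [0, 1]: (x - min) / (max - min).
        mn, mx = X.min(axis=0), X.max(axis=0)
        return (X - mn) / np.where(mx > mn, mx - mn, 1.0)

    def canberra(u, v):
        # Canberra distance: sum of |u_i - v_i| / (|u_i| + |v_i|), skipping 0/0 terms.
        denom = np.abs(u) + np.abs(v)
        mask = denom > 0
        return float(np.sum(np.abs(u - v)[mask] / denom[mask]))

    X = np.array([[4.2, 88.0], [3.9, 120.0], [4.6, 95.0]])  # stand-in milk features
    Xn = min_max_normalise(X)
    print(canberra(Xn[0], Xn[1]))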
algorithm Computer aided translation technology based on edge computing intelligent algorithm By www.inderscience.com Published On :: 2024-07-02T23:20:50-05:00 This paper studies computer-aided translation technology based on an edge computing intelligent algorithm. Within the K-means edge computing algorithm, it analyses the traditional approach of average resource allocation and the alternative of virtual machine allocation. During the online solution process, the method develops a more detailed view of the data at the edge and avoids connection dependencies between network users and the platform, which affect the internal operating efficiency of the system. The K-means algorithm divides the network user group into several distinct types and counts the various information resources according to their characteristics. Computer-aided translation technology can significantly improve translation quality, raise translation efficiency, and reduce translation cost. Full Article
algorithm Urban public space environment design based on intelligent algorithm and fuzzy control By www.inderscience.com Published On :: 2024-07-02T23:20:50-05:00 As urban construction develops, spatial evolution is influenced by behavioural actors such as enterprises and residents as well as by environmental factors, leading to decision-making behaviours that are not conducive to urban public space and environmental design. At the same time, some cities are vulnerable to factors such as distance, transportation, and human psychology during the construction of public areas, resulting in a decline in the quality of urban human settlements. Urban public space underpins urban life, so standardising the environment of urban public space is required to improve the quality of the urban living environment. The rapid development of intelligent algorithms and fuzzy control provides technical support for the environmental design of urban public spaces. Through intelligent-algorithm modelling and the construction of fuzzy space, it can meet the diverse… Full Article
algorithm Algorithm Visualization System for Teaching Spatial Data Algorithms By Published On :: Full Article
algorithm Android malware analysis using multiple machine learning algorithms By www.inderscience.com Published On :: 2024-10-07T23:20:50-05:00 Android is currently a booming technology that holds a major share of the market. However, because Android is an open-source operating system, its users are exposed to a variety of attacks, and malware is among the most common. Machine learning (ML) techniques have proven impressively effective for malware detection. In this paper, we focus on the analysis of malware attacks: we collected a dataset covering various types of malware and trained models with multiple ML and deep learning (DL) algorithms. We also gathered the prior knowledge related to malware, along with its limitations. The algorithms achieved varying accuracy levels, with a maximum observed accuracy of 99.68%. The study also shows which type of algorithm is preferable depending on the dataset. The knowledge from this paper may guide and act as a reference for future research related to malware detection. We intend to make use of Static Android Activity to analyse malware to mitigate security risks. Full Article
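As a hedged illustration of the kind of multi-algorithm comparison described, the sketch below trains several scikit-learn classifiers on a synthetic stand-in dataset; the paper's actual malware features and its 99.68% result are not reproduced here.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for extracted Android features (permissions, API calls, ...).
    X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "decision_tree": DecisionTreeClassifier(random_state=0),
        "random_forest": RandomForestClassifier(random_state=0),
        "knn": KNeighborsClassifier(),
        "svm": SVC(),
    }
    for name, model in models.items():
        model.fit(Xtr, ytr)
        print(name, round(model.score(Xte, yte), 4))  # per-algorithm accuracy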
algorithm Implementation of a novel technique for ordering of features algorithm in detection of ransomware attack By www.inderscience.com Published On :: 2024-10-07T23:20:50-05:00 Malware has become a persistent threat to our computer systems, and all electronic devices are highly vulnerable to threats of many kinds. Ransomware, a subset of malware, is used chiefly for large financial gains: the attacker demands a ransom in exchange for restored access to the system or data. When a dynamic, machine learning-based technique is used, selecting the correct set of features is critical for detecting a ransomware attack. In this paper, we present two novel algorithms for the detection of ransomware attacks. The first assigns timestamps to the features (API calls) for ordering; the second orders and ranks the features for early detection of a ransomware attack. Full Article
algorithm Identification of badminton players' swinging movements based on improved dense trajectory algorithm By www.inderscience.com Published On :: 2024-10-14T23:20:50-05:00 Badminton, a fast and highly technical sport, demands high accuracy in identifying athletes' swing movements; accurately identifying different swings is of great significance for technical analysis, coaching guidance, and game evaluation. To improve recognition accuracy, this paper builds on an improved dense trajectory algorithm, with features efficiently extracted and encoded. Results on the KTH, UCF Sports, and Hollywood2 datasets demonstrated that the improved algorithm achieved recognition accuracy of 94.2%, 88.2%, and 58.3%, respectively. Compared with traditional methods, the innovation of this research lies in its optimised feature extraction, efficient algorithm design, and accurate action recognition. These results provide new ideas for the research and application of badminton swing motion recognition. Full Article
algorithm Combination of Lv-3DCNN algorithm in random noise environment and its application in aerobic gymnastics action recognition By www.inderscience.com Published On :: 2024-10-14T23:20:50-05:00 Action recognition plays a vital role in analysing human body behaviour and has significant implications for research and education. However, traditional recognition methods often suffer from inaccurate temporal and spatial feature vectors. This study therefore addresses the inaccurate recognition of aerobic gymnastics action images and proposes a recognition model based on a visualised three-dimensional convolutional neural network. The model incorporates unsupervised visualisation methods into the traditional network and enhances data recognition capabilities through a random noise perturbation enhancement algorithm. The results indicate that the data augmented with noise perturbation achieves the lowest mean square error, reducing the error value from 0.3352 to 0.3095. Unsupervised visualisation analysis enables clearer recognition of human actions, and the model accurately recognises aerobic movements. Compared with traditional algorithms, the new algorithm exhibits higher recognition accuracy and superior performance. Full Article
algorithm Study on personalised recommendation method of English online learning resources based on improved collaborative filtering algorithm By www.inderscience.com Published On :: 2024-09-03T23:20:50-05:00 To improve recommendation coverage, a personalised recommendation method for English online learning resources based on an improved collaborative filtering algorithm is studied, enhancing the comprehensiveness of personalised resource recommendation. Matrix decomposition is used to factorise the user rating matrix of English online learning resources. The low-dimensional resources are clustered with an improved K-means algorithm. Based on the clustering results, backfill values for the resources are calculated and the low-dimensional resource information matrix is backfilled. An improved collaborative filtering algorithm then computes predicted scores for the learning resources, and resources are recommended to users on that basis. Experimental results show that this method effectively backfills English online learning resources, achieves excellent backfilling performance, and has a high recommendation coverage rate. Full Article
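A minimal sketch of the matrix-decomposition and score-prediction steps follows, using simple gradient updates on a toy rating matrix; the improved K-means backfilling and the paper's exact formulation are not reproduced, and all names are illustrative.

    import numpy as np

    # Stand-in user x resource rating matrix; 0 marks an unrated resource.
    R = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [0, 1, 5, 4]], dtype=float)
    mask = R > 0
    k, lr, reg = 2, 0.01, 0.1
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
    Q = rng.normal(scale=0.1, size=(R.shape[1], k))   # resource factors

    for _ in range(2000):                    # full-batch gradient updates
        E = mask * (R - P @ Q.T)             # error on observed ratings only
        P += lr * (E @ Q - reg * P)
        Q += lr * (E.T @ P - reg * Q)

    pred = P @ Q.T                           # predicted scores drive recommendations
    print(np.round(pred, 2))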
algorithm An evaluation of English distance information teaching quality based on decision tree classification algorithm By www.inderscience.com Published On :: 2024-07-04T23:20:50-05:00 To overcome the low evaluation accuracy and long evaluation time of traditional teaching quality evaluation methods, a method for evaluating the quality of informatised English distance teaching based on a decision tree classification algorithm is proposed. First, teaching quality evaluation indicators are constructed for different roles. Second, information gain theory from the decision tree classification algorithm is used to divide the attributes of teaching resources. Finally, rough set theory is used to calculate the index weights and establish the risk evaluation index factor set, and the teaching quality evaluation result is obtained through the fuzzy comprehensive evaluation method. Experimental results show that the method's evaluation accuracy reaches 99.2%, its recall reaches 99%, and its evaluation takes only 8.9 seconds. Full Article
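The information-gain criterion named here for attribute division is standard; below is a minimal sketch (the actual teaching-quality indicators are the paper's and are not shown).

    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(rows, labels, attr_index):
        # Gain = H(labels) - weighted entropy of subsets split by the attribute.
        total = entropy(labels)
        n = len(rows)
        by_value = {}
        for row, lab in zip(rows, labels):
            by_value.setdefault(row[attr_index], []).append(lab)
        remainder = sum(len(sub) / n * entropy(sub) for sub in by_value.values())
        return total - remainder

    rows = [["high"], ["high"], ["low"], ["low"]]   # one toy attribute
    labels = ["good", "poor", "good", "good"]
    print(information_gain(rows, labels, 0))        # ~0.311 bits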
algorithm Quantitative evaluation method of ideological and political teaching achievements based on collaborative filtering algorithm By www.inderscience.com Published On :: 2024-07-04T23:20:50-05:00 To overcome the large error, low accuracy and long run time of traditional methods for evaluating ideological and political education, this paper designs a quantitative evaluation method for ideological and political teaching achievements based on a collaborative filtering algorithm. First, the evaluation index system is constructed and the teaching achievement indicator data is partitioned at a small scale; then, the quantised dataset is determined and the quantised index weights are calculated; finally, the collaborative filtering algorithm is used to generate a high-similarity set, build a target index recommendation list, and construct and solve a quantitative evaluation function, completing the quantitative evaluation of teaching achievements. The results show that the evaluation error of this method is only 1.75%, its accuracy reaches 98%, and it takes only 2.0 s, demonstrating an improved quantitative evaluation effect. Full Article
algorithm Research on evaluation method of e-commerce platform customer relationship based on decision tree algorithm By www.inderscience.com Published On :: 2024-07-04T23:20:50-05:00 To overcome the poor accuracy and long run time of traditional customer relationship evaluation methods, this study proposes a new evaluation method for e-commerce platform customer relationships based on a decision tree algorithm. First, the connotation and characteristics of customer relationships are analysed. Second, the importance of customer relationships on the e-commerce platform is determined with a decision tree algorithm, selecting and dividing attributes according to information gain results. Finally, a decision tree classifier is designed, weighted sampling is used to obtain training samples for the base classifier, and the multi-period excess income method is used to construct the customer relationship evaluation function. Experimental results show that the accuracy of this method's customer relationship evaluation reaches 99.8%, with an evaluation time of only 51 minutes. Full Article
algorithm Online allocation of teaching resources for ideological and political courses in colleges and universities based on differential search algorithm By www.inderscience.com Published On :: 2024-07-04T23:20:50-05:00 To improve the classification and online allocation accuracy of teaching resources and shorten allocation time, this paper proposes a new online allocation method for the teaching resources of college ideological and political courses based on the differential search algorithm. First, a feedback parameter model for cleaning teaching resources is constructed and the resources are cleaned. Second, taking anti-interference considerations into account, linear features of the course teaching resources are extracted. Finally, an objective function for online resource allocation is constructed, and the differential search algorithm optimises this objective to complete the online allocation. Experimental results show that the method accurately classifies the teaching resources of ideological and political courses and shortens allocation time, with allocation accuracy of up to 97%. Full Article
algorithm Performance improvement in inventory classification using the expectation-maximisation algorithm By www.inderscience.com Published On :: 2024-10-29T23:20:50-05:00 Multi-criteria inventory classification (MCIC) is widely used to help managers categorise inventory. Researchers have applied numerous mathematical models and approaches, but few have resorted to unsupervised machine-learning techniques for MCIC. This study uses the expectation-maximisation (EM) algorithm to estimate the parameters of the Gaussian mixture model (GMM), a popular unsupervised machine learning method, for ABC inventory classification. The EM-GMM algorithm is sensitive to initialisation, which in turn affects the results; to address this, two initialisation procedures are proposed for the EM-GMM algorithm. Inventory classification outcomes from 14 existing MCIC models are given as inputs to study the significance of the two proposed initialisation procedures. The effectiveness of these procedures across the various inputs is analysed against inventory management performance measures, i.e., fill rate, total relevant cost, and inventory turnover ratio. Full Article
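A minimal sketch of the core step follows: fitting a three-component GMM with EM and reading the components as A/B/C classes. scikit-learn's GaussianMixture runs EM internally; the two proposed initialisation procedures are not reproduced, and the scores are invented stand-ins.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Stand-in item scores from an MCIC model (e.g., weighted criteria scores).
    scores = np.concatenate([rng.normal(0.9, 0.05, 20),    # A-like items
                             rng.normal(0.6, 0.05, 60),    # B-like items
                             rng.normal(0.3, 0.05, 120)])  # C-like items
    X = scores.reshape(-1, 1)

    gmm = GaussianMixture(n_components=3, random_state=0).fit(X)  # EM inside
    labels = gmm.predict(X)
    # Map components to A/B/C by descending component mean.
    order = np.argsort(-gmm.means_.ravel())
    classes = np.array(["A", "B", "C"])[np.argsort(order)][labels]
    print(dict(zip(*np.unique(classes, return_counts=True))))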
algorithm A Memory Optimized Public-Key Crypto Algorithm Using Modified Modular Exponentiation (MME) By Published On :: Full Article
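The title gives no details of the MME scheme; for orientation, below is the standard square-and-multiply modular exponentiation that such memory-optimised public-key schemes build on, which keeps only two residues in memory regardless of exponent size.

    def mod_exp(base, exponent, modulus):
        # Right-to-left binary (square-and-multiply) modular exponentiation.
        result = 1
        base %= modulus
        while exponent > 0:
            if exponent & 1:                 # current exponent bit is 1
                result = (result * base) % modulus
            base = (base * base) % modulus   # square for the next bit
            exponent >>= 1
        return result

    print(mod_exp(7, 560, 561))   # 1; matches Python's built-in pow(7, 560, 561)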
algorithm Novel Phonetic Name Matching Algorithm with a Statistical Ontology for Analysing Names Given in Accordance with Thai Astrology By Published On :: Full Article
algorithm Effectiveness of Combining Algorithm and Program Animation: A Case Study with Data Structure Course By Published On :: Full Article
algorithm Usability and Pedagogical Assessment of an Algorithm Learning Tool: A Case Study for an Introductory Programming Course for High School By Published On :: 2015-06-03 An algorithm learning tool was developed for an introductory computer science class in a specialized science and technology high school in Japan. The tool presents lessons and simple visualizations that aim to facilitate the teaching and learning of fundamental algorithms. Written tests and an evaluation questionnaire were designed and implemented along with the learning tool among the participants. The tool's effect on the learning performance of the students was examined, and the differences between the two types of visualizations offered by the tool, one with more input and control options and the other with fewer, were analyzed. Based on the evaluation questionnaire, the scales with which the tool can be assessed for usability and pedagogical effectiveness were identified. After using the algorithm learning tool, the students' posttest scores increased, and those who used the visualization with more input and control options scored higher than those who used the one with limited options. The learning objectives used to evaluate the tool correlated with the students' test performance. Properties comprising learning objectives, algorithm visualization characteristics, and interface assessment are proposed for evaluating an algorithm learning tool for novice learners. Full Article
algorithm A New Typology Design of Performance Metrics to Measure Errors in Machine Learning Regression Algorithms By Published On :: 2019-01-24 Aim/Purpose: The aim of this study was to analyze various performance metrics and approaches to their classification. The main goal was to develop a new typology that will help to advance knowledge of metrics and facilitate their use in machine learning regression algorithms Background: Performance metrics (error measures) are vital components of the evaluation frameworks in various fields. A performance metric can be defined as a logical and mathematical construct designed to measure how close the actual results are to what has been expected or predicted. A vast variety of performance metrics have been described in academic literature. The metrics most commonly mentioned in research studies are Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), etc. Knowledge about metrics' properties needs to be systematized to simplify their design and use. Methodology: A qualitative study was conducted, identifying related peer-reviewed research studies and literature reviews and applying critical thinking and inductive reasoning. Contribution: The main contribution of this paper is in ordering knowledge of performance metrics and enhancing understanding of their structure and properties by proposing a new typology, a generic mathematical formula for primary metrics, and a visualization chart Findings: Based on the analysis of the structure of numerous performance metrics, we proposed a framework of metrics which includes four (4) categories: primary metrics, extended metrics, composite metrics, and hybrid sets of metrics. The paper identified three (3) key components (dimensions) that determine the structure and properties of primary metrics: the method of determining point distance, the method of normalization, and the method of aggregating point distances over a data set. For each component, implementation options have been identified. The suggested new typology has been shown to cover over 40 commonly used primary metrics Recommendations for Practitioners: The presented findings can be used to facilitate teaching performance metrics to university students and to expedite metrics selection and implementation processes for practitioners Recommendation for Researchers: By using the proposed typology, researchers can streamline development of new metrics with predetermined properties Impact on Society: The outcomes of this study could be used for improving evaluation results in machine learning regression, forecasting and prognostics, with direct or indirect positive impacts on innovation and productivity in a societal sense Future Research: Future research is needed to examine the properties of the extended metrics, composite metrics, and hybrid sets of metrics. Empirical study of the metrics is needed, using R Studio or Azure Machine Learning Studio, to find associations between the properties of primary metrics and their "numerical" behavior in a wide spectrum of data characteristics and business or research requirements Full Article
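The three identified components compose directly: a primary metric aggregates per-point distances over the data set (normalisation is taken as the identity in this sketch). The snippet below rebuilds MAE and RMSE from those parts; the function names are illustrative, not the paper's notation.

    import math

    def primary_metric(actual, predicted, distance, aggregate):
        # Primary metric = aggregation of per-point distances over the data set.
        return aggregate([distance(a, p) for a, p in zip(actual, predicted)])

    abs_err = lambda a, p: abs(a - p)             # point distance: absolute error
    sq_err = lambda a, p: (a - p) ** 2            # point distance: squared error
    mean = lambda ds: sum(ds) / len(ds)           # aggregation: arithmetic mean
    root_mean = lambda ds: math.sqrt(mean(ds))    # aggregation: root of the mean

    actual = [3.0, 5.0, 2.5, 7.0]
    predicted = [2.5, 5.0, 4.0, 8.0]
    print("MAE :", primary_metric(actual, predicted, abs_err, mean))       # 0.75
    print("RMSE:", primary_metric(actual, predicted, sq_err, root_mean))   # ~0.935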
algorithm Improving Webpage Access Predictions Based on Sequence Prediction and PageRank Algorithm By Published On :: 2019-01-20 Aim/Purpose: In this article, we provide a better solution to webpage access prediction. In particular, our core proposed approach increases accuracy and efficiency by reducing the sequence space through the integration of PageRank into CPT+. Background: The problem of predicting the next page on a web site has become significant because of the non-stop growth of the Internet in terms of the volume of contents and the mass of users. Webpage prediction is complex because multiple kinds of information must be considered, such as the webpage name, the contents of the webpage, the user profile, the time between webpage visits, differences among users, and the time spent on a page or on each part of the page. Therefore, webpage access prediction draws substantial effort from the web mining research community in order to obtain valuable information and improve the user experience. Methodology: CPT+ is a complex prediction algorithm that offers dramatically more accurate predictions than other state-of-the-art models. Integrating the importance of every particular page on a website (i.e., its PageRank) with regard to its associations with other pages into the CPT+ model can improve the performance of the existing model. Contribution: In this paper, we propose an approach that reduces the prediction space while improving accuracy by combining the CPT+ and PageRank algorithms. Experimental results on several real datasets indicate that the space is reduced by 15% to 30%; as a result, the run time is shorter. Furthermore, the prediction accuracy is improved, so researchers can conveniently continue to use CPT+ for webpage access prediction. Findings: Our experimental results indicate that the PageRank algorithm is a good solution for improving CPT+ prediction. Approximately 15% to 30% of redundant data is removed from the datasets while accuracy improves. Recommendations for Practitioners: The results of the article could be used in developing relevant applications such as webpage and product recommendation systems. Recommendation for Researchers: The paper provides a prediction model that integrates the CPT+ and PageRank algorithms to tackle the problems of complexity and accuracy. The model has been tested against several real datasets to show its performance. Impact on Society: Given an improved model for predicting webpage access, used in fields such as e-learning, product recommendation, link prediction, and user behavior prediction, society can enjoy a better experience and a more efficient environment while surfing the Web. Future Research: We intend to further improve the accuracy of webpage access prediction by combining CPT+ with other algorithms. Full Article
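The PageRank scores folded into CPT+ come from the standard power-iteration recurrence; below is a minimal sketch on a toy page graph (the CPT+ integration itself is not reproduced).

    import numpy as np

    def pagerank(adj, damping=0.85, iters=100):
        # adj[i][j] = 1 if page i links to page j.
        n = adj.shape[0]
        out = adj.sum(axis=1, keepdims=True)
        out[out == 0] = 1                       # guard against dangling pages
        M = adj / out                           # row-stochastic transition matrix
        r = np.full(n, 1.0 / n)
        for _ in range(iters):
            r = (1 - damping) / n + damping * (M.T @ r)
        return r

    adj = np.array([[0, 1, 1],
                    [1, 0, 0],
                    [0, 1, 0]], dtype=float)
    print(np.round(pagerank(adj), 3))           # importance weight per page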
algorithm IDCUP Algorithm to Classifying Arbitrary Shapes and Densities for Center-based Clustering Performance Analysis By Published On :: 2020-05-04 Aim/Purpose: Clustering techniques are normally used to determine the significant and meaningful subclasses in datasets. Clustering is an unsupervised type of machine learning (ML) in which the objective is to form groups of objects based on their similarity, and it is used to determine the implicit relationships between the different features of the data. Cluster analysis is considered a significant problem area in data exploration when dealing with arbitrary-shape problems in different datasets. Clustering on large data sets faces the following challenges: (1) clusters with arbitrary shapes; (2) limited knowledge discovery to decide the possible input features; (3) scalability to large data sizes. Density-based clustering has been known as a dominant method for determining arbitrary-shape clusters. Background: Existing density-based clustering methods commonly cited in the literature have been examined in terms of their behavior with data sets that contain nested clusters of varying density. The existing methods are not adequate for such data sets, because they typically partition the data into clusters that cannot be nested. Methodology: A density-based approach to traditional center-based clustering is introduced that assigns a weight to each cluster. The weights are then utilized in calculating the distances from data vectors to centroids by multiplying the distance by the centroid weight. Contribution: In this paper, we have examined different density-based clustering methods for data sets with nested clusters of varying density. Two such data sets were used to evaluate some of the commonly cited algorithms found in the literature. Nested clusters were found to be challenging for the existing algorithms: in most cases, the targeted algorithms either did not detect the largest clusters or simply divided large clusters into non-overlapping regions. It may, however, be possible to detect all clusters by doing multiple runs of an algorithm with different inputs and then combining the results. This work considered three challenges of clustering methods. Findings: As a result, a centroid with a low weight will attract objects from further away than a centroid with a higher weight. This allows dense clusters inside larger clusters to be recognized. The methods are tested experimentally using the K-means, DBSCAN, TURN*, and IDCUP algorithms. The experimental results with different data sets showed that IDCUP is more robust and produces better clusters than DBSCAN, TURN*, and K-means. Finally, we compared K-means, DBSCAN, TURN*, and IDCUP on arbitrary-shape problems across different datasets, where IDCUP shows better scalability than TURN*. Future Research: As future work, we plan to explore the remaining challenges of the knowledge discovery process in clustering, along with more complex data sets, in greater depth. A hybrid approach based on density-based and model-based clustering algorithms should be compared to achieve maximum performance accuracy and avoid arbitrary-shape problems, including optimization. It is anticipated that such a process will attain improved performance with comparable precision in the identification of cluster shapes. Full Article
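The core modification is stated plainly: point-to-centroid distances are multiplied by a per-cluster weight, so a low-weight centroid attracts points from further away. A minimal sketch of that assignment step follows; the weight values and update rule here are assumptions, not the paper's procedure.

    import numpy as np

    def weighted_assign(X, centroids, weights):
        # Distance to each centroid is scaled by that cluster's weight, letting
        # a low-weight (sparse) centroid claim points from further away.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        return np.argmin(d * weights[None, :], axis=1)

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)),      # dense inner cluster
                   rng.normal(0, 2.0, (50, 2))])     # sparse surrounding cluster
    centroids = np.array([[0.0, 0.0], [1.5, 1.5]])
    weights = np.array([1.0, 0.4])                   # assumed: lower = sparser
    print(np.bincount(weighted_assign(X, centroids, weights)))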
algorithm Unveiling the Secrets of Big Data Projects: Harnessing Machine Learning Algorithms and Maturity Domains to Predict Success By Published On :: 2024-08-19 Aim/Purpose: While existing literature has extensively explored factors influencing the success of big data projects and proposed big data maturity models, no study has harnessed machine learning to predict project success and identify the critical features contributing significantly to that success. The purpose of this paper is to offer fresh insights into the realm of big data projects by leveraging machine-learning algorithms. Background: Previously, we introduced the Global Big Data Maturity Model (GBDMM), which encompassed various domains inspired by the success factors of big data projects. In this paper, we transformed these maturity domains into a survey and collected feedback from 90 big data experts across the Middle East, Gulf, Africa, and Turkey regions regarding their own projects. This approach aims to gather firsthand insights from practitioners and experts in the field. Methodology: To analyze the feedback obtained from the survey, we applied several algorithms suitable for small datasets and categorical features. Our approach included cross-validation and feature selection techniques to mitigate overfitting and enhance model performance. Notably, the best-performing algorithms in our study were the Decision Tree (achieving an F1 score of 67%) and the CatBoost classifier (also achieving an F1 score of 67%). Contribution: This research makes a significant contribution to the field of big data projects. By utilizing machine-learning techniques, we predict the success or failure of such projects and identify the key features that significantly contribute to their success. This provides companies with a valuable model for predicting their own big data project outcomes. Findings: Our analysis revealed that the domains of strategy and data have the most influential impact on the success of big data projects. Therefore, companies should prioritize these domains when undertaking such projects. Furthermore, we now have an initial model capable of predicting project success or failure, which can be invaluable for companies. Recommendations for Practitioners: Based on our findings, we recommend that practitioners concentrate on developing robust strategies and prioritize data management to enhance the outcomes of their big data projects. Additionally, practitioners can leverage machine-learning techniques to predict the success rate of these projects. Recommendation for Researchers: For further research in this field, we suggest exploring additional algorithms and techniques and refining existing models to enhance the accuracy and reliability of predicting the success of big data projects. Researchers may also further investigate the interplay between strategy, data, and the success of such projects. Impact on Society: By improving the success rate of big data projects, our findings enable organizations to create more efficient and impactful data-driven solutions across various sectors. This, in turn, facilitates informed decision-making, effective resource allocation, improved operational efficiency, and overall performance enhancement. Future Research: In the future, gathering additional feedback from a broader range of big data experts will be valuable and help refine the prediction algorithm. Conducting longitudinal studies to analyze the long-term success and outcomes of big data projects would be beneficial. Furthermore, exploring the applicability of our model across different regions and industries will provide further insights into the field. Full Article
algorithm Analysis of Machine-Based Learning Algorithm Used in Named Entity Recognition By Published On :: 2023-03-12 Aim/Purpose: The amount of information published has increased dramatically due to the information explosion. The challenge of managing information as it expands at this rate lies in developing information extraction technology that can turn unstructured data into organized data that is understandable and controllable by computers. Background: The primary goal of named entity recognition (NER) is to extract named entities from amorphous materials and place them in pre-defined semantic classes. Methodology: In our work, we analyze various machine learning algorithms and implement K-NN, which has been widely used in machine learning and remains one of the most popular methods for classifying data. Contribution: To the researchers' best knowledge, no published study has presented named entity recognition for the Kikuyu language using a machine learning algorithm. This research fills this gap by recognizing entities in the Kikuyu language. Findings: An evaluation was done by testing precision, recall, and F-measure. The experimental results demonstrate that using K-NN is effective in classification performance. Recommendation for Researchers: With enough training data, researchers could perform an experiment and check the learning curve with accuracy that compares to state-of-the-art NER. Future Research: Future studies may be done using unsupervised and semi-supervised learning algorithms for other resource-scarce languages. Full Article
algorithm Multi-Focus Image Fusion Algorithm Based on Multi-Task Learning and PS-ViT By search.ieice.org Published On :: Qinghua WU, Weitong LI, Vol.E107-D, No.11, pp.1422-1432 Multi-focus image fusion involves combining partially focused images of the same scene to create an all-in-focus image. To address two problems with existing multi-focus image fusion algorithms, namely that a benchmark image is difficult to obtain and that convolutional neural networks focus too heavily on local regions, a fusion algorithm that combines local and global feature encoding is proposed. Initially, we devise two self-supervised image reconstruction tasks and train an encoder-decoder network through multi-task learning. Subsequently, within the encoder, we merge the dense connection module with the PS-ViT module, enabling the network to utilize both local and global information during feature extraction. Finally, to enhance the overall efficiency of the model, distinct loss functions are applied to each task. To preserve the stronger features from the original images, spatial frequency is employed during the fusion stage to obtain the feature map of the fused image. Experimental results demonstrate that, in comparison to twelve other prominent algorithms, our method exhibits good fusion performance in objective evaluation: ten of the twelve selected evaluation metrics show an improvement of more than 0.28%. It also presents superior visual effects subjectively. Publication Date: 2024/11/01 Full Article
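Spatial frequency, used here in the fusion stage to pick the sharper source, is a standard focus measure; below is a minimal sketch (the PS-ViT network itself is not reproduced, and the patches are invented stand-ins).

    import numpy as np

    def spatial_frequency(img):
        # SF = sqrt(RF^2 + CF^2), where RF/CF are RMS row/column gradients.
        img = img.astype(float)
        rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
        cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
        return np.hypot(rf, cf)

    rng = np.random.default_rng(0)
    sharp = rng.integers(0, 256, (64, 64))        # high-detail stand-in patch
    blurred = np.full((64, 64), 128)              # featureless stand-in patch
    # The patch with the higher SF would be taken from the better-focused image.
    print(spatial_frequency(sharp) > spatial_frequency(blurred))  # True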
algorithm Derivation and validation of an algorithm to predict transitions from community to residential long-term care among persons with dementia—A retrospective cohort study By ifp.nyu.edu Published On :: Sun, 03 Nov 2024 14:29:36 +0000 The post Derivation and validation of an algorithm to predict transitions from community to residential long-term care among persons with dementia—A retrospective cohort study was curated by information for practice. Full Article Open Access Journal Articles
algorithm Negative performance feedback from algorithms or humans? effect of medical researchers’ algorithm aversion on scientific misconduct By ifp.nyu.edu Published On :: Mon, 04 Nov 2024 00:59:43 +0000 Institutions are increasingly employing algorithms to provide performance feedback to individuals by tracking productivity, conducting performance appraisals, and developing improvement plans, compared to trad… Read the full article › The post Negative performance feedback from algorithms or humans? effect of medical researchers’ algorithm aversion on scientific misconduct was curated by information for practice. Full Article Open Access Journal Articles
algorithm SIAN Design Generates Modern Fine Jewelry Using Algorithms By design-milk.com Published On :: Mon, 05 Aug 2024 16:00:27 +0000 Architects Antonia Frey-Vorhammer and Simon Vorhammer, of SIAN, design jewelry that mirrors their highly intricate + geometric projects. Full Article Main Style + Fashion Technology Antonia Frey-Vorhammer jewelry jewelry designer modern jewelry rings Sian Design Simon Vorhammer
algorithm Modelling dynamical 3D electron diffraction intensities. I. A scattering cluster algorithm By journals.iucr.org Published On :: 2024-01-25 Three-dimensional electron diffraction (3D-ED) is a powerful technique for crystallographic characterization of nanometre-sized crystals that are too small for X-ray diffraction. For accurate crystal structure refinement, however, it is important that the Bragg diffracted intensities are treated dynamically. Bloch wave simulations are often used in 3D-ED, but can be computationally expensive for large unit cell crystals due to the large number of diffracted beams. Proposed here is an alternative method, the `scattering cluster algorithm' (SCA), that replaces the eigen-decomposition operation in Bloch waves with a simpler matrix multiplication. The underlying principle of SCA is that the intensity of a given Bragg reflection is largely determined by intensity transfer (i.e. `scattering') from a cluster of neighbouring diffracted beams. However, the penalty for using matrix multiplication is that the sample must be divided into a series of thin slices and the diffracted beams calculated iteratively, similar to the multislice approach. Therefore, SCA is more suitable for thin specimens. The accuracy and speed of SCA are demonstrated on tri-isopropyl silane (TIPS) pentacene and rubrene, two exemplar organic materials with large unit cells. Full Article text
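The essence of the trade-off is replacing the eigen-decomposition of Bloch wave theory with repeated matrix multiplication over thin slices. Below is a schematic sketch of that iteration; the per-slice scattering matrix here is a unitary random stand-in, not a physical model built from structure factors and excitation errors.

    import numpy as np

    rng = np.random.default_rng(0)
    n_beams, n_slices = 50, 200
    # Unitary stand-in for the per-slice scattering matrix coupling beams;
    # unitarity keeps total intensity conserved in this toy model.
    A = rng.standard_normal((n_beams, n_beams)) + 1j * rng.standard_normal((n_beams, n_beams))
    S, _ = np.linalg.qr(np.eye(n_beams) + 0.05 * (A - A.conj().T))

    psi = np.zeros(n_beams, dtype=complex)
    psi[0] = 1.0                          # incident beam only at the entrance face
    for _ in range(n_slices):             # iterate thin slices through the crystal
        psi = S @ psi                     # matrix multiply replaces the eigen-solve
    intensities = np.abs(psi) ** 2        # dynamical beam intensities at the exit face
    print(intensities[:5])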
algorithm Protocol using similarity score and improved shrink-wrap algorithm for better convergence of phase-retrieval calculation in X-ray diffraction imaging By journals.iucr.org Published On :: 2024-01-01 In X-ray diffraction imaging (XDI), electron density maps of a targeted particle are reconstructed computationally from the diffraction pattern alone using phase-retrieval (PR) algorithms. However, the PR calculations sometimes fail to yield realistic electron density maps that approximate the structure of the particle. This occurs due to the absence of structure amplitudes at and near the zero-scattering angle and the presence of Poisson noise in weak diffraction patterns. Consequently, the PR calculation becomes a bottleneck for XDI structure analyses. Here, a protocol to efficiently yield realistic maps is proposed. The protocol is based on the empirical observation that realistic maps tend to yield low similarity scores, as suggested in our prior study [Sekiguchi et al. (2017), J. Synchrotron Rad. 24, 1024–1038]. Among independently and concurrently executed PR calculations, the protocol modifies all maps using the electron-density maps exhibiting low similarity scores. This approach, along with a new protocol for estimating particle shape, improved the probability of obtaining realistic maps for diffraction patterns from various aggregates of colloidal gold particles, as compared with PR calculations performed without the protocol. Consequently, the protocol has the potential to reduce computational costs in PR calculations and enable efficient XDI structure analysis of non-crystalline particles using synchrotron X-rays and X-ray free-electron laser pulses. Full Article text
algorithm Optimization of synchrotron radiation parameters using swarm intelligence and evolutionary algorithms By journals.iucr.org Published On :: 2024-02-22 Alignment of each optical element at a synchrotron beamline takes days, even weeks, for each experiment, costing valuable beam time. Evolutionary algorithms (EAs), efficient heuristic search methods based on Darwinian evolution, can be utilized for multi-objective optimization problems in different application areas. In this study, the flux and spot size of a synchrotron beam are optimized for two different experimental setups including optical elements such as lenses and mirrors. Calculations were carried out with the X-ray Tracer beamline simulator using swarm intelligence (SI) algorithms and, for comparison, the same setups were optimized with EAs. The EAs and SI algorithms used in this study for the two experimental setups are the Genetic Algorithm (GA), Non-dominated Sorting Genetic Algorithm II (NSGA-II), Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC). While one of the algorithms optimizes the lens position, the other focuses on optimizing the focal distances of Kirkpatrick–Baez mirrors. First, mono-objective evolutionary algorithms were used and the spot size or flux values checked separately. After comparison of the mono-objective algorithms, the multi-objective evolutionary algorithm NSGA-II was run for both objectives, minimum spot size and maximum flux. Every algorithm configuration was run several times for Monte Carlo simulations, since these processes generate random solutions and the simulator also produces stochastic solutions. The results show that the PSO algorithm gives the best values over all setups. Full Article
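Since PSO performed best overall, a minimal sketch of a PSO loop follows, minimising a stand-in beamline objective (e.g., spot size as a function of two focal distances). The objective and all parameters are illustrative; the X-ray Tracer interface is not reproduced.

    import numpy as np

    def objective(x):
        # Stand-in for a simulated spot size given two focal distances.
        return (x[0] - 1.2) ** 2 + (x[1] - 0.7) ** 2

    rng = np.random.default_rng(0)
    n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
    pos = rng.uniform(0, 2, (n, dim))
    vel = np.zeros((n, dim))
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(100):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # Velocity blends inertia, personal best pull, and global best pull.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print(np.round(gbest, 3))   # converges near the optimum (1.2, 0.7)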
algorithm Investigation of fast and efficient lossless compression algorithms for macromolecular crystallography experiments By journals.iucr.org Published On :: 2024-06-05 Structural biology experiments benefit significantly from state-of-the-art synchrotron data collection. One can acquire macromolecular crystallography (MX) diffraction data on large-area photon-counting pixel-array detectors at framing rates exceeding 1000 frames per second, using 200 Gbps network connectivity, or higher when available. In extreme cases this represents a raw data throughput of about 25 GB s−1, which is nearly impossible to deliver at reasonable cost without compression. Our field has used lossless compression for decades to make such data collection manageable. Many MX beamlines are now fitted with DECTRIS Eiger detectors, all of which are delivered with optimized compression algorithms by default, and they perform well with current framing rates and typical diffraction data. However, better lossless compression algorithms have been developed and are now available to the research community. Here one of the latest and most promising lossless compression algorithms is investigated on a variety of diffraction data like those routinely acquired at state-of-the-art MX beamlines. Full Article text
algorithm Analysis of crystallographic phase retrieval using iterative projection algorithms By journals.iucr.org Published On :: 2024-10-23 For protein crystals in which more than two thirds of the volume is occupied by solvent, the featureless nature of the solvent region often generates a constraint that is powerful enough to allow direct phasing of X-ray diffraction data. Practical implementation relies on the use of iterative projection algorithms with good global convergence properties to solve the difficult nonconvex phase-retrieval problem. In this paper, some aspects of phase retrieval using iterative projection algorithms are systematically explored, where the diffraction data and density-value distributions in the protein and solvent regions provide the sole constraints. The analysis is based on the addition of random error to the phases of previously determined protein crystal structures, followed by evaluation of the ability to recover the correct phase set as the distance from the solution increases. The properties of the difference-map (DM), relaxed–reflect–reflect (RRR) and relaxed averaged alternating reflectors (RAAR) algorithms are compared. All of these algorithms prove to be effective for crystallographic phase retrieval, and the useful ranges of the adjustable parameter which controls their behavior are established. When these algorithms converge to the solution, the algorithm trajectory becomes stationary; however, the density function continues to fluctuate significantly around its mean position. It is shown that averaging over the algorithm trajectory in the stationary region, following convergence, improves the density estimate, with this procedure outperforming previous approaches for phase or density refinement. Full Article text
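For orientation, the RAAR recurrence under study has the compact form x <- (beta/2)(R_S R_M + I)x + (1 - beta)P_M x, with reflectors R = 2P - I. Below is a schematic 1D toy sketch with support and modulus projectors; solvent flattening, real-data handling and the trajectory-averaging protocol are omitted, and all names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    true = np.zeros(n); true[:16] = rng.random(16)     # signal with known support
    amplitudes = np.abs(np.fft.fft(true))              # measured moduli only

    def P_M(x):   # modulus projector: keep phases, impose measured amplitudes
        F = np.fft.fft(x)
        return np.real(np.fft.ifft(amplitudes * np.exp(1j * np.angle(F))))

    def P_S(x):   # support projector: zero outside the known support
        y = np.zeros_like(x); y[:16] = x[:16]; return y

    beta = 0.9
    x = rng.random(n)
    for _ in range(500):
        # RAAR: x <- (beta/2) * (R_S(R_M(x)) + x) + (1 - beta) * P_M(x)
        rm = 2 * P_M(x) - x
        x = 0.5 * beta * ((2 * P_S(rm) - rm) + x) + (1 - beta) * P_M(x)

    err = np.linalg.norm(np.abs(np.fft.fft(P_S(P_M(x)))) - amplitudes)
    print(err)   # small when the iterate satisfies both constraints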
algorithm A predicted model-aided reconstruction algorithm for X-ray free-electron laser single-particle imaging By journals.iucr.org Published On :: 2024-06-21 Ultra-intense, ultra-fast X-ray free-electron lasers (XFELs) enable the imaging of single protein molecules under ambient temperature and pressure. A crucial aspect of structure reconstruction involves determining the relative orientations of each diffraction pattern and recovering the missing phase information. In this paper, we introduce a predicted model-aided algorithm for orientation determination and phase retrieval, which has been tested on various simulated datasets and has shown significant improvements in the success rate, accuracy and efficiency of XFEL data reconstruction. Full Article text
algorithm A modified phase-retrieval algorithm to facilitate automatic de novo macromolecular structure determination in single-wavelength anomalous diffraction By journals.iucr.org Published On :: 2024-06-21 The success of experimental phasing in macromolecular crystallography relies primarily on the accurate locations of heavy atoms bound to the target crystal. To improve the process of substructure determination, a modified phase-retrieval algorithm built on the framework of the relaxed alternating averaged reflection (RAAR) algorithm has been developed. Importantly, the proposed algorithm features a combination of the π-half phase perturbation for weak reflections and enforces the direct-method-based tangent formula for strong reflections in reciprocal space. The proposed algorithm is extensively demonstrated on a total of 100 single-wavelength anomalous diffraction (SAD) experimental datasets, comprising both protein and nucleic acid structures of different qualities. Compared with the standard RAAR algorithm, the modified phase-retrieval algorithm exhibits significantly improved effectiveness and accuracy in SAD substructure determination, highlighting the importance of additional constraints for algorithmic performance. Furthermore, the proposed algorithm can be performed without human intervention under most conditions owing to the self-adaptive property of the input parameters, thus making it convenient to be integrated into the structural determination pipeline. In conjunction with the IPCAS software suite, we demonstrated experimentally that automatic de novo structure determination is possible on the basis of our proposed algorithm. Full Article text
algorithm A predicted model-aided one-step classification–multireconstruction algorithm for X-ray free-electron laser single-particle imaging By journals.iucr.org Published On :: 2024-08-28 Ultrafast, high-intensity X-ray free-electron lasers can perform diffraction imaging of single protein molecules. Various algorithms have been developed to determine the orientation of each single-particle diffraction pattern and reconstruct the 3D diffraction intensity. Most of these algorithms rely on the premise that all diffraction patterns originate from identical protein molecules. However, in actual experiments, diffraction patterns from multiple different molecules may be collected simultaneously. Here, we propose a predicted model-aided one-step classification–multireconstruction algorithm that can handle mixed diffraction patterns from various molecules. The algorithm uses predicted structures of different protein molecules as templates to classify diffraction patterns based on correlation coefficients and determines orientations using a correlation maximization method. Tests on simulated data demonstrated high accuracy and efficiency in classification and reconstruction. Full Article text
algorithm Review and experimental comparison of speckle-tracking algorithms for X-ray phase contrast imaging By journals.iucr.org Published On :: This review focuses on low-dose near-field X-ray speckle phase imaging in the differential mode, introducing the existing algorithms with their specifications and comparing their performances under various experimental conditions. Full Article
algorithm TORO Indexer: a PyTorch-based indexing algorithm for kilohertz serial crystallography By journals.iucr.org Published On :: 2024-06-18 Serial crystallography (SX) involves combining observations from a very large number of diffraction patterns coming from crystals in random orientations. To compile a complete data set, these patterns must be indexed (i.e. their orientation determined), integrated and merged. Introduced here is TORO (Torch-powered robust optimization) Indexer, a robust and adaptable indexing algorithm developed using the PyTorch framework. TORO is capable of operating on graphics processing units (GPUs), central processing units (CPUs) and other hardware accelerators supported by PyTorch, ensuring compatibility with a wide variety of computational setups. In tests, TORO outpaces existing solutions, indexing thousands of frames per second when running on GPUs, which positions it as an attractive candidate to produce real-time indexing and user feedback. The algorithm streamlines some of the ideas introduced by previous indexers like DIALS real-space grid search [Gildea, Waterman, Parkhurst, Axford, Sutton, Stuart, Sauter, Evans & Winter (2014). Acta Cryst. D70, 2652–2666] and XGandalf [Gevorkov, Yefanov, Barty, White, Mariani, Brehm, Tolstikova, Grigat & Chapman (2019). Acta Cryst. A75, 694–704] and refines them using faster and principled robust optimization techniques which result in a concise code base consisting of less than 500 lines. On the basis of evaluations across four proteins, TORO consistently matches, and in certain instances outperforms, established algorithms such as XGandalf and MOSFLM [Powell (1999). Acta Cryst. D55, 1690–1695], occasionally amplifying the quality of the consolidated data while achieving superior indexing speed. The inherent modularity of TORO and the versatility of PyTorch code bases facilitate its deployment into a wide array of architectures, software platforms and bespoke applications, highlighting its prospective significance in SX. Full Article text