deep_learning

Machine learning and deep learning techniques for detecting and mitigating cyber threats in IoT-enabled smart grids: a comprehensive review

The confluence of the internet of things (IoT) with smart grids has ushered in a paradigm shift in energy management, promising greater efficiency, economic robustness and reliability. However, this integration has also amplified the grid's susceptibility to cyber intrusions, threatening its foundational security and structural integrity. Machine learning (ML) and deep learning (DL) offer robust methodologies for navigating the intricate cybersecurity landscape of IoT-enabled smart grids. ML excels at sifting through voluminous data to identify and classify threats, while DL builds more sophisticated models capable of countering novel cyber offensives. Both techniques share the objective of leveraging intricate data patterns to provide real-time, actionable security intelligence. Yet, despite the potential of ML and DL, the battle against a constantly evolving cyber threat landscape is relentless, and a fully secure smart grid remains an open, collective goal. In this review, we examine the contributions of ML and DL to enhancing cybersecurity in IoT-centric smart grids, dissect the predominant cyber threats, critically assess existing security paradigms, and highlight research directions that call for deeper inquiry and innovation.




deep_learning

A Deep Learning Based Model to Assist Blind People in Their Navigation

Aim/Purpose: This paper proposes a new approach to developing a deep learning-based prototype wearable model which can assist blind and visually disabled people to recognize their environments and navigate through them. As a result, visually impaired people will be able to manage day-to-day activities and navigate through the world around them more easily. Background: In recent decades, the development of navigational devices has posed challenges for researchers to design smart guidance systems for visually impaired and blind individuals navigating through known or unknown environments. Efforts need to be made to analyze the existing research from a historical perspective. Early studies of electronic travel aids should be integrated with the use of assistive technology-based artificial vision models for visually impaired persons. Methodology: This paper is an advancement of our previous research work, where we developed a sensor-based navigation system. In this research, the navigation of the visually disabled person is carried out with a vision-based 3D-designed wearable model and a vision-based smart stick. The wearable model used a neural network-based You Only Look Once (YOLO) algorithm to detect the course of the navigational path, which is augmented by a GPS-based smart stick. Over 100 images of each of the three classes, namely straight path, left path, and right path, were used for supervised training. The model accurately predicts a straight path with 79% mean average precision (mAP), the right path with 83% mAP, and the left path with 85% mAP. The average accuracy of the wearable model is 82.33% and that of the smart stick is 96.14%, which together give an overall accuracy of 89.24%. Contribution: This research contributes to the design of a low-cost, standalone navigational system that is handy to use and helps people navigate safely in real-time scenarios. A challenging self-built dataset of various paths was generated, and transfer learning was performed on the YOLOv5 model after augmentation and manual annotation. To analyze and evaluate the model, various metrics, such as model losses, recall, precision, and mAP, are used. Findings: These were the main findings of the study: • To detect objects, the deep learning model uses a higher version of YOLO, i.e., a YOLOv5 detector, that may help those with visual impairments to improve the quality of their navigational mobility in known or unknown environments. • The developed standalone model has the option to be integrated into other assistive applications such as Electronic Travel Aids (ETAs). • The single-network design allows the model to achieve a detection accuracy of around 0.823 mAP on the custom dataset, compared with 0.895 on the COCO dataset, and its object detection speed of around 45 FPS has made it popular. Recommendations for Practitioners: Practitioners can improve the model's efficiency by increasing the sample size and the number of classes used in training the model. Recommendation for Researchers: To detect objects in an image or live camera feed, there are various algorithms, e.g., R-CNN, RetinaNet, Single Shot Detector (SSD), and YOLO. Researchers can choose YOLO owing to its superior performance. Moreover, one of the YOLO versions, YOLOv5, outperforms earlier versions such as YOLOv3 and YOLOv4 in terms of speed and accuracy. Impact on Society: We discuss new low-cost technologies that enable visually impaired people to navigate effectively in indoor environments. Future Research: The future of deep learning could incorporate recurrent neural networks on a larger set of data with special AI-based processors to avoid latency.
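As a rough illustration of how the wearable model's path classification could be wired up, the sketch below loads custom-trained YOLOv5 weights through PyTorch Hub and maps the highest-confidence detection to a navigation cue. The weights file name, class labels, and confidence threshold are assumptions for illustration, not the authors' released artifacts.

```python
# Minimal sketch (assumptions: a weights file "paths_yolov5s.pt" trained on the
# three path classes; the class names below are hypothetical labels).
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="paths_yolov5s.pt")
model.conf = 0.4  # confidence threshold (assumed)

CUES = {"straight_path": "keep walking straight",
        "left_path": "turn left ahead",
        "right_path": "turn right ahead"}

def navigation_cue(frame):
    """Run the detector on one camera frame and return a spoken-style cue."""
    results = model(frame)                      # frame: file path, URL or ndarray
    det = results.pandas().xyxy[0]              # detections as a DataFrame
    if det.empty:
        return "no path detected"
    best = det.sort_values("confidence", ascending=False).iloc[0]
    return CUES.get(best["name"], "no path detected")

print(navigation_cue("test_frame.jpg"))
```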




deep_learning

A forensic approach: identification of source printer through deep learning

Forensic document forgery investigations have elevated the need for source identification of printed documents over the past few years. A reliable and acceptable safety testing instrument is needed to determine the credibility of printed materials. The system proposed in this study uses a neural network to identify the source printer in forensic document forgery investigations. The study uses a deep neural network method that relies on the quality, texture, and accuracy of images printed by various models of Canon and HP printers. The model was trained and tested on these datasets, and prediction accuracy was estimated using a logical function, with the goal of creating a reliable and acceptable safety testing instrument for determining the credibility of printed materials. The technique classified the printer model with 95.1% accuracy. The proposed method for identifying the source printer is non-destructive.
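The abstract does not give the network layer by layer, so the following is only a minimal PyTorch sketch of the kind of texture classifier described: a small CNN over grayscale patches of scanned printouts, with one output class per candidate printer model. The patch size, layer widths, and number of printer classes are assumptions.

```python
import torch
import torch.nn as nn

NUM_PRINTERS = 10          # assumed number of candidate Canon/HP printer models
PATCH = 128                # assumed size of grayscale patches cut from scans

class PrinterCNN(nn.Module):
    """Small texture classifier: conv blocks followed by a linear head."""
    def __init__(self, num_classes=NUM_PRINTERS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PrinterCNN()
dummy = torch.randn(4, 1, PATCH, PATCH)      # batch of 4 scan patches
print(model(dummy).shape)                    # torch.Size([4, 10])
```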




deep_learning

Bi-LSTM GRU-based deep learning architecture for export trade forecasting

To assess a country's economic outlook and achieve higher economic growth, econometric models and prediction techniques are significant tools. Policymakers are always concerned with accurate future estimates of economic variables so that they can take the right economic decisions, design better policies and implement them effectively. There is therefore a need to improve the predictive accuracy of existing models and to use more sophisticated and superior algorithms for accurate forecasting. Deep learning models such as recurrent neural networks are considered superior for forecasting as they provide better predictive results than many econometric models. Against this backdrop, this paper presents the feasibility of using different deep-learning neural network architectures for trade forecasting. It predicts export trade using different recurrent architectures: a vanilla recurrent neural network (VRNN), a bi-directional long short-term memory network (Bi-LSTM), a bi-directional gated recurrent unit (Bi-GRU) and a hybrid bi-directional LSTM and GRU neural network. The performance of these models is evaluated and compared using metrics such as Mean Square Error (MSE), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Root Mean Squared Logarithmic Error (RMSLE) and the coefficient of determination R-squared (R²). The results validate effective export prediction for India.
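A minimal Keras sketch of the kind of hybrid bidirectional LSTM/GRU forecaster compared in the paper is shown below. The look-back window, layer widths, and the univariate synthetic series are assumptions, not the authors' configuration or data.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

WINDOW = 12   # assumed look-back window (e.g., 12 months of export values)

model = tf.keras.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.GRU(32)),
    layers.Dense(1),                       # next-step export value
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError(), "mae"])

# Toy data: sliding windows over a synthetic trend-plus-cycle series, only to show shapes.
series = np.sin(np.linspace(0, 20, 200)) + np.linspace(0, 2, 200)
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
y = series[WINDOW:]
model.fit(X, y, epochs=5, batch_size=16, verbose=0)
print(model.evaluate(X, y, verbose=0))     # [MSE, RMSE, MAE]
```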




deep_learning

Intelligence assistant using deep learning: use case in crop disease prediction

In India, 70% of the population depends on agriculture, yet agriculture generates only 13% of the country's gross domestic product. Several factors contribute to high levels of stress among farmers in India, such as increased input costs, droughts, and reduced revenues. The problem lies in the absence of an integrated farm advisory system. Farmers need help to bridge this information gap, and they need it early in the crop's lifecycle to prevent the crop from being destroyed by pests or diseases. This research involves developing deep learning models such as ResNet18 and DenseNet121 to help farmers diagnose crop diseases earlier and take corrective action. By using deep learning techniques to detect these crop diseases from images that farmers can scan or capture with their smartphones, we can fill in the knowledge gap. To make the models easy for farmers to use, they are deployed on Android-based smartphones.
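A hedged PyTorch sketch of the transfer-learning setup described: ImageNet-pretrained ResNet18 and DenseNet121 backbones with their classification heads replaced for the crop-disease classes. The number of classes and the fine-tuning details are assumptions, not taken from the paper.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 38   # assumed number of crop/disease categories

def build_resnet18(num_classes=NUM_CLASSES):
    m = models.resnet18(weights="IMAGENET1K_V1")      # ImageNet-pretrained backbone
    m.fc = nn.Linear(m.fc.in_features, num_classes)   # new classification head
    return m

def build_densenet121(num_classes=NUM_CLASSES):
    m = models.densenet121(weights="IMAGENET1K_V1")
    m.classifier = nn.Linear(m.classifier.in_features, num_classes)
    return m

# Both models would then be fine-tuned on leaf images with cross-entropy loss;
# the trained weights can be exported (e.g., via TorchScript) for the Android app.
```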




deep_learning

The performance evaluation of teaching reform based on hierarchical multi-task deep learning

The research goal is to solve the problems of low accuracy and long evaluation time in traditional teaching reform performance evaluation methods; to this end, a performance evaluation method for teaching reform based on hierarchical multi-task deep learning is proposed. First, an evaluation indicator system is constructed following established principles of index-system design. The weight of each evaluation indicator is then calculated through the analytic hierarchy process, and the resulting weights are taken as the model's input samples. A hierarchical multi-task deep learning model for teaching reform performance evaluation is built, and the final teaching reform performance score is obtained. Experiments show that, compared with the baseline methods, the proposed method achieves higher evaluation accuracy and shorter evaluation time, and can be further applied in related fields.
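The abstract mentions computing indicator weights with the analytic hierarchy process (AHP) before feeding them to the model. Below is a generic numpy sketch of AHP weight extraction from a pairwise-comparison matrix via the principal eigenvector, with a consistency-ratio check; the 3×3 comparison matrix is a made-up example, not the paper's indicator system.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three evaluation indicators.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalised indicator weights

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
CI = (eigvals.real[k] - n) / (n - 1)
CR = CI / RI
print("weights:", np.round(weights, 3), "CR:", round(CR, 3))  # CR < 0.1 is acceptable
```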




deep_learning

Improving the Accuracy of Facial Micro-Expression Recognition: Spatio-Temporal Deep Learning with Enhanced Data Augmentation and Class Balancing

Aim/Purpose: This study presents a novel deep learning-based framework designed to enhance spontaneous micro-expression recognition by effectively increasing the amount and variety of data and balancing the class distribution to improve recognition accuracy. Background: Micro-expression recognition using deep learning requires large amounts of data, yet micro-expression datasets are relatively small and their class distributions are imbalanced. Methodology: This study developed a framework using a deep learning-based model to recognize spontaneous micro-expressions on a person’s face. The framework includes several technical stages, including image and data preprocessing. In data preprocessing, data augmentation is carried out to increase the amount and variety of data, and class balancing is applied to balance the distribution of sample classes in the dataset. Contribution: This study’s essential contribution lies in enhancing the accuracy of micro-expression recognition and overcoming the limited amount of data and imbalanced class distribution that typically lead to overfitting. Findings: The results indicate that the proposed framework, with its data preprocessing stages and deep learning model, significantly increases the accuracy of micro-expression recognition by overcoming dataset limitations and producing a balanced class distribution. Recommendations for Practitioners: Practitioners can utilize the model produced by the proposed framework, developed to recognize spontaneous micro-expressions on a person’s face, by implementing it in emotion-analysis applications based on facial micro-expressions. Recommendation for Researchers: Researchers developing spontaneous micro-expression recognition frameworks for analyzing hidden emotions play an essential role in advancing this field. They should continue to pursue innovative deep learning-based solutions, exploring techniques that increase the amount and variety of data and that balance the number of samples per class across micro-expression datasets, and they can further develop deep learning architectures better suited to the recognition task and the characteristics of different datasets. Impact on Society: The proposed framework could significantly impact society by providing a reliable model for recognizing spontaneous micro-expressions in real-world applications, ranging from security systems and criminal investigations to healthcare and emotional analysis. Future Research: Developing a spontaneous micro-expression recognition framework based on spatial and temporal flow requires the learning model to classify optimal features. Our future work will focus on exploring micro-expression features by developing alternative learning models and increasing the weights of spatial and temporal features.
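As an illustration of the preprocessing the framework relies on (augmentation plus class balancing), here is a generic PyTorch sketch combining torchvision augmentations with a WeightedRandomSampler so that minority expression classes are oversampled during training. The dataset folder layout and the transform choices are assumptions, not the authors' exact pipeline.

```python
from collections import Counter
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

augment = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Assumed layout: one sub-folder per micro-expression class.
dataset = datasets.ImageFolder("micro_expressions/train", transform=augment)

# Give each sample a weight inversely proportional to its class frequency.
counts = Counter(dataset.targets)
sample_weights = [1.0 / counts[t] for t in dataset.targets]
sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(sample_weights),
                                replacement=True)

loader = DataLoader(dataset, batch_size=32, sampler=sampler)
# Each batch is now approximately class-balanced while the augmentations add variety.
```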




deep_learning

IRNN-SS: deep learning for optimised protein secondary structure prediction through PROMOTIF and DSSP annotation fusion

DSSP stands as a foundational tool in the domain of protein secondary structure prediction, yet it encounters notable challenges in accurately annotating irregular structures, such as β-turns and γ-turns, which constitute approximately 25%-30% and 10%-15% of protein turns, respectively. This limitation arises from DSSP's reliance on hydrogen-bond analysis, resulting in annotation gaps and reduced consensus on irregular structures. PROMOTIF, by contrast, excels at identifying these irregular structure annotations using phi-psi information. Despite their complementary strengths, previous methodologies utilised DSSP and PROMOTIF separately, leading to disparate prediction methods for protein secondary structures and hampering the comprehensive structure analysis crucial for drug development. In this work, we bridge this gap using an annotation fusion approach that combines DSSP structures with beta- and gamma-turns. We introduce IRNN-SS, a model employing deep inception and bidirectional gated recurrent neural networks, achieving 77.4% prediction accuracy on benchmark datasets and outpacing current models.
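IRNN-SS itself is not released here, so the following is only a schematic PyTorch sketch of the ingredients the abstract names: parallel ("inception-style") 1D convolutions over per-residue features feeding a bidirectional GRU that emits one structure label per residue. The feature and class counts and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

NUM_FEATS = 20     # assumed per-residue input features (e.g., one-hot amino acids)
NUM_CLASSES = 10   # assumed labels: DSSP states plus beta/gamma turn annotations

class InceptionBiGRU(nn.Module):
    def __init__(self):
        super().__init__()
        # Parallel convolutions with different receptive fields ("inception" block).
        self.branches = nn.ModuleList([
            nn.Conv1d(NUM_FEATS, 32, k, padding=k // 2) for k in (3, 7, 11)
        ])
        self.rnn = nn.GRU(input_size=96, hidden_size=64,
                          bidirectional=True, batch_first=True)
        self.head = nn.Linear(128, NUM_CLASSES)

    def forward(self, x):            # x: (batch, NUM_FEATS, seq_len)
        h = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        h = h.transpose(1, 2)        # -> (batch, seq_len, 96)
        h, _ = self.rnn(h)
        return self.head(h)          # per-residue class logits

logits = InceptionBiGRU()(torch.randn(2, NUM_FEATS, 150))
print(logits.shape)                  # torch.Size([2, 150, 10])
```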




deep_learning

Optimisation with deep learning for leukaemia classification in federated learning

The most common kind of blood cancer in people of all ages is leukaemia. A fractional mayfly optimisation (FMO)-based DenseNet is proposed for the identification and classification of leukaemia in federated learning (FL). Initially, the input image is pre-processed by an adaptive median filter (AMF). Then, cell segmentation is done using Scribble2Label. After that, image augmentation is accomplished. Finally, leukaemia classification is accomplished utilising DenseNet, which is trained using the FMO. Here, the FMO is devised by merging the mayfly algorithm (MA) and the fractional concept (FC). Following local training, the server performs local updating and aggregation using a weighted average based on the RV coefficient. The results showed that FMO-DenseNet attained maximum accuracy, true negative rate (TNR) and true positive rate (TPR) of 94.3%, 96.5% and 95.3%. Moreover, FMO-DenseNet gained minimum mean squared error (MSE) and root mean squared error (RMSE) of 5.7%, 9.2% and 30.4%.
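The abstract describes the server aggregating client models with a weighted average (with weights derived from an RV coefficient). A generic PyTorch sketch of weighted state-dict averaging is shown below; the client weights are placeholder numbers and the RV-coefficient computation itself is not reproduced.

```python
import copy
import torch

def weighted_average(state_dicts, weights):
    """Aggregate client model parameters with the given (normalised) weights."""
    weights = torch.tensor(weights, dtype=torch.float32)
    weights = weights / weights.sum()
    global_sd = copy.deepcopy(state_dicts[0])
    for key in global_sd:
        global_sd[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return global_sd

# Example with two tiny "client" models (placeholder weights standing in for RV-based scores).
clients = [torch.nn.Linear(4, 2).state_dict() for _ in range(2)]
global_state = weighted_average(clients, weights=[0.7, 0.3])
server_model = torch.nn.Linear(4, 2)
server_model.load_state_dict(global_state)
```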




deep_learning

Deep learning-based lung cancer detection using CT images

This work demonstrates a hybrid deep learning (DL) model for lung cancer (LC) detection using CT images. Firstly, the input image is passed to the pre-processing stage, where it is filtered using a BF, and the filtered image is subjected to lung lobe segmentation using squeeze U-SegNet. Feature extraction is then performed, where features including entropy with fuzzy local binary patterns (EFLBP), the local optimal oriented pattern (LOOP), and grey level co-occurrence matrix (GLCM) features are mined. After feature extraction, LC is detected utilising the hybrid efficient-ShuffleNet (HES-Net) method, wherein HES-Net is established by the incorporation of EfficientNet and ShuffleNet. The presented HES-Net for LC detection is evaluated in terms of TNR, TPR, and accuracy, and is established to have acquired values of 92.1%, 93.1%, and 91.3%, respectively.
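Two of the handcrafted feature families listed (GLCM and local binary patterns) are available in scikit-image, so a small sketch of that part of a pipeline is given below. The distances, angles, and LBP parameters are assumptions, and the fuzzy-LBP and LOOP descriptors from the paper are not included.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_features(gray_patch):
    """Extract simple GLCM and LBP features from an 8-bit grayscale lung-lobe patch."""
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(gray_patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([glcm_feats, lbp_hist])

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in for a CT patch
print(texture_features(patch).shape)                       # (14,)
```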




deep_learning

Loss Function for Deep Learning to Model Dynamical Systems

Takahito YOSHIDA, Takaharu YAGUCHI, Takashi MATSUBARA, Vol.E107-D, No.11, pp.1458-1462
Accurately simulating physical systems is essential in various fields. In recent years, deep learning has been used to automatically build models of such systems by learning from data. One such method is the neural ordinary differential equation (neural ODE), which treats the output of a neural network as the time derivative of the system states. However, while this and related methods have shown promise, their training strategies still require further development. Inspired by error analysis techniques in numerical analysis, with numerical errors replaced by modeling errors, we propose an error-analytic strategy to address this issue. As a result, our strategy can capture long-term errors and thus improve the accuracy of long-term predictions.
Publication Date: 2024/11/01
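For readers unfamiliar with the baseline being improved, a minimal neural-ODE training sketch using the torchdiffeq package follows: a small network is treated as the time derivative and a plain trajectory loss is taken over the whole predicted rollout. This shows only the standard setup, not the paper's error-analytic strategy; the toy spiral data and network sizes are assumptions, and torchdiffeq is assumed to be installed.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint   # assumes the torchdiffeq package is installed

class ODEFunc(nn.Module):
    """Neural network treated as the time derivative dy/dt = f(t, y)."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, y):
        return self.net(y)

func = ODEFunc()
optimizer = torch.optim.Adam(func.parameters(), lr=1e-3)

t = torch.linspace(0.0, 5.0, 50)
y0 = torch.tensor([[1.0, 0.0]])
# Toy target trajectory (a decaying spiral); in practice this is measured data.
true_y = torch.stack([torch.exp(-0.1 * t) * torch.cos(t),
                      torch.exp(-0.1 * t) * torch.sin(t)], dim=-1).unsqueeze(1)

for step in range(200):
    optimizer.zero_grad()
    pred_y = odeint(func, y0, t)             # integrate the learned dynamics
    loss = ((pred_y - true_y) ** 2).mean()   # plain trajectory MSE
    loss.backward()
    optimizer.step()
```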




deep_learning

What's going on? Developing reflexivity in the management classroom: From surface to deep learning and everything else in between.

'What's going on?' Within the context of our critically-informed teaching practice, we see moments of deep learning and reflexivity in classroom discussions and assessments. Yet, these moments of criticality are interspersed with surface learning and reflection. We draw on dichotomous, linear developmental, and messy explanations of learning processes to empirically explore the learning journeys of 20 international Chinese and 42 domestic New Zealand students. We find contradictions within our own data, and between our findings and the extant literature. We conclude that expressions of surface learning and reflection are considerably more complex than they first appear. Moreover, developing critical reflexivity is a far more subtle, messy, and emotional experience than previously understood. We present the theoretical and pedagogical significance of these findings when we consider the implications for the learning process and the practice of management education.




deep_learning

3xLOGIC to debut its X-Series edge-based deep learning analytics cameras at ISC West

3xLOGIC, a provider of integrated and intelligent security and business solutions, will debut its recently launched edge-based deep learning analytics cameras at ISC West 2024, Booth #23059.




deep_learning

Deep learning to overcome Zernike phase-contrast nanoCT artifacts for automated micro-nano porosity segmentation in bone

Bone material contains a hierarchical network of micro- and nano-cavities and channels, known as the lacuna-canalicular network (LCN), that is thought to play an important role in mechanobiology and turnover. The LCN comprises micrometer-sized lacunae, voids that house osteocytes, and submicrometer-sized canaliculi that connect bone cells. Characterization of this network in three dimensions is crucial for many bone studies. To quantify X-ray Zernike phase-contrast nanotomography data, deep learning is used to isolate and assess porosity in artifact-laden tomographies of zebrafish bones. A technical solution is proposed to overcome the halo and shade-off domains in order to reliably obtain the distribution and morphology of the LCN in the tomographic data. Convolutional neural network (CNN) models are utilized with increasing numbers of images, repeatedly validated by 'error loss' and 'accuracy' metrics. U-Net and Sensor3D CNN models were trained on data obtained from two different synchrotron Zernike phase-contrast transmission X-ray microscopes, the ANATOMIX beamline at SOLEIL (Paris, France) and the P05 beamline at PETRA III (Hamburg, Germany). The Sensor3D CNN model with a smaller batch size of 32 and a training data size of 70 images showed the best performance (accuracy 0.983 and error loss 0.032). The analysis procedures, validated by comparison with human-identified ground-truth images, correctly identified the voids within the bone matrix. This proposed approach may have further application to classify structures in volumetric images that contain non-linear artifacts that degrade image quality and hinder feature identification.
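The CNN models here were judged by "error loss" and "accuracy" during training. A generic sketch of what one such training/validation step looks like for binary porosity segmentation in PyTorch is given below; the tiny placeholder network stands in for the U-Net/Sensor3D models, and the batch size, threshold, and random tensors are assumptions.

```python
import torch
import torch.nn as nn

# Placeholder "segmentation network" standing in for the U-Net / Sensor3D models.
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1))
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images, masks):
    """One optimisation step; returns the 'error loss' value."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def pixel_accuracy(images, masks, threshold=0.5):
    """Fraction of pixels labelled correctly, analogous to the 'accuracy' metric."""
    model.eval()
    preds = (torch.sigmoid(model(images)) > threshold).float()
    return (preds == masks).float().mean().item()

imgs = torch.rand(8, 1, 64, 64)                   # stand-in tomography slices
masks = (torch.rand(8, 1, 64, 64) > 0.8).float()  # stand-in porosity masks
print(train_step(imgs, masks), pixel_accuracy(imgs, masks))
```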




deep_learning

X-ray lens figure errors retrieved by deep learning from several beam intensity images

The phase problem in the context of focusing synchrotron beams with X-ray lenses is addressed. The feasibility of retrieving the surface error of a lens system by using only the intensity of the propagated beam at several distances is demonstrated. A neural network, trained with a few thousand simulations using random errors, can accurately predict the lens error profile that accounts for all aberrations. This demonstrates the feasibility of routinely measuring the aberrations induced by an X-ray lens, or another optical system, using only a few intensity images.




deep_learning

Deep-learning map segmentation for protein X-ray crystallographic structure determination

When solving a structure of a protein from single-wavelength anomalous diffraction X-ray data, the initial phases obtained by phasing from an anomalously scattering substructure usually need to be improved by iterated electron-density modification. In this manuscript, the use of convolutional neural networks (CNNs) for segmentation of the initial experimental phasing electron-density maps is proposed. The results reported demonstrate that a CNN with U-Net architecture, trained on several thousand electron-density maps generated mainly using X-ray data from the Protein Data Bank in a supervised manner, can improve current density-modification methods.




deep_learning

CHiMP: deep-learning tools trained on protein crystallization micrographs to enable automation of experiments

A group of three deep-learning tools, referred to collectively as CHiMP (Crystal Hits in My Plate), were created for analysis of micrographs of protein crystallization experiments at the Diamond Light Source (DLS) synchrotron, UK. The first tool, a classification network, assigns images into categories relating to experimental outcomes. The other two tools are networks that perform both object detection and instance segmentation, resulting in masks of individual crystals in the first case and masks of crystallization droplets in addition to crystals in the second case, allowing the positions and sizes of these entities to be recorded. The creation of these tools used transfer learning, where weights from a pre-trained deep-learning network were used as a starting point and repurposed by further training on a relatively small set of data. Two of the tools are now integrated at the VMXi macromolecular crystallography beamline at DLS, where they have the potential to absolve the need for any user input, both for monitoring crystallization experiments and for triggering in situ data collections. The third is being integrated into the XChem fragment-based drug-discovery screening platform, also at DLS, to allow the automatic targeting of acoustic compound dispensing into crystallization droplets.
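As a rough sketch of the transfer-learning recipe described (starting from pre-trained detection/instance-segmentation weights and retraining the heads for crystals and droplets), here is the standard torchvision Mask R-CNN fine-tuning setup. The class list is an assumption and this is not the CHiMP code itself.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Assumed classes: background, crystal, drop.
NUM_CLASSES = 3

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box-classification head for the new classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Replace the mask-prediction head for the new classes.
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, NUM_CLASSES)

# The model is then fine-tuned on annotated micrographs; at inference time the
# predicted masks give crystal/droplet positions and sizes for targeting.
```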




deep_learning

Dynamic X-ray speckle-tracking imaging with high-accuracy phase retrieval based on deep learning

Speckle-tracking X-ray imaging is an attractive candidate for dynamic X-ray imaging owing to its flexible setup and simultaneous yields of phase, transmission and scattering images. However, traditional speckle-tracking imaging methods suffer from phase distortion at locations with abrupt changes in density, which is always the case for real samples, limiting the applications of the speckle-tracking X-ray imaging method. In this paper, we report a deep-learning based method which can achieve dynamic X-ray speckle-tracking imaging with high-accuracy phase retrieval. The calibration results of a phantom show that the profile of the retrieved phase is highly consistent with the theoretical one. Experiments of polyurethane foaming demonstrated that the proposed method revealed the evolution of the complicated microstructure of the bubbles accurately. The proposed method is a promising solution for dynamic X-ray imaging with high-accuracy phase retrieval, and has extensive applications in metrology and quantitative analysis of dynamics in material science, physics, chemistry and biomedicine.




deep_learning

The prediction of single-molecule magnet properties via deep learning

This paper uses deep learning to present a proof-of-concept for data-driven chemistry in single-molecule magnets (SMMs). Previous discussions within SMM research have proposed links between molecular structures (crystal structures) and single-molecule magnetic properties; however, these have only interpreted the results. Therefore, this study introduces a data-driven approach to predict the properties of SMM structures using deep learning. The deep-learning model learns the structural features of the SMM molecules by extracting the single-molecule magnetic properties from the 3D coordinates presented in this paper. The model accurately determined whether a molecule was a single-molecule magnet, with an accuracy rate of approximately 70% in predicting the SMM properties. The deep-learning model found SMMs from 20 000 metal complexes extracted from the Cambridge Structural Database. Using deep-learning models for predicting SMM properties and guiding the design of novel molecules is promising.




deep_learning

Using deep-learning predictions reveals a large number of register errors in PDB depositions

The accuracy of the information in the Protein Data Bank (PDB) is of great importance for the myriad downstream applications that make use of protein structural information. Despite best efforts, the occasional introduction of errors is inevitable, especially where the experimental data are of limited resolution. A novel protein structure validation approach based on spotting inconsistencies between the residue contacts and distances observed in a structural model and those computationally predicted by methods such as AlphaFold2 has previously been established. It is particularly well suited to the detection of register errors. Importantly, this new approach is orthogonal to traditional methods based on stereochemistry or map–model agreement, and is resolution independent. Here, thousands of likely register errors are identified by scanning 3–5 Å resolution structures in the PDB. Unlike most methods, the application of this approach yields suggested corrections to the register of affected regions, which, as shown even by a limited implementation, lead to improved refinement statistics in the vast majority of cases. A few limitations and confounding factors such as fold-switching proteins are characterized, but this approach is expected to have broad application in spotting potential issues in current accessions and, through its implementation and distribution in CCP4, helping to ensure the accuracy of future depositions.
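As a toy illustration of the underlying idea (flagging residues whose observed distances disagree with computationally predicted ones), the numpy sketch below compares an observed Cα–Cα distance matrix against a predicted one and reports the residues with the largest mean discrepancy. Real register-error detection as described uses predicted distograms/contacts and far more careful statistics; this shows only the core comparison, with made-up coordinates.

```python
import numpy as np

def distance_matrix(ca_coords):
    """Pairwise Calpha-Calpha distances for an (N, 3) coordinate array."""
    diff = ca_coords[:, None, :] - ca_coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def suspicious_residues(observed_ca, predicted_dist, top_k=10):
    """Residues whose observed distances deviate most from the predicted ones."""
    obs = distance_matrix(observed_ca)
    deviation = np.abs(obs - predicted_dist)
    per_residue = deviation.mean(axis=1)
    return np.argsort(per_residue)[::-1][:top_k], per_residue

# Toy data: random coordinates, with the "prediction" shifted by two residues
# from position 40 onwards to mimic a register error.
rng = np.random.default_rng(0)
coords = rng.normal(size=(100, 3)) * 10
pred = distance_matrix(np.vstack([coords[:40], coords[42:], coords[:2]]))
flagged, scores = suspicious_residues(coords, pred)
print(flagged)
```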




deep_learning

DLSIA: Deep Learning for Scientific Image Analysis

DLSIA (Deep Learning for Scientific Image Analysis) is a Python-based machine learning library that empowers scientists and researchers across diverse scientific domains with a range of customizable convolutional neural network (CNN) architectures for a wide variety of tasks in image analysis to be used in downstream data processing. DLSIA features easy-to-use architectures, such as autoencoders, tunable U-Nets and parameter-lean mixed-scale dense networks (MSDNets). Additionally, this article introduces sparse mixed-scale networks (SMSNets), generated using random graphs, sparse connections and dilated convolutions connecting different length scales. For verification, several DLSIA-instantiated networks and training scripts are employed in multiple applications, including inpainting for X-ray scattering data using U-Nets and MSDNets, segmenting 3D fibers in X-ray tomographic reconstructions of concrete using an ensemble of SMSNets, and leveraging autoencoder latent spaces for data compression and clustering. As experimental data continue to grow in scale and complexity, DLSIA provides accessible CNN construction and abstracts CNN complexities, allowing scientists to tailor their machine learning approaches, accelerate discoveries, foster interdisciplinary collaboration and advance research in scientific image analysis.




deep_learning

Patching-based deep-learning model for the inpainting of Bragg coherent diffraction patterns affected by detector gaps

A deep-learning algorithm is proposed for the inpainting of Bragg coherent diffraction imaging (BCDI) patterns affected by detector gaps. These regions of missing intensity can compromise the accuracy of reconstruction algorithms, inducing artefacts in the final result. It is thus desirable to restore the intensity in these regions in order to ensure more reliable reconstructions. The key aspect of the method lies in the choice of training the neural network with cropped sections of diffraction data and subsequently patching the predictions generated by the model along the gap, thus completing the full diffraction peak. This approach enables access to a greater amount of experimental data for training and offers the ability to average overlapping sections during patching. As a result, it produces robust and dependable predictions for experimental data arrays of any size. It is shown that the method is able to remove gap-induced artefacts on the reconstructed objects for both simulated and experimental data, which becomes essential in the case of high-resolution BCDI experiments.
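A schematic numpy version of the patch-and-average idea described (predict on overlapping crops along the gap, then average the overlapping predictions back into the full array) is given below. The inpaint_patch function is only a stand-in for the trained network, and the patch size and stride are assumptions.

```python
import numpy as np

def inpaint_patch(patch):
    """Stand-in for the trained network: here it just smooths the patch."""
    return patch * 0.5 + patch.mean() * 0.5

def patched_inpainting(data, patch=64, stride=32):
    """Run the model on overlapping crops and average the overlapping predictions."""
    out = np.zeros_like(data, dtype=float)
    count = np.zeros_like(data, dtype=float)
    for i in range(0, data.shape[0] - patch + 1, stride):
        for j in range(0, data.shape[1] - patch + 1, stride):
            out[i:i + patch, j:j + patch] += inpaint_patch(data[i:i + patch, j:j + patch])
            count[i:i + patch, j:j + patch] += 1.0
    return out / np.maximum(count, 1.0)

diffraction = np.random.rand(256, 256)      # stand-in diffraction pattern with a gap
restored = patched_inpainting(diffraction)
print(restored.shape)
```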




deep_learning

Ptychographic phase retrieval via a deep-learning-assisted iterative algorithm

Ptychography is a powerful computational imaging technique with microscopic imaging capability and adaptability to various specimens. To obtain an imaging result, it requires a phase-retrieval algorithm whose performance directly determines the imaging quality. Recently, deep neural network (DNN)-based phase retrieval has been proposed to improve the imaging quality over that of ordinary model-based iterative algorithms. However, the DNN-based methods have some limitations because of the sensitivity to changes in experimental conditions and the difficulty of collecting enough measured specimen images for training the DNN. To overcome these limitations, a ptychographic phase-retrieval algorithm that combines model-based and DNN-based approaches is proposed. This method exploits a DNN-based denoiser to assist an iterative algorithm like ePIE in finding better reconstruction images. This combination of DNN and iterative algorithms allows the measurement model to be explicitly incorporated into the DNN-based approach, improving its robustness to changes in experimental conditions. Furthermore, to circumvent the difficulty of collecting the training data, it is proposed that the DNN-based denoiser be trained without using actual measured specimen images but using a formula-driven supervised approach that systematically generates synthetic images. In experiments using simulation based on a hard X-ray ptychographic measurement system, the imaging capability of the proposed method was evaluated by comparing it with ePIE and rPIE. These results demonstrated that the proposed method was able to reconstruct higher-spatial-resolution images with half the number of iterations required by ePIE and rPIE, even for data with low illumination intensity. Also, the proposed method was shown to be robust to its hyperparameters. In addition, the proposed method was applied to ptychographic datasets of a Siemens star chart and ink toner particles measured at SPring-8 BL24XU, which confirmed that it can successfully reconstruct images from measurement scans with a lower overlap ratio of the illumination regions than is required by ePIE and rPIE.
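The structure of the proposed combination (an ordinary iterative update interleaved with a learned denoiser) can be sketched as follows. Both the update and the denoiser here are placeholders (a toy relaxation step and a Gaussian filter) meant only to show how a DNN denoiser is slotted into an ePIE-like loop; the iteration counts and array sizes are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def iterative_update(obj, measurements):
    """Placeholder for one ePIE/rPIE-style update of the object estimate."""
    return 0.9 * obj + 0.1 * measurements          # toy relaxation step

def dnn_denoise(obj):
    """Placeholder for the trained DNN denoiser (here: Gaussian smoothing)."""
    return gaussian_filter(obj, sigma=1.0)

measurements = np.random.rand(128, 128)            # stand-in measured data
obj = np.zeros_like(measurements)

for it in range(100):
    obj = iterative_update(obj, measurements)      # model-based step
    if it % 10 == 9:                               # denoise every few iterations
        obj = dnn_denoise(obj)
```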




deep_learning

4S Mapper to Participate as an Exhibitor at "AI EXPO KOREA 2023" to showcase its deep-learning & spatial info-based Geo AI solutions!

CfSM, designated as a venture startup innovative product on Venture Nara serviced by the Public Procurement Service in July 2022, automatically removes vehicle images from streets using deep learning and drones.





deep_learning

Episode 391: Jeremy Howard on Deep Learning and fast.ai

Jeremy Howard from fast.ai explains deep learning from concept to implementation. Thanks to transfer learning, individuals and small organizations can get state-of-the-art results on machine learning problems using the open source fastai library...




deep_learning

Episode 549: William Falcon Optimizing Deep Learning Models

William Falcon of Lightning AI discusses how to optimize deep learning models using the Lightning platform; optimization is a necessary step toward creating a production application. Philip Winston spoke with Falcon about PyTorch, PyTorch Lightning...




deep_learning

SE Radio 594: Sean Moriarity on Deep Learning with Elixir and Axon

Sean Moriarity, creator of the Axon deep learning framework, co-creator of the Nx library, and author of Machine Learning in Elixir and Genetic Algorithms in Elixir, published by the Pragmatic Bookshelf, speaks with SE Radio host Gavin Henry about what deep learning (neural networks) means today. Using a practical example with deep learning for fraud detection, they explore what Axon is and why it was created. Moriarity describes why the Beam is ideal for machine learning, and why he dislikes the term “neural network.” They discuss the need for deep learning, its history, how it offers a good fit for many of today’s complex problems, where it shines and when not to use it. Moriarity goes into depth on a range of topics, including how to get datasets in shape, supervised and unsupervised learning, feed-forward neural networks, Nx.serving, decision trees, gradient descent, linear regression, logistic regression, support vector machines, and random forests. The episode considers what a model looks like, what training is, labeling, classification, regression tasks, hardware resources needed, EXGBoost, Jax, PyIgnite, and Explorer. Finally, they look at what’s involved in the ongoing lifecycle or operational side of Axon once a workflow is put into production, so you can safely back it all up and feed in new data. Brought to you by IEEE Computer Society and IEEE Software magazine. This episode sponsored by Miro.




deep_learning

[ F.748.12 (06/21) ] - Deep learning software framework evaluation methodology

Deep learning software framework evaluation methodology




deep_learning

Gilad Gressel On Why You Should Watch His Newest Course: Deep Learning With Python

Hi, my name is Gilad Gressel and I’d like to tell you about my new course: Deep Learning with Python. Deep learning is an old technology that has recently been sweeping through the field of machine learning and artificial intelligence. Deep learning powers many of the cutting edge technologies that appear to be “magic” in [...]




deep_learning

Zebra Technologies adds new deep learning tools to Aurora machine vision software

Zebra Technologies Corporation – the digital solution provider enabling businesses to intelligently connect data, assets and people – has introduced a series of advanced AI features enhancing its Aurora machine vision software to provide deep learning capabilities for complex visual inspection use cases.




deep_learning

Deep learning tool helps NASA discover 301 exoplanets

NASA scientists used a neural network called ExoMiner to examine data from Kepler, increasing the total tally of confirmed exoplanets in the universe.




deep_learning

AI / Deep Learning applications course – with hands-on experience

New workshop in London / remote (link): Enterprise AI workshop – Sep 2018 – in London or remote. AI / Deep Learning applications course / mentoring program – hands-on experience with limited spaces. I am pleased to announce a new course on AI Applications. The course combines elements of teaching, coaching and [...]




deep_learning

G-Protein Signaling in Alzheimer's Disease: Spatial Expression Validation of Semi-supervised Deep Learning-Based Computational Framework

Systemic study of pathogenic pathways and interrelationships underlying genes associated with Alzheimer's disease (AD) facilitates the identification of new targets for effective treatments. Recently available large-scale multiomics datasets provide opportunities to use computational approaches for such studies. Here, we devised a novel disease gene identification (digID) computational framework that consists of a semi-supervised deep learning classifier to predict AD-associated genes and a protein–protein interaction (PPI) network-based analysis to prioritize the importance of these predicted genes in AD. digID predicted 1,529 AD-associated genes and revealed potentially new AD molecular mechanisms and therapeutic targets including GNAI1 and GNB1, two G-protein subunits that regulate cell signaling, and KNG1, an upstream modulator of CDC42 small G-protein signaling and mediator of inflammation and candidate coregulator of amyloid precursor protein (APP). Analysis of mRNA expression validated their dysregulation in AD brains but further revealed the significant spatial patterns in different brain regions as well as among different subregions of the frontal cortex and hippocampi. Super-resolution STochastic Optical Reconstruction Microscopy (STORM) further demonstrated their subcellular colocalization and molecular interactions with APP in a transgenic mouse model of both sexes with AD-like mutations. These studies support the predictions made by digID while highlighting the importance of concurrent biological validation of computationally identified gene clusters as potential new AD therapeutic targets.




deep_learning

Deep Learning-Based Reconstruction of 3D T1 SPACE Vessel Wall Imaging Provides Improved Image Quality with Reduced Scan Times: A Preliminary Study [ARTIFICIAL INTELLIGENCE]

BACKGROUND AND PURPOSE:

Intracranial vessel wall imaging is technically challenging to implement, given the simultaneous requirements of high spatial resolution, excellent blood and CSF signal suppression, and clinically acceptable gradient times. Herein, we present our preliminary findings on the evaluation of a deep learning–optimized sequence using T1-weighted imaging.

MATERIALS AND METHODS:

Clinical and optimized deep learning–based image reconstruction T1 3D Sampling Perfection with Application optimized Contrast using different flip angle Evolution (SPACE) were evaluated, comparing noncontrast sequences in 10 healthy controls and postcontrast sequences in 5 consecutive patients. Images were reviewed on a Likert-like scale by 4 fellowship-trained neuroradiologists. Scores (range, 1–4) were separately assigned for 11 vessel segments in terms of vessel wall and lumen delineation. Additionally, images were evaluated in terms of overall background noise, image sharpness, and homogeneous CSF signal. Segment-wise scores were compared using paired samples t tests.

RESULTS:

The scan time for the clinical and deep learning–based image reconstruction sequences were 7:26 minutes and 5:23 minutes respectively. Deep learning–based image reconstruction images showed consistently higher wall signal and lumen visualization scores, with the differences being statistically significant in most vessel segments on both pre- and postcontrast images. Deep learning–based image reconstruction had lower background noise, higher image sharpness, and uniform CSF signal. Depiction of intracranial pathologies was better or similar on the deep learning–based image reconstruction.

CONCLUSIONS:

Our preliminary findings suggest that deep learning–based image reconstruction–optimized intracranial vessel wall imaging sequences may be helpful in achieving shorter gradient times with improved vessel wall visualization and overall image quality. These improvements may help with wider adoption of intracranial vessel wall imaging in clinical practice and should be further validated on a larger cohort.





deep_learning

Canon Launches EOS R1 and EOS R5 Mark II With High-Speed Burst Shooting, Deep Learning AF, & More

Canon, a known face in digital imaging solutions, recently announced the introduction of two highly anticipated cameras to its EOS R series: the EOS R1 and EOS R5 Mark II. These cameras boast next-generation intelligent features, superior quality, impressive speed, and




deep_learning

Screening for Urothelial Carcinoma Cells in Urine Based on Digital Holographic Flow Cytometry through Machine Learning and Deep Learning Method

Lab Chip, 2024, Accepted Manuscript
DOI: 10.1039/D3LC00854A, Paper
Lu Xin, Xi Xiao, Wen Xiao, Ran Peng, Hao Wang, Feng Pan
The incidence of urothelial carcinoma continues to rise annually, particularly among the elderly. Prompt diagnosis and treatment can significantly enhance patient survival and quality of life. Urine cytology remains a...




deep_learning

Deep learning-enabled detection of rare circulating tumor cell clusters in whole blood using label-free, flow cytometry

Lab Chip, 2024, 24,2237-2252
DOI: 10.1039/D3LC00694H, Paper
Open Access
  This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
Nilay Vora, Prashant Shekar, Taras Hanulia, Michael Esmail, Abani Patra, Irene Georgakoudi
We present a deep-learning enabled, label-free flow cytometry platform for identifying circulating tumor cell clusters in whole blood based on the endogenous scattering detected at three wavelengths. The method has potential for in vivo translation.




deep_learning

Hydrogen bond network structures of protonated 2,2,2-trifluoroethanol/ethanol mixed clusters probed by infrared spectroscopy combined with a deep-learning structure sampling approach: the origin of the linear type network preference in protonated fluoroalcohols

Phys. Chem. Chem. Phys., 2024, 26,27751-27762
DOI: 10.1039/D4CP03534H, Paper
Po-Jen Hsu, Atsuya Mizuide, Jer-Lai Kuo, Asuka Fujii
Infrared spectroscopy combined with a deep-learning structure sampling approach reveals the origin of the unusual structure preference in protonated fluorinated alcohol clusters.




deep_learning

When the U.S. catches a cold, Canada sneezes: a lower-bound tale told by deep learning [electronic journal].




deep_learning

Predicting Consumer Default: A Deep Learning Approach [electronic journal].

National Bureau of Economic Research




deep_learning

PharmacoNet: deep learning-guided pharmacophore modeling for ultra-large-scale virtual screening

Chem. Sci., 2024, Advance Article
DOI: 10.1039/D4SC04854G, Edge Article
Open Access
Seonghwan Seo, Woo Youn Kim
PharmacoNet is developed for virtual screening, including deep learning-guided protein-based pharmacophore modeling, a parameterized analytical scoring function, and coarse-grained pose alignment. It is extremely fast yet reasonably accurate.
To cite this article before page numbers are assigned, use the DOI form of citation above.




deep_learning

Deep Learning Enabled Ultra-high Quality NMR Chemical Shift Resolved Spectra

Chem. Sci., 2024, Accepted Manuscript
DOI: 10.1039/D4SC04742G, Edge Article
Open Access
  This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
Zhengxian Yang, Weigang Cai, Wen Zhu, Xiaoxu Zheng, Xiaoqi Shi, Mengjie Qiu, Zhong Chen, Maili Liu, Yanqin Lin
High quality chemical shift resolved spectra have long been pursued in nuclear magnetic resonance (NMR). In order to obtain chemical shift information with high resolution and sensitivity, a neural network...




deep_learning

Morphological analysis of Pd/C nanoparticles using SEM imaging and advanced deep learning

RSC Adv., 2024, 14,35172-35183
DOI: 10.1039/D4RA06113F, Paper
Open Access
Nguyen Duc Thuan, Hoang Manh Cuong, Nguyen Hoang Nam, Nguyen Thi Lan Huong, Hoang Si Hong
In this study, we present a comprehensive approach for the morphological analysis of palladium on carbon (Pd/C) nanoparticles utilizing scanning electron microscopy (SEM) imaging and advanced deep learning techniques.




deep_learning

Limited angle tomography for transmission X-ray microscopy using deep learning

In transmission X-ray microscopy (TXM) systems, the rotation of a scanned sample might be restricted to a limited angular range to avoid collision with other system parts or high attenuation at certain tilting angles. Image reconstruction from such limited angle data suffers from artifacts because of missing data. In this work, deep learning is applied to limited angle reconstruction in TXMs for the first time. Given the challenge of obtaining sufficient real data for training, training a deep neural network from synthetic data is investigated. In particular, U-Net, the state-of-the-art neural network in biomedical imaging, is trained from synthetic ellipsoid data and multi-category data to reduce artifacts in filtered back-projection (FBP) reconstruction images. The proposed method is evaluated on synthetic data and real scanned chlorella data in 100° limited angle tomography. For synthetic test data, U-Net significantly reduces the root-mean-square error (RMSE) from 2.55 × 10⁻³ µm⁻¹ in the FBP reconstruction to 1.21 × 10⁻³ µm⁻¹ in the U-Net reconstruction and also improves the structural similarity (SSIM) index from 0.625 to 0.920. With penalized weighted least-square denoising of measured projections, the RMSE and SSIM are further improved to 1.16 × 10⁻³ µm⁻¹ and 0.932, respectively. For real test data, the proposed method remarkably improves the 3D visualization of the subcellular structures in the chlorella cell, which indicates its important value for nanoscale imaging in biology, nanoscience and materials science.
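The quality figures quoted (RMSE and SSIM between reconstructions and ground truth) can be computed for one's own volumes with numpy and scikit-image as sketched below; the arrays here are random stand-ins for the FBP and U-Net reconstructions, not the paper's data.

```python
import numpy as np
from skimage.metrics import structural_similarity

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

ground_truth = np.random.rand(256, 256).astype(np.float32)   # stand-in slice
fbp_recon = (ground_truth + 0.05 * np.random.randn(256, 256)).astype(np.float32)
unet_recon = (ground_truth + 0.01 * np.random.randn(256, 256)).astype(np.float32)

for name, recon in [("FBP", fbp_recon), ("U-Net", unet_recon)]:
    ssim = structural_similarity(
        ground_truth, recon,
        data_range=float(ground_truth.max() - ground_truth.min()))
    print(name, "RMSE:", rmse(ground_truth, recon), "SSIM:", ssim)
```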




deep_learning

DeepRes: a new deep-learning- and aspect-based local resolution method for electron-microscopy maps

In this article, a method is presented to estimate a new local quality measure for 3D cryoEM maps that adopts the form of a 'local resolution' type of information. The algorithm (DeepRes) is based on deep-learning 3D feature detection. DeepRes is fully automatic and parameter-free, and avoids the issues of most current methods, such as their insensitivity to enhancements owing to B-factor sharpening (unless the 3D mask is changed), among others, which is an issue that has been virtually neglected in the cryoEM field until now. In this way, DeepRes can be applied to any map, detecting subtle changes in local quality after applying enhancement processes such as isotropic filters or substantially more complex procedures, such as model-based local sharpening, non-model-based methods or denoising, that may be very difficult to follow using current methods. It performs as a human observer expects. The comparison with traditional local resolution indicators is also addressed.




deep_learning

NuWave Solutions to Co-host Sentiment Analysis Workshop on Deep Learning, Machine Learning, and Lexicon Based

Would you like to know what your customers, users, contacts, or relatives really think? NuWave Solutions' Executive Vice President, Brian Frutchey, leads participants as they build their own sentiment analysis application with KNIME Analytics.




deep_learning

Livestream Deep Learning World from your Home Office!

Livestream Deep Learning World Munich 2020 from the comfort and safety of your home on 11-12 May 2020.




deep_learning

KDnuggets™ News 20:n16, Apr 22: Scaling Pandas with Dask for Big Data; Dive Into Deep Learning: The Free eBook

4 Steps to ensure your AI/Machine Learning system survives COVID-19; State of the Machine Learning and AI Industry; A Key Missing Part of the Machine Learning Stack; 5 Papers on CNNs Every Data Scientist Should Read