deep learning

Machine learning and deep learning techniques for detecting and mitigating cyber threats in IoT-enabled smart grids: a comprehensive review

The convergence of the internet of things (IoT) with smart grids has transformed energy management, promising greater efficiency, economic robustness and reliability. This integration, however, has also increased the grid's exposure to cyber intrusions, threatening its security and structural integrity. Machine learning (ML) and deep learning (DL) offer robust methodologies for navigating the cybersecurity challenges of IoT-enabled smart grids: ML excels at sifting through voluminous data to identify and classify threats, while DL builds more sophisticated models capable of countering novel attacks, and both leverage intricate data patterns to provide real-time, actionable security intelligence. Yet despite this potential, the threat landscape evolves continuously, and a fully secure smart grid remains an open goal. In this review, we examine the contributions of ML and DL to cybersecurity in IoT-centric smart grids, dissect the predominant cyber threats, critically assess existing security paradigms, and highlight research directions that call for deeper inquiry and innovation.




deep learning

A Deep Learning Based Model to Assist Blind People in Their Navigation

Aim/Purpose: This paper proposes a new approach to developing a deep learning-based prototype wearable model that can assist blind and visually disabled people in recognizing their environments and navigating through them. As a result, visually impaired people will be able to manage day-to-day activities and move through the world around them more easily.

Background: In recent decades, the development of navigational devices has challenged researchers to design smart guidance systems for visually impaired and blind individuals navigating known or unknown environments. Existing research needs to be analyzed from a historical perspective, and early studies of electronic travel aids should be integrated with assistive-technology-based artificial vision models for visually impaired persons.

Methodology: This paper advances our previous research, in which we developed a sensor-based navigation system. Here, navigation of the visually disabled person is carried out with a vision-based, 3D-designed wearable model and a vision-based smart stick. The wearable model uses the neural network-based You Only Look Once (YOLO) algorithm to detect the course of the navigational path, augmented by a GPS-based smart stick. Over 100 images of each of three classes (straight path, left path, and right path) were trained using supervised learning. The model predicts a straight path with 79% mean average precision (mAP), the right path with 83% mAP, and the left path with 85% mAP. The average accuracy of the wearable model is 82.33% and that of the smart stick is 96.14%, giving a combined overall accuracy of 89.24%.

Contribution: This research contributes the design of a low-cost, standalone navigational system that is handy to use and helps people navigate safely in real-time scenarios. A challenging self-built dataset of various paths was generated, and transfer learning was performed on the YOLOv5 model after augmentation and manual annotation. Various metrics, such as model losses, recall, precision, and mAP, were used to analyze and evaluate the model.

Findings: These were the main findings of the study:
• To detect objects, the model uses a recent version of YOLO, the YOLOv5 detector, which may help those with visual impairments improve the quality of their navigational mobility in known or unknown environments.
• The developed standalone model can be integrated into other assistive applications such as Electronic Travel Aids (ETAs).
• A single neural network allows the model to achieve detection accuracy of around 0.823 mAP on the custom dataset, compared with 0.895 on the COCO dataset. Its 45 FPS detection speed has made this detector family popular.

Recommendations for Practitioners: Practitioners can improve the model's efficiency by increasing the sample size and the number of classes used in training.

Recommendations for Researchers: Various algorithms detect objects in an image or live camera feed, e.g., R-CNN, RetinaNet, Single Shot Detector (SSD), and YOLO. Researchers can choose YOLO owing to its superior performance; moreover, YOLOv5 outperforms earlier versions such as YOLOv3 and YOLOv4 in terms of speed and accuracy.

Impact on Society: We discuss new low-cost technologies that enable visually impaired people to navigate effectively in indoor environments.

Future Research: Future work could apply recurrent neural networks to a larger dataset with special AI-based processors to avoid latency.
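To make the detection stage concrete, here is a minimal inference sketch using the public ultralytics/yolov5 torch.hub interface; the checkpoint name best.pt and the frame path are placeholders assumed for illustration, not artifacts released by the authors.

```python
import torch

# Load a YOLOv5 model fine-tuned on the three path classes
# (straight/left/right). 'best.pt' is a placeholder for the checkpoint
# produced by transfer learning on the self-built path dataset.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

results = model('frame.jpg')              # one frame from the wearable camera
detections = results.pandas().xyxy[0]     # DataFrame: box, confidence, class
print(detections[['name', 'confidence']])
```

Fine-tuning itself is typically run with the repository's train.py script on the augmented, manually annotated dataset, as the abstract describes.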




deep learning

A forensic approach: identification of source printer through deep learning

Forensic investigations of document forgery have elevated the need for source identification of printed documents over the past few years. A reliable and acceptable safety-testing instrument is needed to determine the credibility of printed materials. The system proposed in this study uses a neural network to identify the source printer in forensic document forgery investigations. The study uses a deep neural network method that relies on the quality, texture, and accuracy of images printed by various models of Canon and HP printers. The datasets were trained and tested to predict accuracy using a logistic function, with the goal of creating a reliable and acceptable safety-testing instrument for determining the credibility of printed materials. The technique classified the source printer model with 95.1% accuracy. The proposed method for identifying the source printer is a non-destructive technique.




deep learning

Bi-LSTM GRU-based deep learning architecture for export trade forecasting

Econometric models and prediction techniques are significant tools for assessing a country's economic outlook and achieving higher economic growth. Policymakers are always concerned with correct future estimates of economic variables so they can take the right economic decisions, design better policies and implement them effectively. There is therefore a need to improve the predictive accuracy of existing models and to use more sophisticated and superior algorithms for accurate forecasting. Deep learning models such as recurrent neural networks are considered superior for forecasting as they provide better predictive results than many econometric models. Against this backdrop, this paper presents the feasibility of using different deep-learning neural network architectures for trade forecasting. It predicts export trade using different recurrent architectures: a vanilla recurrent neural network (VRNN), a bi-directional long short-term memory network (Bi-LSTM), a bi-directional gated recurrent unit (Bi-GRU) and a hybrid bi-directional LSTM and GRU neural network. The performances of these models are evaluated and compared using metrics such as Mean Square Error (MSE), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Root Mean Squared Logarithmic Error (RMSLE) and the coefficient of determination, R-squared (R²). The results validate effective export prediction for India.
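As a rough illustration of the hybrid architecture, the sketch below stacks a bidirectional LSTM in front of a GRU in PyTorch; the layer sizes, the single input feature, and the one-step forecast horizon are assumptions for illustration rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class BiLSTMGRU(nn.Module):
    """Hybrid Bi-LSTM + GRU forecaster (layer sizes are illustrative)."""
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.bilstm = nn.LSTM(n_features, hidden, batch_first=True,
                              bidirectional=True)
        self.gru = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # next-period export value

    def forward(self, x):                  # x: (batch, time, features)
        h, _ = self.bilstm(x)
        h, _ = self.gru(h)
        return self.head(h[:, -1])         # forecast from the last time step

model = BiLSTMGRU()
loss_fn = nn.MSELoss()                     # MSE, one of the reported metrics
pred = model(torch.randn(8, 24, 1))        # 8 series, 24 past periods each
```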




deep learning

Intelligence assistant using deep learning: use case in crop disease prediction

In India, 70% of the population depends on agriculture, yet agriculture generates only 13% of the country's gross domestic product. Several factors contribute to high levels of stress among farmers in India, such as increased input costs, droughts, and reduced revenues. The problem lies in the absence of an integrated farm advisory system. Farmers need help to bridge this information gap, and they need it early in the crop's lifecycle to prevent the crop from being destroyed by pests or diseases. This research develops deep learning models such as ResNet18 and DenseNet121 to help farmers diagnose crop diseases earlier and take corrective actions. By using deep learning techniques to detect these crop diseases from images farmers can scan or capture with their smartphones, we can fill in the knowledge gap. To facilitate use by farmers, the models are deployed on Android-based smartphones.
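A minimal sketch of the transfer-learning setup follows, using torchvision's pretrained backbones; the number of disease classes is an assumption, and the deployment comment names one common route to Android rather than the authors' exact pipeline.

```python
import torch.nn as nn
from torchvision import models

num_classes = 10  # assumed number of crop-disease labels

# DenseNet121: swap the ImageNet classifier head for the disease classes.
densenet = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
densenet.classifier = nn.Linear(densenet.classifier.in_features, num_classes)

# ResNet18 variant: replace the final fully connected layer instead.
resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)

# One common route to Android deployment is to serialize the fine-tuned
# model for PyTorch Mobile, e.g. torch.jit.script(resnet).save('crop.pt').
```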




deep learning

The performance evaluation of teaching reform based on hierarchical multi-task deep learning

To address the low accuracy and long evaluation times of traditional methods for evaluating teaching reform performance, a performance evaluation method based on hierarchical multi-task deep learning is proposed. An evaluation indicator system is first constructed according to established principles for index-system design. The weight of each evaluation index is calculated through the analytic hierarchy process, and the resulting weights serve as input samples for the model. A hierarchical multi-task deep learning model for teaching reform performance evaluation is then built, yielding the final teaching reform performance score. Experiments show that, compared with the baseline methods, this approach achieves higher evaluation accuracy in less time and can be applied further in related fields.
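Since the indicator weights that feed the model come from the analytic hierarchy process, a minimal numpy sketch of that step may help; the pairwise comparison matrix here is an illustrative example, not the paper's judgments.

```python
import numpy as np

# Reciprocal pairwise comparison matrix for three evaluation indices:
# A[i, j] encodes how much more important index i is than index j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])   # illustrative judgments

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, eigvals.real.argmax()].real
weights = principal / principal.sum()   # normalised weights -> model inputs
print(weights)
```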




deep learning

Improving the Accuracy of Facial Micro-Expression Recognition: Spatio-Temporal Deep Learning with Enhanced Data Augmentation and Class Balancing

Aim/Purpose: This study presents a novel deep learning-based framework designed to enhance spontaneous micro-expression recognition by effectively increasing the amount and variety of data and balancing the class distribution to improve recognition accuracy.

Background: Micro-expression recognition using deep learning requires large amounts of data, yet micro-expression datasets are relatively small and their class distributions are imbalanced.

Methodology: This study developed a framework using a deep learning-based model to recognize spontaneous micro-expressions on a person's face. The framework includes several technical stages, including image and data preprocessing. In data preprocessing, data augmentation is carried out to increase the amount and variety of data, and class balancing is applied to balance the distribution of sample classes in the dataset.

Contribution: This study's essential contribution lies in enhancing the accuracy of micro-expression recognition and overcoming the limited amount of data and imbalanced class distribution that typically lead to overfitting.

Findings: The results indicate that the proposed framework, with its data preprocessing stages and deep learning model, significantly increases the accuracy of micro-expression recognition by overcoming dataset limitations and producing a balanced class distribution.

Recommendations for Practitioners: Practitioners can utilize the model produced by the proposed framework, developed to recognize spontaneous micro-expressions on a person's face, by implementing it in emotional-analysis applications based on facial micro-expressions.

Recommendations for Researchers: Researchers developing spontaneous micro-expression recognition frameworks for analyzing hidden emotions play an essential role in advancing this field. They should continue to explore deep learning-based techniques that increase the amount and variety of data, pursue solutions for balancing sample classes across micro-expression datasets, and develop model architectures suited to the recognition task and the characteristics of different datasets.

Impact on Society: The proposed framework could significantly impact society by providing a reliable model for recognizing spontaneous micro-expressions in real-world applications, ranging from security systems and criminal investigations to healthcare and emotional analysis.

Future Research: Developing a spontaneous micro-expression recognition framework based on spatial and temporal flow requires the learning model to classify optimal features. Our future work will focus on exploring micro-expression features by developing alternative learning models and increasing the weights of spatial and temporal features.
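The two preprocessing ideas the framework relies on, augmentation for variety and sampling for class balance, can be sketched as follows in PyTorch; the transform choices and the toy label list are assumptions for illustration, not the paper's settings.

```python
import torch
from torchvision import transforms
from torch.utils.data import WeightedRandomSampler

# (1) Augmentation: enlarge and diversify the micro-expression samples.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2),
    transforms.ToTensor(),
])

# (2) Class balancing: draw rare emotion classes more often during training.
labels = [0, 0, 0, 0, 1, 1, 2]            # toy class indices, one per sample
counts = torch.bincount(torch.tensor(labels))
sample_weights = 1.0 / counts[torch.tensor(labels)].float()
sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(labels), replacement=True)
# loader = DataLoader(dataset, batch_size=32, sampler=sampler)
```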




deep learning

IRNN-SS: deep learning for optimised protein secondary structure prediction through PROMOTIF and DSSP annotation fusion

DSSP stands as a foundational tool in the domain of protein secondary structure prediction, yet it encounters notable challenges in accurately annotating irregular structures, such as β-turns and γ-turns, which constitute approximately 25%-30% and 10%-15% of protein turns, respectively. This limitation arises from DSSP's reliance on hydrogen-bond analysis, resulting in annotation gaps and reduced consensus on irregular structures. Alternatively, PROMOTIF excels at identifying these irregular structures using phi-psi information. Despite their complementary strengths, previous methodologies utilised DSSP and PROMOTIF separately, leading to disparate prediction methods for protein secondary structures and hampering the comprehensive structure analysis crucial for drug development. In this work, we bridge this gap with an annotation fusion approach, combining DSSP structures with β- and γ-turns. We introduce IRNN-SS, a model employing deep inception and bidirectional gated recurrent neural networks, achieving 77.4% prediction accuracy on benchmark datasets, outpacing current models.
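The model name suggests inception-style convolutions feeding a bidirectional recurrent stack; a much-reduced sketch of that combination is below, with the input encoding, channel widths, and the number of fused structure classes all assumed for illustration.

```python
import torch
import torch.nn as nn

class InceptionBiGRU(nn.Module):
    """Reduced sketch: parallel conv branches + bidirectional GRU."""
    def __init__(self, in_dim=21, n_classes=10):  # assumed sizes
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_dim, 32, k, padding=k // 2) for k in (1, 3, 5)]
        )
        self.bigru = nn.GRU(96, 64, batch_first=True, bidirectional=True)
        self.out = nn.Linear(128, n_classes)  # per-residue structure label

    def forward(self, x):          # x: (batch, seq_len, in_dim)
        x = x.transpose(1, 2)      # Conv1d expects (batch, channels, seq)
        x = torch.cat([b(x) for b in self.branches], dim=1)
        x, _ = self.bigru(x.transpose(1, 2))
        return self.out(x)

logits = InceptionBiGRU()(torch.randn(2, 120, 21))  # (2, 120, n_classes)
```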




deep learning

Optimisation with deep learning for leukaemia classification in federated learning

Leukaemia is the most common kind of blood cancer in people of all ages. A fractional mayfly optimisation (FMO) based DenseNet is proposed for the identification and classification of leukaemia in federated learning (FL). Initially, the input image is pre-processed by an adaptive median filter (AMF). Then, cell segmentation is done using Scribble2Label, after which image augmentation is applied. Finally, leukaemia classification is performed using DenseNet, trained with the FMO, which is devised by merging the mayfly algorithm (MA) with the fractional concept (FC). Following local training, the server performs local updating and aggregation using a weighted average based on the RV coefficient. The results showed that FMO-DenseNet attained maximum accuracy, true negative rate (TNR) and true positive rate (TPR) of 94.3%, 96.5% and 95.3%, respectively. Moreover, FMO-DenseNet gained minimum mean squared error (MSE) and root mean squared error (RMSE) of 5.7%, 9.2% and 30.4%.
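The server-side step, weighted averaging of client updates, can be sketched directly; the coefficients below stand in for the RV-coefficient weights the paper describes, and their values are assumptions.

```python
import numpy as np

def aggregate(client_weights, coeffs):
    """Layer-wise weighted average of per-client parameter lists."""
    coeffs = np.asarray(coeffs, dtype=float)
    coeffs = coeffs / coeffs.sum()                 # normalise the weights
    return [sum(c * layer for c, layer in zip(coeffs, layers))
            for layers in zip(*client_weights)]    # group arrays by layer

# Three clients, each holding two parameter arrays of the same shapes.
clients = [[np.ones((2, 2)) * i, np.ones(3) * i] for i in (1.0, 2.0, 3.0)]
global_params = aggregate(clients, coeffs=[0.5, 0.3, 0.2])  # assumed weights
```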




deep learning

Deep learning-based lung cancer detection using CT images

This work demonstrates a hybrid deep learning (DL) model for lung cancer (LC) detection using CT images. First, the input image is passed to the pre-processing stage, where it is filtered using a BF, and the filtered image is subjected to lung lobe segmentation using squeeze U-SegNet. Feature extraction is then performed, mining features that include entropy with fuzzy local binary patterns (EFLBP), local optimal oriented pattern (LOOP), and grey level co-occurrence matrix (GLCM) features. After feature extraction, LC is detected using the hybrid efficient-ShuffleNet (HES-Net) method, wherein HES-Net is established by incorporating EfficientNet and ShuffleNet. The presented HES-Net for LC detection is evaluated in terms of TNR, TPR, and accuracy, achieving values of 92.1%, 93.1%, and 91.3%, respectively.
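Of the hand-crafted features listed, the GLCM part is easy to make concrete with scikit-image; the distances, angles, and property set below are assumptions rather than the paper's settings (older scikit-image releases spell these functions greycomatrix/greycoprops).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Stand-in for a segmented lung-lobe region from a CT slice.
lobe = (np.random.rand(128, 128) * 255).astype(np.uint8)

glcm = graycomatrix(lobe, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = np.hstack([graycoprops(glcm, p).ravel()
                      for p in ('contrast', 'homogeneity',
                                'energy', 'correlation')])
print(features.shape)   # one GLCM feature vector for the classifier
```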




deep learning

Loss Function for Deep Learning to Model Dynamical Systems

Takahito YOSHIDA, Takaharu YAGUCHI, Takashi MATSUBARA, Vol. E107-D, No. 11, pp. 1458-1462
Accurately simulating physical systems is essential in various fields. In recent years, deep learning has been used to automatically build models of such systems by learning from data. One such method is the neural ordinary differential equation (neural ODE), which treats the output of a neural network as the time derivative of the system states. However, while this and related methods have shown promise, their training strategies still require further development. Inspired by error analysis techniques in numerical analysis, while replacing numerical errors with modeling errors, we propose the error-analytic strategy to address this issue. This strategy captures long-term errors and thus improves the accuracy of long-term predictions.
Publication Date: 2024/11/01
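For orientation, here is a generic neural-ODE training sketch in PyTorch with a plain Euler rollout and a multi-step trajectory loss; the rollout loss is only a simple way to expose long-term error, not the paper's error-analytic strategy, and the network size, step count, and data are assumptions.

```python
import torch
import torch.nn as nn

# Learned vector field: dx/dt = f(x), for a two-dimensional system.
f = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))

def rollout(x0, steps, dt=0.01):
    """Euler integration of the learned dynamics (stand-in for an ODE solver)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return torch.stack(xs, dim=1)          # (batch, steps + 1, 2)

x0 = torch.randn(16, 2)                    # initial states (placeholder data)
target = torch.randn(16, 51, 2)            # observed trajectories (placeholder)
loss = ((rollout(x0, 50) - target) ** 2).mean()  # penalise long-horizon error
loss.backward()
```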




deep learning

What's going on? Developing reflexivity in the management classroom: From surface to deep learning and everything else in between.

'What's going on?' Within the context of our critically informed teaching practice, we see moments of deep learning and reflexivity in classroom discussions and assessments. Yet, these moments of criticality are interspersed with surface learning and reflection. We draw on dichotomous, linear developmental, and messy explanations of learning processes to empirically explore the learning journeys of 20 international Chinese and 42 domestic New Zealand students. We find contradictions within our own data, and between our findings and the extant literature. We conclude that expressions of surface learning and reflection are considerably more complex than they first appear. Moreover, developing critical reflexivity is a far more subtle, messy, and emotional experience than previously understood. We present the theoretical and pedagogical significance of these findings when we consider the implications for the learning process and the practice of management education.




deep learning

3xLOGIC to debut its X-Series edge-based deep learning analytics cameras at ISC West

3xLOGIC, a provider of integrated and intelligent security and business solutions, will debut its recently launched edge-based deep learning analytics cameras at ISC West 2024, Booth #23059.




deep learning

Deep learning to overcome Zernike phase-contrast nanoCT artifacts for automated micro-nano porosity segmentation in bone

Bone material contains a hierarchical network of micro- and nano-cavities and channels, known as the lacuna-canalicular network (LCN), that is thought to play an important role in mechanobiology and turnover. The LCN comprises micrometer-sized lacunae, voids that house osteocytes, and submicrometer-sized canaliculi that connect bone cells. Characterization of this network in three dimensions is crucial for many bone studies. To quantify X-ray Zernike phase-contrast nanotomography data, deep learning is used to isolate and assess porosity in artifact-laden tomographies of zebrafish bones. A technical solution is proposed to overcome the halo and shade-off domains in order to reliably obtain the distribution and morphology of the LCN in the tomographic data. Convolutional neural network (CNN) models are utilized with increasing numbers of images, repeatedly validated by 'error loss' and 'accuracy' metrics. U-Net and Sensor3D CNN models were trained on data obtained from two different synchrotron Zernike phase-contrast transmission X-ray microscopes, the ANATOMIX beamline at SOLEIL (Paris, France) and the P05 beamline at PETRA III (Hamburg, Germany). The Sensor3D CNN model with a smaller batch size of 32 and a training data size of 70 images showed the best performance (accuracy 0.983 and error loss 0.032). The analysis procedures, validated by comparison with human-identified ground-truth images, correctly identified the voids within the bone matrix. This proposed approach may have further application to classify structures in volumetric images that contain non-linear artifacts that degrade image quality and hinder feature identification.




deep learning

X-ray lens figure errors retrieved by deep learning from several beam intensity images

The phase problem in the context of focusing synchrotron beams with X-ray lenses is addressed. The feasibility of retrieving the surface error of a lens system by using only the intensity of the propagated beam at several distances is demonstrated. A neural network, trained with a few thousand simulations using random errors, can accurately predict the lens error profile that accounts for all aberrations. This demonstrates the feasibility of routinely measuring the aberrations induced by an X-ray lens, or another optical system, using only a few intensity images.




deep learning

Dynamic X-ray speckle-tracking imaging with high-accuracy phase retrieval based on deep learning

Speckle-tracking X-ray imaging is an attractive candidate for dynamic X-ray imaging owing to its flexible setup and simultaneous yields of phase, transmission and scattering images. However, traditional speckle-tracking imaging methods suffer from phase distortion at locations with abrupt changes in density, which is always the case for real samples, limiting the applications of the speckle-tracking X-ray imaging method. In this paper, we report a deep-learning based method which can achieve dynamic X-ray speckle-tracking imaging with high-accuracy phase retrieval. The calibration results of a phantom show that the profile of the retrieved phase is highly consistent with the theoretical one. Experiments of polyurethane foaming demonstrated that the proposed method revealed the evolution of the complicated microstructure of the bubbles accurately. The proposed method is a promising solution for dynamic X-ray imaging with high-accuracy phase retrieval, and has extensive applications in metrology and quantitative analysis of dynamics in material science, physics, chemistry and biomedicine.




deep learning

The prediction of single-molecule magnet properties via deep learning

This paper uses deep learning to present a proof-of-concept for data-driven chemistry in single-molecule magnets (SMMs). Previous discussions within SMM research have proposed links between molecular structures (crystal structures) and single-molecule magnetic properties; however, these have only interpreted the results. Therefore, this study introduces a data-driven approach to predict the properties of SMM structures using deep learning. The deep-learning model learns the structural features of the SMM molecules by extracting the single-molecule magnetic properties from the 3D coordinates presented in this paper. The model accurately determined whether a molecule was a single-molecule magnet, with an accuracy rate of approximately 70% in predicting the SMM properties. The deep-learning model found SMMs from 20 000 metal complexes extracted from the Cambridge Structural Database. Using deep-learning models for predicting SMM properties and guiding the design of novel molecules is promising.




deep learning

DLSIA: Deep Learning for Scientific Image Analysis

DLSIA (Deep Learning for Scientific Image Analysis) is a Python-based machine learning library that empowers scientists and researchers across diverse scientific domains with a range of customizable convolutional neural network (CNN) architectures for a wide variety of tasks in image analysis to be used in downstream data processing. DLSIA features easy-to-use architectures, such as autoencoders, tunable U-Nets and parameter-lean mixed-scale dense networks (MSDNets). Additionally, this article introduces sparse mixed-scale networks (SMSNets), generated using random graphs, sparse connections and dilated convolutions connecting different length scales. For verification, several DLSIA-instantiated networks and training scripts are employed in multiple applications, including inpainting for X-ray scattering data using U-Nets and MSDNets, segmenting 3D fibers in X-ray tomographic reconstructions of concrete using an ensemble of SMSNets, and leveraging autoencoder latent spaces for data compression and clustering. As experimental data continue to grow in scale and complexity, DLSIA provides accessible CNN construction and abstracts CNN complexities, allowing scientists to tailor their machine learning approaches, accelerate discoveries, foster interdisciplinary collaboration and advance research in scientific image analysis.





deep learning

Episode 391: Jeremy Howard on Deep Learning and fast.ai

Jeremy Howard from fast.ai explains deep learning from concept to implementation. Thanks to transfer learning, individuals and small organizations can get state-of-the-art results on machine learning problems using the open source fastai library...




deep learning

Episode 549: William Falcon Optimizing Deep Learning Models

William Falcon of Lightning AI discusses how to optimize deep learning models using the Lightning platform; optimization is a necessary step toward creating a production application. Philip Winston spoke with Falcon about PyTorch, PyTorch Lightning...




deep learning

SE Radio 594: Sean Moriarity on Deep Learning with Elixir and Axon

Sean Moriarity, creator of the Axon deep learning framework, co-creator of the Nx library, and author of Machine Learning in Elixir and Genetic Algorithms in Elixir, published by the Pragmatic Bookshelf, speaks with SE Radio host Gavin Henry about what deep learning (neural networks) means today. Using a practical example with deep learning for fraud detection, they explore what Axon is and why it was created. Moriarity describes why the BEAM is ideal for machine learning, and why he dislikes the term “neural network.” They discuss the need for deep learning, its history, how it offers a good fit for many of today’s complex problems, where it shines and when not to use it. Moriarity goes into depth on a range of topics, including how to get datasets in shape, supervised and unsupervised learning, feed-forward neural networks, Nx.serving, decision trees, gradient descent, linear regression, logistic regression, support vector machines, and random forests. The episode considers what a model looks like, what training is, labeling, classification, regression tasks, hardware resources needed, EXGBoost, Jax, PyIgnite, and Explorer. Finally, they look at what’s involved in the ongoing lifecycle or operational side of Axon once a workflow is put into production, so you can safely back it all up and feed in new data. Brought to you by IEEE Computer Society and IEEE Software magazine. This episode sponsored by Miro.




deep learning

[ F.748.12 (06/21) ] - Deep learning software framework evaluation methodology

Deep learning software framework evaluation methodology




deep learning

Gilad Gressel On Why You Should Watch His Newest Course: Deep Learning With Python

Hi, my name is Gilad Gressel and I’d like to tell you about my new course: Deep Learning with Python. Deep learning is an old technology that has recently been sweeping through the field of machine learning and artificial intelligence. Deep learning powers many of the cutting edge technologies that appear to be “magic” in [...]




deep learning

Zebra Technologies adds new deep learning tools to Aurora machine vision software

Zebra Technologies Corporation – the digital solution provider enabling businesses to intelligently connect data, assets and people – has introduced a series of advanced AI features enhancing its Aurora machine vision software to provide deep learning capabilities for complex visual inspection use cases.




deep learning

Deep learning tool helps NASA discover 301 exoplanets

NASA scientists used a neural network called ExoMiner to examine data from Kepler, increasing the total tally of confirmed exoplanets in the universe.




deep learning

AI / Deep Learning applications course – with hands-on experience

New workshop in London / remote: Enterprise AI workshop – Sep 2018 – in London or remote. AI / Deep Learning applications course / mentoring program – hands-on experience with limited spaces. I am pleased to announce a new course on AI Applications. The course combines elements of teaching, coaching and [...]




deep learning

G-Protein Signaling in Alzheimer's Disease: Spatial Expression Validation of Semi-supervised Deep Learning-Based Computational Framework

Systemic study of pathogenic pathways and interrelationships underlying genes associated with Alzheimer's disease (AD) facilitates the identification of new targets for effective treatments. Recently available large-scale multiomics datasets provide opportunities to use computational approaches for such studies. Here, we devised a novel disease gene identification (digID) computational framework that consists of a semi-supervised deep learning classifier to predict AD-associated genes and a protein–protein interaction (PPI) network-based analysis to prioritize the importance of these predicted genes in AD. digID predicted 1,529 AD-associated genes and revealed potentially new AD molecular mechanisms and therapeutic targets including GNAI1 and GNB1, two G-protein subunits that regulate cell signaling, and KNG1, an upstream modulator of CDC42 small G-protein signaling and mediator of inflammation and candidate coregulator of amyloid precursor protein (APP). Analysis of mRNA expression validated their dysregulation in AD brains but further revealed the significant spatial patterns in different brain regions as well as among different subregions of the frontal cortex and hippocampi. Super-resolution STochastic Optical Reconstruction Microscopy (STORM) further demonstrated their subcellular colocalization and molecular interactions with APP in a transgenic mouse model of both sexes with AD-like mutations. These studies support the predictions made by digID while highlighting the importance of concurrent biological validation of computationally identified gene clusters as potential new AD therapeutic targets.




deep learning

Deep Learning-Based Reconstruction of 3D T1 SPACE Vessel Wall Imaging Provides Improved Image Quality with Reduced Scan Times: A Preliminary Study [ARTIFICIAL INTELLIGENCE]

BACKGROUND AND PURPOSE:

Intracranial vessel wall imaging is technically challenging to implement, given the simultaneous requirements of high spatial resolution, excellent blood and CSF signal suppression, and clinically acceptable gradient times. Herein, we present our preliminary findings on the evaluation of a deep learning–optimized sequence using T1-weighted imaging.

MATERIALS AND METHODS:

Clinical and optimized deep learning–based image reconstruction T1 3D Sampling Perfection with Application optimized Contrast using different flip angle Evolution (SPACE) were evaluated, comparing noncontrast sequences in 10 healthy controls and postcontrast sequences in 5 consecutive patients. Images were reviewed on a Likert-like scale by 4 fellowship-trained neuroradiologists. Scores (range, 1–4) were separately assigned for 11 vessel segments in terms of vessel wall and lumen delineation. Additionally, images were evaluated in terms of overall background noise, image sharpness, and homogeneous CSF signal. Segment-wise scores were compared using paired samples t tests.

RESULTS:

The scan times for the clinical and deep learning–based image reconstruction sequences were 7:26 minutes and 5:23 minutes, respectively. Deep learning–based image reconstruction images showed consistently higher wall signal and lumen visualization scores, with the differences being statistically significant in most vessel segments on both pre- and postcontrast images. Deep learning–based image reconstruction had lower background noise, higher image sharpness, and uniform CSF signal. Depiction of intracranial pathologies was better or similar on the deep learning–based image reconstruction.

CONCLUSIONS:

Our preliminary findings suggest that deep learning–based image reconstruction–optimized intracranial vessel wall imaging sequences may be helpful in achieving shorter gradient times with improved vessel wall visualization and overall image quality. These improvements may help with wider adoption of intracranial vessel wall imaging in clinical practice and should be further validated on a larger cohort.





deep learning

Canon Launches EOS R1 and EOS R5 Mark II With High-Speed Burst Shooting, Deep Learning AF, & More

Canon, a known face in digital imaging solutions, recently announced the introduction of two highly anticipated cameras to its EOS R series: the EOS R1 and EOS R5 Mark II. These cameras boast next-generation intelligent features, superior quality, impressive speed, and more.




deep learning

Screening for Urothelial Carcinoma Cells in Urine Based on Digital Holographic Flow Cytometry through Machine Learning and Deep Learning Method

Lab Chip, 2024, Accepted Manuscript
DOI: 10.1039/D3LC00854A, Paper
Lu Xin, Xi Xiao, Wen Xiao, Ran Peng, Hao Wang, Feng Pan
The incidence of urothelial carcinoma continues to rise annually, particularly among the elderly. Prompt diagnosis and treatment can significantly enhance patient survival and quality of life. Urine cytology remains a...




deep learning

Deep learning-enabled detection of rare circulating tumor cell clusters in whole blood using label-free, flow cytometry

Lab Chip, 2024, 24,2237-2252
DOI: 10.1039/D3LC00694H, Paper
Open Access
Nilay Vora, Prashant Shekar, Taras Hanulia, Michael Esmail, Abani Patra, Irene Georgakoudi
We present a deep-learning enabled, label-free flow cytometry platform for identifying circulating tumor cell clusters in whole blood based on the endogenous scattering detected at three wavelengths. The method has potential for in vivo translation.




deep learning

When the U.S. catches a cold, Canada sneezes: a lower-bound tale told by deep learning [electronic journal].




deep learning

Predicting Consumer Default: A Deep Learning Approach [electronic journal].

National Bureau of Economic Research




deep learning

PharmacoNet: deep learning-guided pharmacophore modeling for ultra-large-scale virtual screening

Chem. Sci., 2024, Advance Article
DOI: 10.1039/D4SC04854G, Edge Article
Open Access
Seonghwan Seo, Woo Youn Kim
PharmacoNet is developed for virtual screening, including deep learning-guided protein-based pharmacophore modeling, a parameterized analytical scoring function, and coarse-grained pose alignment. It is extremely fast yet reasonably accurate.




deep learning

Deep Learning Enabled Ultra-high Quality NMR Chemical Shift Resolved Spectra

Chem. Sci., 2024, Accepted Manuscript
DOI: 10.1039/D4SC04742G, Edge Article
Open Access
Zhengxian Yang, Weigang Cai, Wen Zhu, Xiaoxu Zheng, Xiaoqi Shi, Mengjie Qiu, Zhong Chen, Maili Liu, Yanqin Lin
High quality chemical shift resolved spectra have long been pursued in nuclear magnetic resonance (NMR). In order to obtain chemical shift information with high resolution and sensitivity, a neural network...




deep learning

Morphological analysis of Pd/C nanoparticles using SEM imaging and advanced deep learning

RSC Adv., 2024, 14,35172-35183
DOI: 10.1039/D4RA06113F, Paper
Open Access
Nguyen Duc Thuan, Hoang Manh Cuong, Nguyen Hoang Nam, Nguyen Thi Lan Huong, Hoang Si Hong
In this study, we present a comprehensive approach for the morphological analysis of palladium on carbon (Pd/C) nanoparticles utilizing scanning electron microscopy (SEM) imaging and advanced deep learning techniques.




deep learning

Limited angle tomography for transmission X-ray microscopy using deep learning

In transmission X-ray microscopy (TXM) systems, the rotation of a scanned sample might be restricted to a limited angular range to avoid collision with other system parts or high attenuation at certain tilting angles. Image reconstruction from such limited angle data suffers from artifacts because of missing data. In this work, deep learning is applied to limited angle reconstruction in TXMs for the first time. With the challenge to obtain sufficient real data for training, training a deep neural network from synthetic data is investigated. In particular, U-Net, the state-of-the-art neural network in biomedical imaging, is trained from synthetic ellipsoid data and multi-category data to reduce artifacts in filtered back-projection (FBP) reconstruction images. The proposed method is evaluated on synthetic data and real scanned chlorella data in 100° limited angle tomography. For synthetic test data, U-Net significantly reduces the root-mean-square error (RMSE) from 2.55 × 10⁻³ µm⁻¹ in the FBP reconstruction to 1.21 × 10⁻³ µm⁻¹ in the U-Net reconstruction and also improves the structural similarity (SSIM) index from 0.625 to 0.920. With penalized weighted least-square denoising of measured projections, the RMSE and SSIM are further improved to 1.16 × 10⁻³ µm⁻¹ and 0.932, respectively. For real test data, the proposed method remarkably improves the 3D visualization of the subcellular structures in the chlorella cell, which indicates its important value for nanoscale imaging in biology, nanoscience and materials science.
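A much-reduced sketch of the post-processing idea, a small encoder-decoder that residually corrects an FBP slice, is shown below; the depth, channel counts, and residual connection are assumptions, as the study trains a full U-Net on synthetic ellipsoid and multi-category data.

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """Tiny stand-in for the U-Net that cleans limited-angle FBP slices."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.dec = nn.Sequential(nn.Conv2d(48, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, fbp):               # fbp: (batch, 1, H, W) FBP slice
        e = self.enc(fbp)
        m = self.up(self.mid(self.down(e)))
        skip = torch.cat([e, m], dim=1)   # skip connection, U-Net style
        return fbp + self.dec(skip)       # residual artifact correction

clean = MiniUNet()(torch.randn(1, 1, 64, 64))  # synthetic test slice
```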




deep learning

NuWave Solutions to Co-host Sentiment Analysis Workshop on Deep Learning, Machine Learning, and Lexicon Based

Would you like to know what your customers, users, contacts, or relatives really think? NuWave Solutions' Executive Vice President, Brian Frutchey, leads participants as they build their own sentiment analysis application with KNIME Analytics.




deep learning

Livestream Deep Learning World from your Home Office!

Livestream Deep Learning World Munich 2020 from the comfort and safety of your home on 11-12 May 2020.




deep learning

KDnuggets™ News 20:n16, Apr 22: Scaling Pandas with Dask for Big Data; Dive Into Deep Learning: The Free eBook

4 Steps to ensure your AI/Machine Learning system survives COVID-19; State of the Machine Learning and AI Industry; A Key Missing Part of the Machine Learning Stack; 5 Papers on CNNs Every Data Scientist Should Read




deep learning

Math and Architectures of Deep Learning

This hands-on book bridges the gap between theory and practice, showing you the math of deep learning algorithms side by side with an implementation in PyTorch. You can save 40% off Math and Architectures of Deep Learning until May 13! Just enter the code nlkdarch40 at checkout when you buy from manning.com.




deep learning

Fighting Coronavirus With AI: Improving Testing with Deep Learning and Computer Vision

This post will cover how testing is done for the coronavirus, why it's important in battling the pandemic, and how deep learning tools for medical imaging can help us improve the quality of COVID-19 testing.





deep learning

Top 10 Toolkits and Libraries for Deep Learning in 2020

Deep Learning is a branch of artificial intelligence and a subset of machine learning that focuses on networks capable of learning, usually unsupervised, from unstructured and other forms of data. It is also known as deep structured learning or differentiable programming. Architectures inspired by deep learning find use in a range of fields, such as...




deep learning

Lake Ice Detection from Sentinel-1 SAR with Deep Learning. (arXiv:2002.07040v2 [eess.IV] UPDATED)

Lake ice, as part of the Essential Climate Variable (ECV) lakes, is an important indicator to monitor climate change and global warming. The spatio-temporal extent of lake ice cover, along with the timings of key phenological events such as freeze-up and break-up, provides important cues about the local and global climate. We present a lake ice monitoring system based on the automatic analysis of Sentinel-1 Synthetic Aperture Radar (SAR) data with a deep neural network. In previous studies that used optical satellite imagery for lake ice monitoring, frequent cloud cover was a main limiting factor, which we overcome thanks to the ability of microwave sensors to penetrate clouds and observe the lakes regardless of the weather and illumination conditions. We cast ice detection as a two-class (frozen, non-frozen) semantic segmentation problem and solve it using a state-of-the-art deep convolutional network (CNN). We report results on two winters (2016-17 and 2017-18) and three alpine lakes in Switzerland. The proposed model reaches mean Intersection-over-Union (mIoU) scores >90% on average, and >84% even for the most difficult lake. Additionally, we perform cross-validation tests and show that our algorithm generalises well across unseen lakes and winters.




deep learning

Novel Deep Learning Framework for Wideband Spectrum Characterization at Sub-Nyquist Rate. (arXiv:1912.05255v2 [eess.SP] UPDATED)

Introduction of spectrum-sharing in 5G and subsequent generation networks demands base-station(s) with the capability to characterize the wideband spectrum spanned over licensed, shared and unlicensed non-contiguous frequency bands. Spectrum characterization involves the identification of vacant bands along with the center frequency and parameters (energy, modulation, etc.) of occupied bands. Such characterization at Nyquist sampling is area- and power-hungry due to the need for high-speed digitization. Though sub-Nyquist sampling (SNS) offers an excellent alternative when the spectrum is sparse, it suffers from poor performance at low signal to noise ratio (SNR) and demands careful design and integration of digital reconstruction, tunable channelizer and characterization algorithms. In this paper, we propose a novel deep-learning framework via a single unified pipeline to accomplish two tasks: 1) reconstruct the signal directly from sub-Nyquist samples, and 2) characterize the wideband spectrum. The proposed approach eliminates the need for complex signal conditioning between reconstruction and characterization and does not need complex tunable channelizers. We extensively compare the performance of our framework for a wide range of modulation schemes, SNRs and channel conditions. We show that the proposed framework outperforms existing SNS-based approaches, and its characterization performance approaches that of a Nyquist sampling-based framework as SNR increases. Ease of design and integration, along with a single unified deep learning framework, makes the proposed architecture a good candidate for reconfigurable platforms.




deep learning

Biologic and Prognostic Feature Scores from Whole-Slide Histology Images Using Deep Learning. (arXiv:1910.09100v4 [q-bio.QM] UPDATED)

Histopathology is a reflection of the molecular changes and provides prognostic phenotypes representing the disease progression. In this study, we introduced feature scores generated from hematoxylin and eosin histology images based on deep learning (DL) models developed for prostate pathology. We demonstrated that these feature scores were significantly prognostic for time to event endpoints (biochemical recurrence and cancer-specific survival) and had simultaneously molecular biologic associations to relevant genomic alterations and molecular subtypes using already trained DL models that were not previously exposed to the datasets of the current study. Further, we discussed the potential of such feature scores to improve the current tumor grading system and the challenges that are associated with tumor heterogeneity and the development of prognostic models from histology images. Our findings uncover the potential of feature scores from histology images as digital biomarkers in precision medicine and as an expanding utility for digital pathology.




deep learning

Deep Learning based Person Re-identification. (arXiv:2005.03293v1 [cs.CV])

Automated person re-identification in a multi-camera surveillance setup is very important for effective tracking and monitoring of crowd movement. In recent years, a few deep learning-based re-identification approaches have been developed which are quite accurate but time-intensive, and hence not very suitable for practical purposes. In this paper, we propose an efficient hierarchical re-identification approach in which color histogram-based comparison is first employed to find the closest matches in the gallery set, and deep feature-based comparison is then carried out using a Siamese network. Reducing the search space after the first level of matching helps achieve a fast response time and improves the accuracy of the Siamese network's predictions by eliminating vastly dissimilar elements. A silhouette part-based feature extraction scheme is adopted at each level of the hierarchy to preserve the relative locations of the different body structures and make the appearance descriptors more discriminating. The proposed approach has been evaluated on five public data sets and on a new data set captured by our team in our laboratory. Results reveal that it outperforms most state-of-the-art approaches in terms of overall accuracy.
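The hierarchy the paper describes, a cheap color-histogram shortlist followed by an expensive deep comparison, can be sketched as below; the histogram settings and shortlist size are assumptions, and the Siamese re-ranking stage is indicated only as a comment.

```python
import numpy as np

def color_hist(img, bins=16):
    """Per-channel color histogram descriptor for the fast first stage."""
    return np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 255), density=True)[0]
        for c in range(3)])

def shortlist(query, gallery, k=50):
    """Stage 1: keep the k gallery images with the closest histograms."""
    q = color_hist(query)
    dists = [np.linalg.norm(q - color_hist(g)) for g in gallery]
    return np.argsort(dists)[:k]

# Stage 2 (not shown): only the k survivors are re-ranked by the Siamese
# network on part-based deep features, which keeps response time low.
gallery = [np.random.randint(0, 256, (128, 64, 3)) for _ in range(500)]
query = np.random.randint(0, 256, (128, 64, 3))
candidates = shortlist(query, gallery)
```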