deep learning

Multi-Target Deep Learning for Algal Detection and Classification. (arXiv:2005.03232v1 [cs.CV])

Water quality has a direct impact on industry, agriculture, and public health. Algae species are common indicators of water quality because algal communities are sensitive to changes in their habitats, providing valuable information about variations in water quality. However, water quality analysis requires professional inspection for algal detection and classification under microscopes, which is very time-consuming and tedious. In this paper, we propose a novel multi-target deep learning framework for algal detection and classification. Extensive experiments were carried out on a large-scale colored microscopic algal dataset. Experimental results demonstrate that the proposed method achieves promising performance on algal detection, class identification and genus identification.




deep learning

Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines. (arXiv:2005.03106v1 [cs.CV])

Smart meters enable remote and automatic electricity, water and gas consumption readings and are being widely deployed in developed countries. Nonetheless, a huge number of non-smart meters are still in operation. Image-based Automatic Meter Reading (AMR) focuses on reading this type of meter. We estimate that the Energy Company of Paraná (Copel), in Brazil, performs more than 850,000 readings of dial meters per month. Those meters are the focus of this work. Our main contributions are: (i) a public real-world dial meter dataset (shared upon request) called UFPR-ADMR; (ii) a deep learning-based recognition baseline on the proposed dataset; and (iii) a detailed error analysis of the main issues present in AMR for dial meters. To the best of our knowledge, this is the first work to introduce deep learning approaches to multi-dial meter reading and to perform experiments on unconstrained images. We achieved a 100.0% F1-score on the dial detection stage with both Faster R-CNN and YOLO, while the recognition rates reached 93.6% for dials and 75.25% for meters using Faster R-CNN (ResNext-101).
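
The abstract above describes a two-stage pipeline: detect each dial, then read it. The sketch below illustrates that detect-then-classify flow in PyTorch/torchvision under stated assumptions: the "meter.jpg" input path, the 0.5 confidence threshold, the 10-position dial classifier, and the off-the-shelf Faster R-CNN are placeholders, not the authors' trained UFPR-ADMR models.

```python
# Minimal detect-then-read sketch for dial meters. Assumptions: 'meter.jpg' input path,
# a 0.5 confidence threshold, and a 10-position dial classifier; the detector here is a
# generic pretrained Faster R-CNN, not the authors' UFPR-ADMR-trained models.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor, crop
from PIL import Image

# Stage 1: dial detection (would be fine-tuned on dial annotations in practice).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
detector.eval()

# Stage 2: per-dial recognition; untrained here, shown only for the pipeline structure.
dial_classifier = torchvision.models.resnet18(num_classes=10)
dial_classifier.eval()

image = Image.open("meter.jpg").convert("RGB")
x = to_tensor(image)

with torch.no_grad():
    detections = detector([x])[0]                      # dict with boxes, labels, scores
    reading = []
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < 0.5:                                # drop low-confidence detections
            continue
        x1, y1, x2, y2 = box.int().tolist()
        dial = crop(image, y1, x1, y2 - y1, x2 - x1).resize((224, 224))
        logits = dial_classifier(to_tensor(dial).unsqueeze(0))
        reading.append(int(logits.argmax(dim=1)))      # pointer position 0-9
    print("predicted dial readings (unordered):", reading)
```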




deep learning

CovidCTNet: An Open-Source Deep Learning Approach to Identify Covid-19 Using CT Image. (arXiv:2005.03059v1 [eess.IV])

Coronavirus disease 2019 (Covid-19) is highly contagious with limited treatment options. Early and accurate diagnosis of Covid-19 is crucial in reducing the spread of the disease and its associated mortality. Currently, detection by reverse transcriptase polymerase chain reaction (RT-PCR) is the gold standard for outpatient and inpatient detection of Covid-19. RT-PCR is a rapid method; however, its detection accuracy is only ~70-75%. Another approved strategy is computed tomography (CT) imaging. CT imaging has a much higher sensitivity of ~80-98%, but a similar accuracy of 70%. To enhance the accuracy of CT imaging detection, we developed an open-source set of algorithms called CovidCTNet that successfully differentiates Covid-19 from community-acquired pneumonia (CAP) and other lung diseases. CovidCTNet increases the accuracy of CT imaging detection to 90%, compared to 70% for radiologists. The model is designed to work with heterogeneous and small sample sizes, independent of the CT imaging hardware. In order to facilitate the detection of Covid-19 globally and assist radiologists and physicians in the screening process, we are releasing all algorithms and parametric details in an open-source format. Open-source sharing of CovidCTNet enables developers to rapidly improve and optimize services while preserving user privacy and data ownership.




deep learning

IBM Machine Vision Technology Advances Early Detection of Diabetic Eye Disease Using Deep Learning

IBM Research achieved the highest recorded accuracy of 86 percent by using deep learning and pathology insights to identify the severity of diabetic retinopathy.




deep learning

Projection-space implementation of deep learning-guided low-dose brain PET imaging improves performance over implementation in image-space

Purpose: To assess the performance of full-dose (FD) positron emission tomography (PET) image synthesis in both image and projection space from low-dose (LD) PET images/sinograms without sacrificing diagnostic quality using deep learning techniques. Methods: Clinical brain PET/CT studies of 140 patients were retrospectively employed for LD to FD PET conversion. 5% of the events were randomly selected from the FD list-mode PET data to simulate a realistic LD acquisition. A modified 3D U-Net model was implemented to predict FD sinograms in projection space (PSS) and FD images in image space (PIS) from their corresponding LD sinograms/images, respectively. The quality of the predicted PET images was assessed by two nuclear medicine specialists using a five-point grading scheme. Quantitative analysis using established metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), region-wise standardized uptake value (SUV) bias, as well as first-, second- and high-order texture radiomic features in 83 brain regions, was also performed for the test and evaluation dataset. Results: All PSS images were scored 4 or higher (good to excellent) by the nuclear medicine specialists. SSIM values of 0.96 ± 0.03 and 0.97 ± 0.02 and PSNR values of 31.70 ± 0.75 and 37.30 ± 0.71 were obtained for PIS and PSS, respectively. The average SUV bias calculated over all brain regions was 0.24 ± 0.96% for PSS and 1.05 ± 1.44% for PIS. The Bland-Altman plots reported the lowest SUV bias (0.02) and variance (95% CI: -0.92, +0.84) for PSS compared with the reference FD images. The relative error of the homogeneity radiomic feature, belonging to the Grey Level Co-occurrence Matrix category, was -1.07 ± 1.77 for PIS and 0.28 ± 1.4 for PSS. Conclusion: The qualitative assessment and quantitative analysis demonstrated that FD PET prediction in projection space led to superior performance, resulting in higher image quality and lower SUV bias and variance compared to FD PET prediction in the image domain.
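
As a small illustration of the quantitative metrics quoted above (PSNR, SSIM, region-wise SUV bias), the following sketch computes them with scikit-image and NumPy on hypothetical predicted/reference full-dose volumes; the array shapes, data range, and region mask are assumptions.

```python
# Sketch of the image-quality metrics quoted in the abstract (PSNR, SSIM, SUV bias),
# computed with scikit-image / NumPy on hypothetical predicted vs. reference FD volumes.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical volumes: reference full-dose PET and a network prediction (PIS or PSS output).
reference_fd = np.random.rand(64, 128, 128).astype(np.float32)
predicted_fd = (reference_fd + 0.01 * np.random.randn(64, 128, 128)).astype(np.float32)

data_range = float(reference_fd.max() - reference_fd.min())   # assumed intensity range
psnr = peak_signal_noise_ratio(reference_fd, predicted_fd, data_range=data_range)
ssim = structural_similarity(reference_fd, predicted_fd, data_range=data_range)

# Region-wise SUV bias (%) for one hypothetical brain region (in practice the boolean
# mask would come from an atlas of the 83 regions mentioned above).
region = np.zeros_like(reference_fd, dtype=bool)
region[10:20, 40:80, 40:80] = True
suv_bias = 100.0 * (predicted_fd[region].mean() - reference_fd[region].mean()) / reference_fd[region].mean()

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}, SUV bias: {suv_bias:.2f}%")
```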




deep learning

Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies




deep learning

GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing

We present GluonCV and GluonNLP, the deep learning toolkits for computer vision and natural language processing based on Apache MXNet (incubating). These toolkits provide state-of-the-art pre-trained models, training scripts, and training logs, to facilitate rapid prototyping and promote reproducible research. We also provide modular APIs with flexible building blocks to enable efficient customization. Leveraging the MXNet ecosystem, the deep learning models in GluonCV and GluonNLP can be deployed onto a variety of platforms with different programming languages. The Apache 2.0 license has been adopted by GluonCV and GluonNLP to allow for software distribution, modification, and usage.
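
A brief, hedged usage example of the model-zoo workflow the abstract describes, following the standard GluonCV/GluonNLP demo pattern; the image filename is a placeholder and the chosen models are arbitrary examples.

```python
# Sketch of the pre-trained model-zoo workflow described above, following the standard
# GluonCV / GluonNLP demo pattern ('street.jpg' is a placeholder image file).
from gluoncv import model_zoo, data
import gluonnlp as nlp

# GluonCV: load a pre-trained object detector and run it on one test image.
detector = model_zoo.get_model('yolo3_darknet53_coco', pretrained=True)
x, img = data.transforms.presets.yolo.load_test('street.jpg', short=512)
class_ids, scores, bboxes = detector(x)

# GluonNLP: load pre-trained word embeddings and look up a vector.
glove = nlp.embedding.create('glove', source='glove.6B.50d')
vector = glove['language']

print(bboxes.shape, vector.shape)
```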




deep learning

Generating Thermal Image Data Samples using 3D Facial Modelling Techniques and Deep Learning Methodologies. (arXiv:2005.01923v2 [cs.CV] UPDATED)

Methods for generating synthetic data have become increasingly important for building the large datasets required by Convolutional Neural Network (CNN) based deep learning techniques across a wide range of computer vision applications. In this work, we extend existing methodologies to show how 2D thermal facial data can be mapped to 3D facial models. For the proposed research work we used the Tufts dataset to generate 3D faces with varying poses from a single frontal face pose. The system first refines image quality through fusion-based image preprocessing operations. The refined outputs have better contrast, a lower noise level and better exposure of dark regions, making the facial landmarks and temperature patterns on the human face more discernible and visible compared to the original raw data. Different image quality metrics are used to compare the refined images with the original images. In the next phase of the proposed study, the refined images are used to create 3D facial geometry structures using Convolutional Neural Networks (CNN). The generated outputs are then imported into the Blender software to extract the final 3D thermal facial outputs for both males and females. The same technique is also applied to our thermal face data, acquired with a prototype thermal camera (developed under the Heliaus EU project) in an indoor lab environment, to generate synthetic 3D face data with varying yaw angles; finally, a facial depth map is generated.
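
The abstract mentions fusion-based preprocessing that improves contrast, reduces noise, and lifts dark regions. The snippet below is only an illustrative stand-in for that kind of refinement (not the authors' fusion pipeline), using OpenCV's CLAHE and non-local means denoising; the file names are placeholders.

```python
# Illustrative preprocessing sketch (not the authors' fusion pipeline): contrast
# enhancement and denoising of a thermal face image with OpenCV. File names are placeholders.
import cv2

thermal = cv2.imread("thermal_face.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Contrast-limited adaptive histogram equalization lifts detail in dark regions.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(thermal)

# Light denoising so facial landmarks and temperature patterns stay discernible.
refined = cv2.fastNlMeansDenoising(enhanced, h=10)

cv2.imwrite("thermal_face_refined.png", refined)
```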




deep learning

Deep Learning on Point Clouds for False Positive Reduction at Nodule Detection in Chest CT Scans. (arXiv:2005.03654v1 [eess.IV])

The paper focuses on a novel approach for false-positive reduction (FPR) of nodule candidates in a computer-aided detection (CADe) system after the suspicious-lesion proposal stage. Unlike common approaches in medical image analysis, the proposed method treats the input data not as a 2D or 3D image but as a point cloud, and uses deep learning models designed for point clouds. We found that point-cloud models require less memory and are faster in both training and inference than traditional 3D CNNs, achieve better performance, and impose no restrictions on the size of the input image and hence of the nodule candidate. We propose an algorithm for transforming 3D CT scan data into a point cloud. In some cases, the volume of the nodule candidate can be much smaller than the surrounding context, for example in the case of subpleural localization of the nodule. Therefore, we developed an algorithm for sampling points from a point cloud constructed from a 3D image of the candidate region. The algorithm guarantees that both context and candidate information are captured as part of the point cloud of the nodule candidate. A dataset for the FPR task was carefully constructed from the open LIDC-IDRI database, and the experimental setup is described in detail. A data augmentation technique was applied both to avoid overfitting and as an upsampling method. Experiments are conducted with PointNet, PointNet++ and DGCNN. We show that the proposed approach outperforms baseline 3D CNN models, achieving 85.98 FROC versus 77.26 FROC for the baseline models.
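
A minimal sketch of the volume-to-point-cloud idea described above: keep voxels above an intensity threshold, attach their intensity as a feature, and sample a fixed number of points. The threshold, point count, and uniform sampling are assumptions; the paper's sampling algorithm additionally guarantees that both context and candidate are captured, which this sketch does not.

```python
# Minimal sketch of turning a 3D nodule-candidate patch into a fixed-size point cloud,
# in the spirit of the approach above (threshold and sampling values are assumptions).
import numpy as np

def volume_to_point_cloud(volume, intensity_threshold=-400.0, n_points=1024, seed=0):
    """Convert a CT patch (HU values) into n_points (z, y, x, intensity) points."""
    zyx = np.argwhere(volume > intensity_threshold)            # keep dense (non-air) voxels
    intensities = volume[zyx[:, 0], zyx[:, 1], zyx[:, 2]]
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(zyx), size=n_points, replace=len(zyx) < n_points)
    points = np.hstack([zyx[idx].astype(np.float32), intensities[idx, None]])
    points[:, :3] -= points[:, :3].mean(axis=0)                # center the cloud
    return points                                              # shape: (n_points, 4)

patch = np.random.randint(-1000, 400, size=(48, 48, 48)).astype(np.float32)  # fake CT patch
cloud = volume_to_point_cloud(patch)
print(cloud.shape)
```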




deep learning

Transfer Learning for sEMG-based Hand Gesture Classification using Deep Learning in a Master-Slave Architecture. (arXiv:2005.03460v1 [eess.SP])

Recent advancements in diagnostic learning and the development of gesture-based human-machine interfaces have made surface electromyography (sEMG) increasingly important. Analysis of hand gestures requires an accurate assessment of sEMG signals. The proposed work presents a novel sequential master-slave architecture consisting of deep neural networks (DNNs) for classification of signs from the Indian sign language using signals recorded from multiple sEMG channels. The performance of the master-slave network is augmented by leveraging additional synthetic feature data generated by long short-term memory (LSTM) networks. The performance of the proposed network is compared to that of a conventional DNN before and after the addition of synthetic data. Adding synthetic data yields up to 14% improvement for the conventional DNN and up to 9% for the master-slave network, with an average accuracy of 93.5%, supporting the suitability of the proposed approach.
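
One possible reading of a "sequential master-slave" arrangement, shown purely for illustration: a master DNN picks a coarse gesture group and a per-group slave DNN refines the prediction. The group/class counts, layer sizes, and feature dimensions are assumptions and do not reflect the paper's architecture or its LSTM-based synthetic-feature generation.

```python
# Illustrative sequential master-slave classifier (assumed layout, not the paper's model).
import torch
import torch.nn as nn

N_FEATURES, N_GROUPS, N_SIGNS_PER_GROUP = 8 * 64, 5, 10   # hypothetical sEMG feature size and label layout

master = nn.Sequential(nn.Linear(N_FEATURES, 128), nn.ReLU(), nn.Linear(128, N_GROUPS))
slaves = nn.ModuleList(
    nn.Sequential(nn.Linear(N_FEATURES, 128), nn.ReLU(), nn.Linear(128, N_SIGNS_PER_GROUP))
    for _ in range(N_GROUPS)
)

features = torch.randn(1, N_FEATURES)                  # flattened sEMG features for one window
group = master(features).argmax(dim=1).item()          # master picks the coarse group
sign = slaves[group](features).argmax(dim=1).item()    # the matching slave refines within it
print(group, sign)
```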




deep learning

Deep learning of physical laws from scarce data. (arXiv:2005.03448v1 [cs.LG])

Harnessing data to discover the underlying governing laws or equations that describe the behavior of complex physical systems can significantly advance our modeling, simulation and understanding of such systems in various science and engineering disciplines. Recent advances in sparse identification show encouraging success in distilling closed-form governing equations from data for a wide range of nonlinear dynamical systems. However, the fundamental bottleneck of this approach lies in the robustness and scalability with respect to data scarcity and noise. This work introduces a novel physics-informed deep learning framework to discover governing partial differential equations (PDEs) from scarce and noisy data for nonlinear spatiotemporal systems. In particular, this approach seamlessly integrates the strengths of deep neural networks for rich representation learning, automatic differentiation and sparse regression to approximate the solution of system variables, compute essential derivatives, as well as identify the key derivative terms and parameters that form the structure and explicit expression of the PDEs. The efficacy and robustness of this method are demonstrated on discovering a variety of PDE systems with different levels of data scarcity and noise. The resulting computational framework shows the potential for closed-form model discovery in practical applications where large and accurate datasets are intractable to capture.
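
A hedged sketch of the core recipe the abstract outlines: represent the solution with a neural network, obtain derivatives by automatic differentiation, and identify a sparse combination of candidate terms that explains u_t. The candidate library, threshold, and toy data are assumptions, and the network training loop is omitted for brevity.

```python
# Hedged sketch: network surrogate u(x, t), autograd derivatives, then sparse regression
# of u_t onto a candidate library (toy data; the PDE-discovery training loop is omitted).
import torch
import numpy as np

net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

xt = torch.rand(200, 2, requires_grad=True)            # collocation points (x, t)
u = net(xt)
grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
u_x, u_t = grads[:, :1], grads[:, 1:]
u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]

# Candidate library Theta = [1, u, u^2, u_x, u*u_x, u_xx]; the PDE is assumed to be a
# sparse combination of these terms.
theta = torch.cat([torch.ones_like(u), u, u**2, u_x, u * u_x, u_xx], dim=1).detach().numpy()
target = u_t.detach().numpy()

# Sequentially thresholded least squares: prune small coefficients, refit the rest.
xi = np.linalg.lstsq(theta, target, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.05                          # sparsity threshold (assumption)
    xi[small] = 0.0
    big = ~small.ravel()
    if big.any():
        xi[big] = np.linalg.lstsq(theta[:, big], target, rcond=None)[0]
print("identified coefficients:", xi.ravel())
```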




deep learning

Deep Learning Framework for Detecting Ground Deformation in the Built Environment using Satellite InSAR data. (arXiv:2005.03221v1 [cs.CV])

The large volumes of Sentinel-1 data produced over Europe are being used to develop pan-national ground motion services. However, simple analysis techniques like thresholding cannot reliably detect and classify complex deformation signals, making it challenging to provide usable information to a broad range of non-expert stakeholders. Here we explore the applicability of deep learning approaches by adapting a pre-trained convolutional neural network (CNN) to detect deformation in a national-scale velocity field. For our proof of concept, we focus on the UK, where previously identified deformation is associated with coal mining, groundwater withdrawal, landslides and tunnelling. The sparsity of measurement points and the presence of spike noise make this a challenging application for deep learning networks, which involve calculations of the spatial convolution between images. Moreover, insufficient ground truth data exist to construct a balanced training dataset, and the deformation signals are slower and more localised than in previous applications. We propose three enhancement methods to tackle these problems: i) spatial interpolation with modified matrix completion, ii) a synthetic training dataset based on the characteristics of the real UK velocity map, and iii) enhanced over-wrapping techniques. Using velocity maps spanning 2015-2019, our framework detects several areas of coal mining subsidence, uplift due to dewatering, slate quarries, landslides and tunnel engineering works. The results demonstrate the potential applicability of the proposed framework to the development of automated ground motion analysis systems.
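
For the first enhancement method (spatial interpolation via matrix completion), the following is a generic soft-impute-style sketch for filling a sparsely observed velocity grid by iterative truncated SVD; the rank, iteration count, and synthetic data are assumptions, and this is not the authors' modified matrix-completion algorithm.

```python
# Generic sketch of low-rank matrix completion for interpolating a sparse velocity grid
# (a soft-impute-style loop; not the authors' modified matrix-completion method).
import numpy as np

def complete_matrix(grid, mask, rank=10, n_iters=100):
    """Fill missing entries (mask == False) of a 2D velocity grid by iterative truncated SVD."""
    filled = np.where(mask, grid, 0.0)
    for _ in range(n_iters):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]        # truncated reconstruction
        filled = np.where(mask, grid, low_rank)                # keep observed values fixed
    return filled

velocity = np.random.randn(100, 100)                # hypothetical LOS velocity map (mm/yr)
observed = np.random.rand(100, 100) < 0.2           # sparse measurement points (~20% coverage)
interpolated = complete_matrix(velocity, observed)
print(interpolated.shape)
```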




deep learning

Deep learning in medical image analysis: challenges and applications

9783030331283 (electronic bk.)




deep learning

On deep learning as a remedy for the curse of dimensionality in nonparametric regression

Benedikt Bauer, Michael Kohler.

Source: The Annals of Statistics, Volume 47, Number 4, 2261–2285.

Abstract:
Assuming that a smoothness condition and a suitable restriction on the structure of the regression function hold, it is shown that least squares estimates based on multilayer feedforward neural networks are able to circumvent the curse of dimensionality in nonparametric regression. The proof is based on new approximation results concerning multilayer feedforward neural networks with bounded weights and a bounded number of hidden neurons. The estimates are compared with various other approaches by using simulated data.
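
To make the dimensionality argument concrete, a simplified LaTeX sketch of the rates involved is given below; the notation (effective dimension d*, log exponent kappa) is shorthand for the paper's precise structural assumptions and constants.

```latex
% Simplified sketch (constants, log factors and exact assumptions abbreviated).
% Classical minimax rate for (p,C)-smooth regression functions m on R^d:
%   E \int (m_n(x) - m(x))^2 P_X(dx)  ~  n^{-2p/(2p+d)},
% which deteriorates rapidly as the ambient dimension d grows (curse of dimensionality).
% Under the structural restriction on m, the neural-network least squares estimate m_n
% attains, up to a polylogarithmic factor, a rate driven by an effective dimension d* << d:
\[
  \mathbf{E} \int \bigl( m_n(x) - m(x) \bigr)^2 \, \mathbf{P}_X(dx)
  \;\le\; c \, (\log n)^{\kappa} \, n^{-\frac{2p}{2p + d^{*}}} ,
\]
% so the exponent depends on d*, not on the ambient dimension d.
```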




deep learning

Earth to AI: Three Startups Using Deep Learning for Environmental Monitoring

Sometimes it takes an elevated view to appreciate the big picture. NASA’s iconic “Blue Marble,” taken in 1972, helped inspire the modern environmental movement by capturing the finite and fragile nature of Earth for the first time. Today, aerial imagery from satellites and drones powers a range of efforts to monitor and protect our planet.





deep learning

NVIDIA Deep Learning Institute Instructor-Led Training Now Available Remotely

Starting this month, NVIDIA’s Deep Learning Institute is offering instructor-led workshops that are delivered remotely via a virtual classroom. DLI provides hands-on training in AI, accelerated computing and accelerated data science to help developers, data scientists and other professionals solve their most challenging problems. These in-depth classes are taught by experts in their respective fields.





deep learning

Deep learning in healthcare: paradigms and applications / Yen-Wei Chen, Lakhmi C. Jain, editors

Online Resource




deep learning

Deep learning aided rational design of oxide glasses

Mater. Horiz., 2020, Advance Article
DOI: 10.1039/D0MH00162G, Communication
R. Ravinder, Karthikeya H. Sridhara, Suresh Bishnoi, Hargun Singh Grover, Mathieu Bauchy, Jayadeva, Hariprasad Kodamana, N. M. Anoop Krishnan
Designing new glasses requires a priori knowledge of how the composition of a glass dictates its properties such as stiffness, density, or processability. Developing multi-property design charts, namely, glass selection charts, using deep learning can enable discovery of novel glasses with targeted properties.
To cite this article before page numbers are assigned, use the DOI form of citation above.
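
As a hedged illustration of the composition-to-property mapping that underlies such glass selection charts, the sketch below defines a small network from oxide mole fractions to a single property (e.g. Young's modulus); the oxide list, layer sizes, and example composition are assumptions, not the models trained in the paper.

```python
# Hedged sketch of a composition-to-property model for oxide glasses (assumed inputs,
# sizes, and units; shown untrained, for structure only).
import torch
import torch.nn as nn

oxides = ["SiO2", "Al2O3", "B2O3", "Na2O", "CaO", "MgO"]     # hypothetical input components

model = nn.Sequential(
    nn.Linear(len(oxides), 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),                                        # predicted property (e.g. GPa)
)

composition = torch.tensor([[0.70, 0.05, 0.05, 0.12, 0.05, 0.03]])  # mole fractions, sum to 1
predicted_modulus = model(composition)                       # untrained here; structure only
print(predicted_modulus.shape)
```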




deep learning

Deep Learning in the Browser / by Xavier Bourry, Kai Sasaki, Christoph Körner, Reiichiro Nakano

Online Resource




deep learning

Deep learning in a disorienting world / Jon F. Wergin

Dewey Library - BF318.W47 2020




deep learning

Next-Generation Machine Learning with Spark: Covers XGBoost, LightGBM, Spark NLP, Distributed Deep Learning with Keras, and More / Butch Quinto

Online Resource




deep learning

A high-throughput system combining microfluidic hydrogel droplets with deep learning for screening the antisolvent-crystallization conditions of active pharmaceutical ingredients

Lab Chip, 2020, Accepted Manuscript
DOI: 10.1039/D0LC00153H, Paper
Zhening Su, Jinxu He, Peipei Zhou, Lu Huang, Jianhua Zhou
Crystallization of active pharmaceutical ingredients (APIs) is a crucial process in the pharmaceutical industry due to its great impact on drug efficacy. However, conventional approaches for screening the optimal crystallization...




deep learning

[ASAP] Combining Docking Pose Rank and Structure with Deep Learning Improves Protein–Ligand Binding Mode Prediction over a Baseline Docking Approach

Journal of Chemical Information and Modeling
DOI: 10.1021/acs.jcim.9b00927




deep learning

[ASAP] Evaluating Scalable Uncertainty Estimation Methods for Deep Learning-Based Molecular Property Prediction

Journal of Chemical Information and Modeling
DOI: 10.1021/acs.jcim.9b00975




deep learning

A deep learning approach to identify association of disease–gene using information of disease symptoms and protein sequences

Anal. Methods, 2020, 12, 2016-2026
DOI: 10.1039/C9AY02333J, Paper
Xingyu Chen, Qixing Huang, Yang Wang, Jinlong Li, Haiyan Liu, Yun Xie, Zong Dai, Xiaoyong Zou, Zhanchao Li
Prediction of disease–gene association based on a deep convolutional neural network.




deep learning

Handbook of research on machine and deep learning applications for cyber security / [edited by] Padmavathi Ganapathi and D. Shanmugapriya





deep learning

Deep learning in medical image analysis: challenges and applications / Gobert Lee, Hiroshi Fujita, editors

Online Resource