neural network

Capturing and Explaining Trajectory Singularities using Composite Signal Neural Networks. (arXiv:2003.10810v2 [cs.LG] UPDATED)

Spatial trajectories are ubiquitous and complex signals. Their analysis is crucial in many research fields, from urban planning to neuroscience. Several approaches have been proposed to cluster trajectories. They rely either on hand-crafted features, which struggle to capture the spatio-temporal complexity of the signal, or on Artificial Neural Networks (ANNs), which can be more efficient but are less interpretable. In this paper, we present a novel ANN architecture designed to capture the spatio-temporal patterns characteristic of a set of trajectories, while taking into account the demographics of the navigators. Our model therefore extracts markers linked to both behaviour and demographics. We propose a composite signal analyser (CompSNN) combining three simple ANN modules. Each of these modules uses a different signal representation of the trajectory while remaining interpretable. Our CompSNN performs significantly better than its modules taken in isolation and makes it possible to visualise which parts of the signal were most useful to discriminate the trajectories.
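The abstract does not spell out the three modules, so the following is only a minimal PyTorch sketch of the composite idea: three small modules, each consuming a different representation of the same trajectory (raw coordinates, speed profile, and Fourier spectrum are illustrative assumptions, not the paper's choices), with their features concatenated for classification.

```python
# Minimal sketch (not the authors' code) of a composite network combining
# three simple modules, each over a different representation of the signal.
import torch
import torch.nn as nn

class CompSNNSketch(nn.Module):
    def __init__(self, traj_len=128, n_classes=4):
        super().__init__()
        # Module 1: 1D convolution over the raw (x, y) time series.
        self.raw = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        # Module 2: MLP over the speed profile (a temporal derivative).
        self.speed = nn.Sequential(nn.Linear(traj_len - 1, 32), nn.ReLU())
        # Module 3: MLP over the Fourier amplitude spectrum.
        self.spectrum = nn.Sequential(nn.Linear(traj_len // 2 + 1, 32), nn.ReLU())
        # Fused classifier over the concatenated module features.
        self.head = nn.Linear(16 + 32 + 32, n_classes)

    def forward(self, xy):                       # xy: (batch, 2, traj_len)
        speed = (xy[:, :, 1:] - xy[:, :, :-1]).norm(dim=1)
        spec = torch.fft.rfft(xy.norm(dim=1)).abs()
        feats = torch.cat(
            [self.raw(xy), self.speed(speed), self.spectrum(spec)], dim=1)
        return self.head(feats)

model = CompSNNSketch()
scores = model(torch.randn(8, 2, 128))           # 8 trajectories -> (8, 4)
```

Inspecting the per-module features before the fused head is what allows attributing a decision to a particular representation of the signal.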




neural network

A priori generalization error for two-layer ReLU neural network through minimum norm solution. (arXiv:1912.03011v3 [cs.LG] UPDATED)

We focus on estimating the a priori generalization error of two-layer ReLU neural networks (NNs) trained by mean squared error, which depends only on the initial parameters and the target function, through the following research line. We first estimate the a priori generalization error of a finite-width two-layer ReLU NN under a minimum norm constraint; Zhang et al. (2019) proved this to be an equivalent solution of a finite-width two-layer NN linearized with respect to its parameters. As the width goes to infinity, the linearized NN converges to the NN in the Neural Tangent Kernel (NTK) regime (Jacot et al., 2018). Thus, we can derive the a priori generalization error of a two-layer ReLU NN in the NTK regime. The distance between the NN in the NTK regime and a finite-width NN trained by gradient descent was estimated by Arora et al. (2019). Based on those results, our work proves an a priori generalization error bound for two-layer ReLU NNs. This estimate uses the intrinsic implicit bias of the minimum norm solution without requiring extra regularity of the loss function. It also implies that the NN does not suffer from the curse of dimensionality, and that a small generalization error can be achieved without an exponentially large number of neurons. In addition, the research line proposed in this paper can also be used to study other properties of the finite-width network, such as the posterior generalization error.
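To make the central object concrete, here is a minimal numpy sketch of a minimum norm solution for a linearized two-layer ReLU network. For simplicity it linearizes only with respect to the output-layer weights (an assumption for illustration), so the feature map is the frozen ReLU features at initialization and the minimum norm interpolant is given by the Moore-Penrose pseudoinverse.

```python
# Minimum norm solution of a linearized two-layer ReLU network: among all
# least-squares solutions, the pseudoinverse picks the one of smallest norm.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 50, 5, 2000                    # samples, input dim, width
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0])                      # toy target function

# f(x) = sum_k a_k * relu(w_k . x), with w_k frozen at initialization,
# so the feature map is phi(x)_k = relu(w_k . x).
W = rng.normal(size=(m, d)) / np.sqrt(d)
Phi = np.maximum(X @ W.T, 0.0)           # (n, m) ReLU features

# Minimum norm interpolant: a = Phi^+ y.
a = np.linalg.pinv(Phi) @ y
print("train MSE:", np.mean((Phi @ a - y) ** 2))
print("||a||_2  :", np.linalg.norm(a))
```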




neural network

Differentiable Sparsification for Deep Neural Networks. (arXiv:1910.03201v2 [cs.LG] UPDATED)

Deep neural networks have relieved human experts of the burden of feature engineering, but comparable effort is instead required to determine an effective architecture. On the other hand, as networks have grown ever larger, considerable resources are also invested in reducing their size. Both problems can be addressed by sparsification of an over-complete model, which removes redundant parameters or connections either by pruning them away after training or by encouraging them to become zero during training. In general, however, these approaches are not fully differentiable and interrupt end-to-end training with stochastic gradient descent, because they require either a parameter-selection or a soft-thresholding step. In this paper, we propose a fully differentiable sparsification method for deep neural networks which allows parameters to be exactly zero during training, and which can therefore learn the sparsified structure and the weights of a network simultaneously using stochastic gradient descent. We apply the proposed method to various popular models to demonstrate its effectiveness.
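The abstract does not give the exact parameterization, but the following hedged PyTorch sketch illustrates the kind of construction it describes: each weight is reparameterized through a soft-threshold so that it can be exactly zero during training while the mapping remains (sub)differentiable end-to-end. The per-unit sigmoid threshold is an illustrative choice, not the paper's formulation.

```python
# Sketch of differentiable sparsification via a soft-threshold
# reparameterization that yields exact zeros during training.
import torch
import torch.nn as nn

class SparseLinear(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
        # One learnable threshold per output unit; sigmoid keeps it bounded.
        self.t = nn.Parameter(torch.zeros(n_out, 1))

    def effective_weight(self):
        thr = torch.sigmoid(self.t) * 0.1          # learned threshold
        # relu(|w| - thr) is exactly 0 whenever |w| <= thr, yet the whole
        # expression is differentiable almost everywhere, so SGD can learn
        # structure and weights simultaneously.
        return torch.sign(self.w) * torch.relu(self.w.abs() - thr)

    def forward(self, x):
        return x @ self.effective_weight().T

layer = SparseLinear(64, 32)
sparsity = (layer.effective_weight() == 0).float().mean()
print(f"fraction of exactly-zero weights: {sparsity:.2f}")
```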




neural network

FNNC: Achieving Fairness through Neural Networks. (arXiv:1811.00247v3 [cs.LG] UPDATED)

In classification models, fairness can be ensured by solving a constrained optimization problem. We focus on fairness constraints such as Disparate Impact, Demographic Parity, and Equalized Odds, which are non-decomposable and non-convex. Researchers typically define convex surrogates of the constraints and then apply convex optimization frameworks to obtain fair classifiers. However, surrogates serve only as upper bounds to the actual constraints, and convexifying fairness constraints can be challenging.

We propose a neural network-based framework, FNNC, to achieve fairness while maintaining high classification accuracy. The above fairness constraints are included in the loss using Lagrange multipliers. We prove bounds on the generalization error of the constrained losses which asymptotically go to zero. The network is optimized using two-step mini-batch stochastic gradient descent. Our experiments show that FNNC performs as well as the state of the art, if not better. The experimental evidence supplements our theoretical guarantees. In summary, we have an automated solution for achieving fairness in classification which is easily extendable to many fairness constraints.
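A minimal sketch of the Lagrangian construction described above, under stated assumptions (binary task, one binary protected attribute, demographic parity measured as the mini-batch gap in mean positive-prediction rates, all of which are illustrative); the two-step update descends on the network weights and ascends on the multiplier.

```python
# Sketch: fairness constraint folded into the loss with a Lagrange
# multiplier, trained with alternating descent/ascent mini-batch steps.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
lam = torch.zeros((), requires_grad=True)           # Lagrange multiplier
opt_w = torch.optim.SGD(net.parameters(), lr=1e-2)
opt_l = torch.optim.SGD([lam], lr=1e-2)
bce = nn.BCEWithLogitsLoss()
eps = 0.05                                          # allowed DP violation

for _ in range(100):
    x = torch.randn(128, 10)
    y = (x[:, 0] > 0).float().unsqueeze(1)          # toy labels
    s = (x[:, 1] > 0).float()                       # toy protected group
    logits = net(x)
    p = torch.sigmoid(logits).squeeze(1)
    # Demographic parity gap: difference in mean positive-prediction rates.
    gap = (p[s == 1].mean() - p[s == 0].mean()).abs()
    loss = bce(logits, y) + lam * (gap - eps)

    opt_w.zero_grad(); opt_l.zero_grad()
    loss.backward()
    opt_w.step()                                    # descent on weights
    lam.grad.neg_()                                 # flip sign: ascent on lam
    opt_l.step()
    with torch.no_grad():
        lam.clamp_(min=0.0)                         # multiplier stays >= 0
```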




neural network

Physics-informed neural network for ultrasound nondestructive quantification of surface breaking cracks. (arXiv:2005.03596v1 [cs.LG])

We introduce an optimized physics-informed neural network (PINN) trained to identify and characterize a surface-breaking crack in a metal plate. PINNs are neural networks that combine data and physics in the learning process by adding the residuals of a system of partial differential equations to the loss function. Our PINN is supervised with realistic ultrasonic surface acoustic wave data acquired at a frequency of 5 MHz. The data are represented as deformations of the top surface of the plate, measured by laser vibrometry. The PINN is physically informed by the acoustic wave equation, and its convergence is sped up using adaptive activation functions. These introduce a scalable hyperparameter into the activation function, optimized to achieve the best network performance by dynamically changing the topology of the loss function during optimization; their use significantly improves convergence in the current study. We use the PINN to estimate the speed of sound of the metal plate with an error of 1%, and then, by allowing the speed of sound to be space-dependent, we identify and characterize the crack as the positions where the estimated speed of sound has decreased. Our study also shows the effect of sub-sampling the data on the sensitivity of the sound speed estimates. More broadly, the results point to a promising deep neural network approach for ill-posed inverse problems.
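The adaptive-activation idea can be sketched compactly. Below is a hedged PyTorch illustration, not the authors' code: a trainable scalable parameter a inside tanh(n·a·x), plugged into a small network whose loss includes the acoustic wave equation residual, with the wave speed c as a trainable parameter (so a crack shows up as a locally reduced estimated speed).

```python
# Sketch of a PINN for the 1D acoustic wave equation u_tt = c^2 u_xx,
# with the adaptive activation tanh(n * a * x), a trainable.
import torch
import torch.nn as nn

class AdaptiveTanh(nn.Module):
    def __init__(self, n=10.0):
        super().__init__()
        self.n = n                                 # fixed scale factor
        self.a = nn.Parameter(torch.tensor(0.1))   # trainable slope

    def forward(self, x):
        return torch.tanh(self.n * self.a * x)

net = nn.Sequential(nn.Linear(2, 32), AdaptiveTanh(),
                    nn.Linear(32, 32), AdaptiveTanh(), nn.Linear(32, 1))
c = nn.Parameter(torch.tensor(1.5))                # trainable wave speed

xt = torch.rand(256, 2, requires_grad=True)        # collocation points (x, t)
u = net(xt)
grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
u_x, u_t = grads[:, 0], grads[:, 1]
u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0]
u_tt = torch.autograd.grad(u_t.sum(), xt, create_graph=True)[0][:, 1]
pde_residual = (u_tt - c**2 * u_xx).pow(2).mean()
# The full loss would add a data-misfit term on the measured surface
# displacements: loss = pde_residual + mse(net(xt_measured), u_measured)
```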




neural network

Reducing Communication in Graph Neural Network Training. (arXiv:2005.03300v1 [cs.LG])

Graph Neural Networks (GNNs) are powerful and flexible neural networks that use the naturally sparse connectivity information of the data. GNNs represent this connectivity as sparse matrices, which have lower arithmetic intensity and thus higher communication costs compared to dense matrices, making GNNs harder to scale to high concurrencies than convolutional or fully-connected neural networks.

We present a family of parallel algorithms for training GNNs. These algorithms are based on their counterparts in dense and sparse linear algebra, but they had not been previously applied to GNN training. We show that they can asymptotically reduce communication compared to existing parallel GNN training methods. We implement a promising and practical version that is based on 2D sparse-dense matrix multiplication using torch.distributed. Our implementation parallelizes over GPU-equipped clusters. We train GNNs on up to a hundred GPUs on datasets that include a protein network with over a billion edges.
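The computational kernel these algorithms parallelize can be shown in a few lines. The single-process sketch below (illustrative sizes, random toy graph) shows why GNN training is communication-bound: a GCN-style layer is essentially a sparse-dense matrix product H' = A(HW), and the distributed algorithms partition exactly this product, e.g. over a 2D process grid.

```python
# The kernel being parallelized: one GNN layer as a sparse-dense matmul.
import torch

n, f_in, f_out = 1000, 64, 32
edges = torch.randint(0, n, (2, 5000))              # random toy graph
A = torch.sparse_coo_tensor(edges, torch.ones(5000), (n, n)).coalesce()
H = torch.randn(n, f_in)                            # node features
W = torch.randn(f_in, f_out)                        # layer weights

H_next = torch.sparse.mm(A, H @ W)                  # SpMM: the costly step
print(H_next.shape)                                 # torch.Size([1000, 32])
```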




neural network

An Empirical Study of Incremental Learning in Neural Network with Noisy Training Set. (arXiv:2005.03266v1 [cs.LG])

The notion of incremental learning is to train an ANN in stages, as and when new training data arrive. Incremental learning has become widespread with the advent of deep learning. Noise in the training data reduces the accuracy of the algorithm. In this paper, we make an empirical study of the effect of noise in the training phase. We show numerically that the accuracy of the algorithm depends more on the location of the error than on the percentage of error. Using a perceptron, a feed-forward neural network, and a radial basis function neural network, we show that for the same percentage of error, the accuracy varies significantly with the location of the error. Furthermore, our results show that this dependence of accuracy on the location of error is independent of the algorithm; however, the slope of the degradation curve decreases for more sophisticated algorithms.
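The abstract's distinction between the percentage and the location of error suggests an experiment of the following shape. This scikit-learn sketch is an assumed protocol for illustration, not the paper's exact setup: the same fraction of labels is flipped, but the corrupted batch is moved across the incremental training stages.

```python
# Fixed noise *percentage*, varying noise *location* across stages.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
stages = np.array_split(np.arange(2000), 4)          # 4 incremental stages
X_test, y_test = X[2000:], y[2000:]

for noisy_stage in range(4):
    clf = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)
    clf.partial_fit(X[:1], y[:1], classes=[0, 1])    # initialize classes
    for stage, idx in enumerate(stages):
        labels = y[idx].copy()
        if stage == noisy_stage:                     # same 20% label noise,
            flip = rng.choice(idx.size, idx.size // 5, replace=False)
            labels[flip] = 1 - labels[flip]          # but a different stage
        for _ in range(20):                          # a few passes per stage
            clf.partial_fit(X[idx], labels)
    print(f"noise in stage {noisy_stage}: "
          f"test acc = {clf.score(X_test, y_test):.3f}")
```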




neural network

Efficient Characterization of Dynamic Response Variation Using Multi-Fidelity Data Fusion through Composite Neural Network. (arXiv:2005.03213v1 [stat.ML])

Uncertainties in a structure are inevitable and generally lead to variation in its dynamic response predictions. For a complex structure, brute-force Monte Carlo simulation for response variation analysis is infeasible, since a single run may already be computationally costly. Data-driven meta-modeling approaches have thus been explored to facilitate efficient emulation and statistical inference. The performance of a meta-model hinges on both the quality and the quantity of the training dataset. In practice, however, high-fidelity data acquired from high-dimensional finite element simulation or from experiment are generally scarce, which poses a significant challenge to meta-model establishment. In this research, we take advantage of the multi-level response prediction opportunity in structural dynamic analysis: rapidly acquiring a large amount of low-fidelity data from reduced-order modeling, and accurately acquiring a small amount of high-fidelity data from full-scale finite element analysis. Specifically, we formulate a composite neural network fusion approach that can fully utilize these multi-level, heterogeneous datasets. It implicitly identifies the correlation between the low- and high-fidelity datasets, which yields improved accuracy compared with the state of the art. Comprehensive investigations using frequency response variation characterization as a case example demonstrate the performance.
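A minimal sketch of the composite fusion idea, with illustrative architectures and placeholder random data standing in for the reduced-order-model and finite element samples: a low-fidelity network is trained on the abundant cheap data, and a second network maps the input together with the low-fidelity prediction to the high-fidelity response, implicitly learning their correlation.

```python
# Multi-fidelity fusion: LF surrogate + HF correction network.
import torch
import torch.nn as nn

lf_net = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 1))
hf_net = nn.Sequential(nn.Linear(4 + 1, 32), nn.Tanh(), nn.Linear(32, 1))

def train(net, x, y, epochs=500, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x), y)
        loss.backward()
        opt.step()

# Step 1: fit the LF surrogate on many cheap reduced-order-model samples.
x_lf, y_lf = torch.randn(2000, 4), torch.randn(2000, 1)   # placeholder data
train(lf_net, x_lf, y_lf)

# Step 2: fit the fusion network on a few expensive full FE samples,
# feeding it both the input and the LF prediction at that input.
x_hf, y_hf = torch.randn(50, 4), torch.randn(50, 1)       # placeholder data
with torch.no_grad():
    lf_at_hf = lf_net(x_hf)
train(hf_net, torch.cat([x_hf, lf_at_hf], dim=1), y_hf)
```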




neural network

Model Reduction and Neural Networks for Parametric PDEs. (arXiv:2005.03180v1 [math.NA])

We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation. For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology. Numerically we demonstrate the effectiveness of the method on a class of parametric elliptic PDE problems, showing convergence and robustness of the approximation scheme with respect to the size of the discretization, and compare our method with existing algorithms from the literature.
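The pattern of combining model reduction with a neural network can be sketched as follows (a hedged illustration with placeholder data, not the paper's experiments): project discretized inputs and outputs onto a few PCA/POD modes, learn a small network between the coefficient spaces, and lift back. Because the network acts on coefficients rather than grid values, the learned map is largely insensitive to the discretization size.

```python
# Model reduction + NN: PCA encode -> small network -> PCA decode.
import numpy as np
import torch
import torch.nn as nn

n_samples, n_grid, k = 400, 256, 10      # snapshots, mesh points, modes
A = np.random.rand(n_samples, n_grid)    # placeholder input functions
U = np.cumsum(A, axis=1) / n_grid        # placeholder "PDE solutions"

def pca_basis(S, k):
    mean = S.mean(axis=0)
    _, _, Vt = np.linalg.svd(S - mean, full_matrices=False)
    return mean, Vt[:k]                  # top-k POD modes

a_mean, Va = pca_basis(A, k)
u_mean, Vu = pca_basis(U, k)
za = torch.tensor((A - a_mean) @ Va.T, dtype=torch.float32)  # encode inputs
zu = torch.tensor((U - u_mean) @ Vu.T, dtype=torch.float32)  # encode outputs

net = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, k))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(za), zu)
    loss.backward(); opt.step()

# Predict: encode a new input, map coefficients, decode to the output grid.
u_pred = net(za[:1]).detach().numpy() @ Vu + u_mean
```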




neural network

Phase imaging with an untrained neural network




neural network

Advances in Computational Intelligence: 15th International Work-Conference on Artificial Neural Networks, IWANN 2019, Gran Canaria, Spain, June 12-14, 2019, Proceedings, Part I / edited by Ignacio Rojas, Gonzalo Joya, Andreu Catala

Online Resource




neural network

Advances in Computational Intelligence: 15th International Work-Conference on Artificial Neural Networks, IWANN 2019, Gran Canaria, Spain, June 12-14, 2019, Proceedings, Part II / edited by Ignacio Rojas, Gonzalo Joya, Andreu Catala

Online Resource




neural network

Raman spectrum and polarizability of liquid water from deep neural networks

Phys. Chem. Chem. Phys., 2020, Advance Article
DOI: 10.1039/D0CP01893G, Paper
Grace M. Sommers, Marcos F. Calegari Andrade, Linfeng Zhang, Han Wang, Roberto Car
Using deep neural networks to model the polarizability and potential energy surfaces, we compute the Raman spectrum of liquid water at several temperatures with ab initio molecular dynamics accuracy.




neural network

Active learning a coarse-grained neural network model for bulk water from sparse training data

Mol. Syst. Des. Eng., 2020, Advance Article
DOI: 10.1039/C9ME00184K, Communication
Troy D. Loeffler, Tarak K. Patra, Henry Chan, Subramanian K. R. S. Sankaranarayanan
An active learning scheme to train neural network potentials for molecular simulations.




neural network

Mining structure–property relationships in polymer nanocomposites using data driven finite element analysis and multi-task convolutional neural networks

Mol. Syst. Des. Eng., 2020, Advance Article
DOI: 10.1039/D0ME00020E, Paper
Yixing Wang, Min Zhang, Anqi Lin, Akshay Iyer, Aditya Shanker Prasad, Xiaolin Li, Yichi Zhang, Linda S. Schadler, Wei Chen, L. Catherine Brinson
In this paper, a data-driven deep learning approach for modeling the structure–property relationships of polymer nanocomposites is demonstrated. The method can be applied to understand other material mechanisms and to guide the design of materials with targeted performance.




neural network

[ASAP] Artificial Neural Networks Applied as Molecular Wave Function Solvers

Journal of Chemical Theory and Computation
DOI: 10.1021/acs.jctc.9b01132




neural network

Neural networks for control.

Online Resource




neural network

Real-Time IoT Imaging with Deep Neural Networks: Using Java on the Raspberry Pi 4 / Modrzyk, Nicolas

Online Resource




neural network

[ASAP] Flexible Fitting of Small Molecules into Electron Microscopy Maps Using Molecular Dynamics Simulations with Neural Network Potentials

Journal of Chemical Information and Modeling
DOI: 10.1021/acs.jcim.9b01167




neural network

Engineering applications of neural networks : 20th international conference, EANN 2019, Xersonisos, Crete, Greece, May 24-26, 2019 : proceedings / John Macintyre, Lazaros Iliadis, Ilias Maglogiannis, Chrisina Jayne (eds.)

EANN (Conference) (20th : 2019 : Xersonisos, Crete, Greece),




neural network

Evolutionary algorithms and neural networks : theory and applications / Seyedali Mirjalili

Mirjalili, Seyedali, author




neural network

Neural network PC tools : a practical guide / edited by Russell C. Eberhart and Roy W. Dobbins ; with a foreword by Bernard Widrow




neural network

Neural network measures gas below a sensor's limit

A deep learning algorithm can find signals buried in noise




neural network

The ghost in the shell: global neural network / edited by Alejandro Arbona and Ben Applegate

Barker Library - PN6720.G46 2018




neural network

An application of artificial neural networks in freeway incident detection




neural network

Prediction of commuter choice behavior using neural networks




neural network

A comparative study of artificial neural networks and info fuzzy networks on their use in software testing




neural network

A radial basis neural network for the analysis of transportation data




neural network

Application of support vector machines and neural networks in digital mammography




neural network

Road crack condition performance modeling using recurrent Markov chains and artificial neural networks




neural network

Neural network application for predicting the impact of trip reduction strategies




neural network

A primer on neural networks in transportation