neural networks

Artificial neural networks for demand forecasting of the Canadian forest products industry

The supply chains of the Canadian forest products industry depend heavily on accurate demand forecasts. The USA is the major export market for the industry, although some Canadian provinces also export forest products to other global markets. However, it is very difficult for each province to develop accurate demand forecasts, given the number of factors determining the demand for forest products in global markets. We develop multi-layer feed-forward artificial neural network (ANN) models for demand forecasting in the Canadian forest products industry. We find that the ANN models have lower prediction errors and higher threshold statistics than the traditional models for predicting the demand for Canadian forest products. Accurate demand forecasts will not only help improve the short-term profitability of the Canadian forest products industry but also its long-term competitiveness in global markets.
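As a rough sketch of the kind of multi-layer feed-forward model described (not the authors' architecture or data), a minimal Python version with hypothetical predictors might look like this:

# Minimal sketch of a feed-forward ANN for demand forecasting.
# The three predictors (exchange rate, housing starts, lagged demand) and the
# synthetic data are hypothetical placeholders, not the study's inputs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # [exchange_rate, housing_starts, lagged_demand]
y = 5.0 + X @ np.array([1.2, 0.8, 2.0]) + rng.normal(scale=0.1, size=200)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X[:150], y[:150])
print("held-out MAE:", np.abs(model.predict(X[150:]) - y[150:]).mean())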




neural networks

Psychological intervention of college students with unsupervised learning neural networks

To better explore the application of unsupervised learning neural networks in psychological interventions for college students, this study investigates the relationships among latent psychological variables from the perspective of neural networks. Firstly, college students' psychological crises and intervention systems are analysed, identifying several shortcomings in traditional psychological interventions, such as a lack of knowledge dissemination and imperfect management systems. Secondly, employing the Human-Computer Interaction (HCI) approach, a structural equation model is constructed for unsupervised learning neural networks. Finally, the study confirms the effectiveness of unsupervised learning neural networks in psychological interventions for college students: the weightings of the indicators at the criterion level are calculated to be 0.35, 0.27, 0.19, 0.11 and 0.1. Based on the HCI results, an emergency response system for college students' psychological crises is established, and several intervention measures are proposed.




neural networks

Employing Artificial Neural Networks and Multiple Discriminant Analysis to Evaluate the Impact of the COVID-19 Pandemic on the Financial Status of Jordanian Companies

Aim/Purpose: This paper aims to empirically quantify the financial distress caused by the COVID-19 pandemic on companies listed on the Amman Stock Exchange (ASE). The paper also aims to identify the most important predictors of financial distress pre- and mid-pandemic. Background: The COVID-19 pandemic has taken a huge toll, not only on human lives but also on many businesses. This provided the impetus to assess the impact of the pandemic on the financial status of Jordanian companies. Methodology: The initial sample comprised 165 companies, which was cleansed and reduced to 84 companies based on data availability. Financial data pertaining to the 84 companies were collected over a two-year period, 2019 and 2020, to empirically quantify the impact of the pandemic on companies in the dataset. Two approaches were employed. The first approach involved using Multiple Discriminant Analysis (MDA) based on Altman’s (1968) model to obtain the Z-score of each company over the investigation period. The second approach involved developing models using Artificial Neural Networks (ANNs) with 15 standard financial ratios to identify the most important variables in predicting financial distress and to create an accurate Financial Distress Prediction (FDP) model. Contribution: This research contributes by providing a better understanding of how financial distress predictors perform during dynamic and risky times. The research confirmed that, in spite of the negative impact of COVID-19 on the financial health of companies, the main predictors of financial distress remained relatively stable. This indicates that standard financial distress predictors can be regarded as impervious to extraneous financial and/or health calamities. Findings: Results using MDA indicated that more than 63% of the companies in the dataset had a lower Z-score in 2020 than in 2019. There was also an 8% increase in distressed companies in 2020, and around 6% of companies were no longer classified as healthy. As for the models built using ANNs, results show that the most important variable in predicting financial distress is the Return on Capital. The predictive accuracy for the 2019 and 2020 models, measured using the area under the Receiver Operating Characteristic (ROC) curve, was 87.5% and 97.6%, respectively. Recommendations for Practitioners: Decision makers and top management are encouraged to focus on the identified highly liquid ratios to make thoughtful decisions and initiate preemptive actions to avoid organizational failure. Recommendations for Researchers: This research can be considered a stepping stone to investigating the impact of COVID-19 on the financial status of companies. Researchers are recommended to replicate the methods used in this research across various business sectors to understand the financial dynamics of companies during uncertain times. Impact on Society: Stakeholders in Jordanian-listed companies should concentrate on the list of most important predictors of financial distress as presented in this study. Future Research: Future research may focus on expanding the scope of this study by including other geographical locations to check for the generalisability of the results. Future research may also include post-COVID-19 data to check for changes in results.
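For reference, the Z-score in Altman's (1968) model underlying the MDA approach is a fixed linear combination of five financial ratios. A small worked example with hypothetical figures:

# Altman (1968) Z-score for publicly traded manufacturing firms:
# Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5
# X1 = working capital / total assets, X2 = retained earnings / total assets,
# X3 = EBIT / total assets, X4 = market value of equity / total liabilities,
# X5 = sales / total assets. The figures below are hypothetical.
def altman_z(x1, x2, x3, x4, x5):
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

z = altman_z(x1=0.10, x2=0.15, x3=0.08, x4=0.90, x5=1.10)
# Common cut-offs: Z > 2.99 "safe", Z < 1.81 "distressed", otherwise grey zone.
print(f"Z = {z:.2f}")   # Z = 2.23 -> grey zone under the usual cut-offs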




neural networks

Neural networks for rapid phase quantification of cultural heritage X-ray powder diffraction data

Recent developments in synchrotron radiation facilities have increased the amount of data generated during acquisitions considerably, requiring fast and efficient data processing techniques. Here, the application of dense neural networks (DNNs) to data treatment of X-ray diffraction computed tomography (XRD-CT) experiments is presented. Processing involves mapping the phases in a tomographic slice by predicting the phase fraction in each individual pixel. DNNs were trained on sets of calculated XRD patterns generated using a Python algorithm developed in-house. An initial Rietveld refinement of the tomographic slice sum pattern provides additional information (peak widths and integrated intensities for each phase) to improve the generation of simulated patterns and make them closer to real data. A grid search was used to optimize the network architecture and demonstrated that a single fully connected dense layer was sufficient to accurately determine phase proportions. This DNN was used on the XRD-CT acquisition of a mock-up and a historical sample of highly heterogeneous multi-layered decoration of a late medieval statue, called 'applied brocade'. The phase maps predicted by the DNN were in good agreement with other methods, such as non-negative matrix factorization and serial Rietveld refinements performed with TOPAS, and outperformed them in terms of speed and efficiency. The method was evaluated by regenerating experimental patterns from predictions and using the weighted-profile R factor (Rwp) as the agreement factor. This assessment allowed us to confirm the accuracy of the results.
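A minimal sketch of the kind of network described, a single fully connected layer mapping one diffraction pattern to phase fractions for a pixel; the pattern length and number of phases below are placeholders, not the study's values:

# Sketch of a dense network mapping one XRD pattern (per pixel) to phase fractions.
# A softmax output keeps the fractions non-negative and summing to one.
import torch
import torch.nn as nn

n_points, n_phases = 2048, 4                 # illustrative sizes

model = nn.Sequential(nn.Linear(n_points, n_phases), nn.Softmax(dim=-1))

patterns = torch.rand(10, n_points)          # stand-in for simulated training patterns
phase_fractions = model(patterns)            # shape (10, n_phases), rows sum to 1
print(phase_fractions.sum(dim=-1))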




neural networks

[ F.748.19 (12/22) ] - Framework for audio structuralizing based on deep neural networks





neural networks

Canary Speech Receives Patent for Paired Neural Networks Technology

Canary Speech's Paired Neural Networks Technology is a form of voice biomarker technology that identifies subtle shifts in an individual's voice by analyzing it against previous samples from the same person.




neural networks

Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream

Umut Güçlü
Jul 8, 2015; 35:10005-10014
Behavioral/Systems/Cognitive




neural networks

Advanced algorithm for step detection in single-entity electrochemistry: a comparative study of wavelet transforms and convolutional neural networks

Faraday Discuss., 2024, Advance Article
DOI: 10.1039/D4FD00130C, Paper
Open Access
  This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
Ziwen Zhao, Arunava Naha, Nikolaos Kostopoulos, Alina Sekretareva
In this study, two approaches for step detection in single-entity electrochemistry data are developed and compared: discrete wavelet transforms and convolutional neural networks.




neural networks

Polymer chemistry informed neural networks (PCINNs) for data-driven modelling of polymerization processes

Polym. Chem., 2024, 15, 4580-4590
DOI: 10.1039/D4PY00995A, Paper
Nicholas Ballard, Jon Larrañaga, Kiarash Farajzadehahary, José M. Asua
A method is described for training neural networks to predict the outcome of polymerization processes that incorporates fundamental chemical knowledge, permitting the generation of data-driven predictive models from limited datasets.




neural networks

Self-supervised graph neural networks for polymer property prediction

Mol. Syst. Des. Eng., 2024, 9, 1130-1143
DOI: 10.1039/D4ME00088A, Paper
Open Access
  This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
Qinghe Gao, Tammo Dukker, Artur M. Schweidtmann, Jana M. Weber
Self-supervised learning for polymer property prediction in scarce data domains.




neural networks

Enhancing soil geographic recognition through LIBS technology: integrating the joint skewness algorithm with back-propagation neural networks

J. Anal. At. Spectrom., 2024, Advance Article
DOI: 10.1039/D4JA00251B, Paper
Weinan Zheng, Xun Gao, Kaishan Song, Hailong Yu, Qiuyun Wang, Lianbo Guo, Jingquan Lin
The meticulous task of soil region classification is fundamental to the effective management of soil resources and the development of accurate soil classification systems.




neural networks

Fast fitting of reflectivity data of growing thin films using neural networks

X-ray reflectivity (XRR) is a powerful and popular scattering technique that can give valuable insight into the growth behavior of thin films. This study shows how a simple artificial neural network model can be used to determine the thickness, roughness and density of thin films of different organic semiconductors [diindenoperylene, copper(II) phthalocyanine and α-sexithiophene] on silica from their XRR data with millisecond computation time and with minimal user input or a priori knowledge. For a large experimental data set of 372 XRR curves, it is shown that a simple fully connected model can provide good results with a mean absolute percentage error of 8–18% when compared with the results obtained by a genetic least mean squares fit using the classical Parratt formalism. Furthermore, current drawbacks and prospects for improvement are discussed.
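A minimal sketch of such a fully connected model, mapping a sampled reflectivity curve to thickness, roughness and density; the curve length and layer sizes are illustrative, not the published architecture:

# Sketch of a fully connected model mapping an XRR curve to (thickness, roughness, density).
import torch
import torch.nn as nn

n_q = 512                                    # number of sampled q points in the curve (illustrative)

model = nn.Sequential(
    nn.Linear(n_q, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 3),                        # thickness, roughness, density
)

curve = torch.rand(1, n_q)                   # stand-in for a measured reflectivity curve
thickness, roughness, density = model(curve).squeeze(0)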




neural networks

Multi-task pre-training of deep neural networks for digital pathology. (arXiv:2005.02561v2 [eess.IV] UPDATED)

In this work, we investigate multi-task learning as a way of pre-training models for classification tasks in digital pathology. It is motivated by the fact that many small and medium-sized datasets have been released by the community over the years, whereas there is no large-scale dataset similar to ImageNet in this domain. We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images. Then, we propose a simple architecture and training scheme for creating a transferable model, along with a robust evaluation and selection protocol to evaluate our method. Depending on the target task, we show that our models used as feature extractors either improve significantly over ImageNet pre-trained models or provide comparable performance. Fine-tuning improves performance over feature extraction and is able to compensate for the lack of specificity of ImageNet features, as both pre-training sources yield comparable performance.
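A minimal sketch of the multi-task setup described, a shared backbone with one classification head per task; the backbone choice and task sizes here are assumptions for illustration, not the paper's exact architecture:

# Sketch of multi-task pre-training: one shared feature extractor, one
# classification head per pathology task.
import torch
import torch.nn as nn
import torchvision

class MultiTaskModel(nn.Module):
    def __init__(self, classes_per_task):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Identity()                  # keep the 512-d feature extractor
        self.backbone = backbone
        self.heads = nn.ModuleList(nn.Linear(512, c) for c in classes_per_task)

    def forward(self, x, task_id):
        return self.heads[task_id](self.backbone(x))

model = MultiTaskModel(classes_per_task=[2, 5, 3])   # e.g. 3 tasks drawn from the pool
logits = model(torch.rand(4, 3, 224, 224), task_id=1)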




neural networks

Efficient Exact Verification of Binarized Neural Networks. (arXiv:2005.03597v1 [cs.AI])

We present a new system, EEV, for verifying binarized neural networks (BNNs). We formulate BNN verification as a Boolean satisfiability problem (SAT) with reified cardinality constraints of the form $y = (x_1 + \cdots + x_n \le b)$, where $x_i$ and $y$ are Boolean variables possibly with negation and $b$ is an integer constant. We also identify two properties, specifically balanced weight sparsity and lower cardinality bounds, that reduce the verification complexity of BNNs. EEV contains both a SAT solver enhanced to handle reified cardinality constraints natively and novel training strategies designed to reduce verification complexity by delivering networks with improved sparsity properties and cardinality bounds. We demonstrate the effectiveness of EEV by presenting the first exact verification results for $\ell_{\infty}$-bounded adversarial robustness of nontrivial convolutional BNNs on the MNIST and CIFAR10 datasets. Our results also show that, depending on the dataset and network architecture, our techniques verify BNNs between ten and ten thousand times faster than the best previous exact verification techniques for either binarized or real-valued networks.
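A small Python check of the semantics of a reified cardinality constraint (illustrative only; EEV itself handles such constraints inside a SAT solver):

# Semantics of y = (x1 + ... + xn <= b), where each literal xi may be negated.
def reified_cardinality(xs, negated, b):
    count = sum((not x) if neg else x for x, neg in zip(xs, negated))
    return count <= b

xs      = [True, False, True, True]
negated = [False, False, True, False]        # third literal is the negation of x3
y = reified_cardinality(xs, negated, b=2)    # 2 true literals <= 2, so y is True
print(y)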




neural networks

DMCP: Differentiable Markov Channel Pruning for Neural Networks. (arXiv:2005.03354v1 [cs.CV])

Recent works imply that channel pruning can be regarded as searching for an optimal sub-structure of an unpruned network. However, existing works based on this observation require training and evaluating a large number of structures, which limits their application. In this paper, we propose a novel differentiable method for channel pruning, named Differentiable Markov Channel Pruning (DMCP), to efficiently search for the optimal sub-structure. Our method is differentiable and can be directly optimized by gradient descent with respect to the standard task loss and a budget regularization (e.g. a FLOPs constraint). In DMCP, we model channel pruning as a Markov process, in which each state represents retaining the corresponding channel during pruning and transitions between states denote the pruning process. In the end, our method implicitly selects the proper number of channels in each layer through the Markov process with optimized transitions. To validate the effectiveness of our method, we perform extensive experiments on ImageNet with ResNet and MobileNetV2. Results show that our method achieves consistent improvements over state-of-the-art pruning methods in various FLOPs settings. The code is available at https://github.com/zx55/dmcp
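A toy illustration of the Markov view of channel pruning, in which keeping channel k+1 is conditioned on keeping channel k, so the marginal keep probabilities are cumulative products of transition probabilities; this is a sketch of the idea, not the authors' implementation:

# Channel k+1 can only be kept if channel k is kept, so the marginal probability of
# keeping channel k is the product of the first k transition probabilities.
import torch

transition = torch.tensor([0.99, 0.95, 0.80, 0.40, 0.10])  # p(keep k+1 | keep k), illustrative
keep_prob = torch.cumprod(transition, dim=0)                # marginal p(keep channel k)
expected_channels = keep_prob.sum()

print(keep_prob)           # monotonically non-increasing across channels
print(expected_channels)   # differentiable proxy for the pruned layer width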




neural networks

Constructing Accurate and Efficient Deep Spiking Neural Networks with Double-threshold and Augmented Schemes. (arXiv:2005.03231v1 [cs.NE])

Spiking neural networks (SNNs) are considered a potential candidate to overcome current challenges, such as the high power consumption of artificial neural networks (ANNs); however, there is still a gap between them with respect to recognition accuracy on practical tasks. A conversion strategy was thus introduced recently to bridge this gap by mapping a trained ANN to an SNN. However, it is still unclear to what extent the resulting SNN can benefit from both the accuracy advantage of ANNs and the high efficiency of the spike-based paradigm of computation. In this paper, we propose two new conversion methods, namely TerMapping and AugMapping. TerMapping is a straightforward extension of a typical threshold-balancing method with a double-threshold scheme, while AugMapping additionally incorporates a new augmented-spike scheme that employs a spike coefficient to carry the number of typical all-or-nothing spikes occurring at a time step. We examine the performance of our methods on the MNIST, Fashion-MNIST and CIFAR10 datasets. The results show that the proposed double-threshold scheme can effectively improve the accuracy of the converted SNNs. More importantly, the proposed AugMapping is more advantageous for constructing accurate, fast and efficient deep SNNs compared with other state-of-the-art approaches. Our study therefore provides new approaches for further integrating advanced techniques from ANNs to improve the performance of SNNs, which could be of great merit to applied developments with spike-based neuromorphic computing.
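A toy simulation contrasting all-or-nothing spikes with the augmented-spike idea, where a coefficient carries the number of threshold crossings in a single time step; illustrative only, not the TerMapping/AugMapping conversion code:

# Integrate-and-fire trace with hypothetical per-step inputs.
import numpy as np

threshold = 1.0
inputs = np.array([0.4, 2.3, 0.1, 1.4])      # membrane charge arriving per time step

v = 0.0
binary_spikes, augmented_spikes = [], []
for i in inputs:
    v += i
    k = int(v // threshold)                  # how many thresholds are crossed this step
    augmented_spikes.append(k)               # augmented spike: coefficient k in one step
    binary_spikes.append(min(k, 1))          # binary view of the same trace, losing the count
    v -= k * threshold                       # reset by the emitted amount

print(binary_spikes)      # [0, 1, 0, 1]
print(augmented_spikes)   # [0, 2, 0, 2]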




neural networks

ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context. (arXiv:2005.03191v1 [eess.AS])

Convolutional neural networks (CNNs) have shown promising results for end-to-end speech recognition, albeit still behind other state-of-the-art methods in performance. In this paper, we study how to bridge this gap and go beyond with a novel CNN-RNN-transducer architecture, which we call ContextNet. ContextNet features a fully convolutional encoder that incorporates global context information into convolution layers by adding squeeze-and-excitation modules. In addition, we propose a simple scaling method that scales the width of ContextNet and achieves a good trade-off between computation and accuracy. We demonstrate that on the widely used LibriSpeech benchmark, ContextNet achieves a word error rate (WER) of 2.1%/4.6% without an external language model (LM), 1.9%/4.1% with an LM, and 2.9%/7.0% with only 10M parameters on the clean/noisy LibriSpeech test sets. This compares to the previous best published system at 2.0%/4.6% with an LM and 3.9%/11.3% with 20M parameters. The superiority of the proposed ContextNet model is also verified on a much larger internal dataset.
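A minimal squeeze-and-excitation block for 1-D convolutional features, the kind of global-context module described; the channel sizes are illustrative, not ContextNet's configuration:

# Squeeze: global average over time; excite: per-channel gates that reweight the features.
import torch
import torch.nn as nn

class SqueezeExcite1d(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (batch, channels, time)
        context = x.mean(dim=-1)             # squeeze: one summary value per channel
        scale = self.fc(context)             # excite: gates in (0, 1)
        return x * scale.unsqueeze(-1)       # reweight every time step by its channel gate

se = SqueezeExcite1d(channels=256)
out = se(torch.rand(2, 256, 100))            # same shape as the input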




neural networks

Evaluation, Tuning and Interpretation of Neural Networks for Meteorological Applications. (arXiv:2005.03126v1 [physics.ao-ph])

Neural networks have opened up many new opportunities to utilize remotely sensed images in meteorology. Common applications include image classification, e.g., to determine whether an image contains a tropical cyclone, and image translation, e.g., to emulate radar imagery for satellites that only have passive channels. However, there are yet many open questions regarding the use of neural networks in meteorology, such as best practices for evaluation, tuning and interpretation. This article highlights several strategies and practical considerations for neural network development that have not yet received much attention in the meteorological community, such as the concept of effective receptive fields, underutilized meteorological performance measures, and methods for NN interpretation, such as synthetic experiments and layer-wise relevance propagation. We also consider the process of neural network interpretation as a whole, recognizing it as an iterative scientist-driven discovery process, and breaking it down into individual steps that researchers can take. Finally, while most work on neural network interpretation in meteorology has so far focused on networks for image classification tasks, we expand the focus to also include networks for image translation.




neural networks

Learning, transferring, and recommending performance knowledge with Monte Carlo tree search and neural networks. (arXiv:2005.03063v1 [cs.LG])

Making changes to a program to optimize its performance is an unscalable task that relies entirely upon human intuition and experience. In addition, companies operating at large scale are at a stage where no single individual understands the code controlling their systems, and for this reason, making changes to improve performance can become intractably difficult. In this paper, a learning system is introduced that provides AI assistance for finding recommended changes to a program. Specifically, it is shown how the evaluative-feedback, delayed-reward performance programming domain can be effectively formulated via the Monte Carlo tree search (MCTS) framework. It is then shown that established methods from computational games for using learning to expedite tree-search computation can be adapted to speed up computing recommended program alterations. Estimates of expected utility from MCTS trees built for previous problems are used to learn a sampling policy that remains effective across new problems, thus demonstrating transferability of optimization knowledge. This formulation is applied to the Apache Spark distributed computing environment, and a preliminary result shows that the time required to build a search tree for finding recommendations is reduced by up to a factor of 10.
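MCTS typically selects which node to expand using a UCT-style score; a minimal version is sketched below (the candidate program alterations are hypothetical, and this is not the paper's exact formulation):

# Upper confidence bound for trees: exploitation term plus an exploration bonus.
import math

def uct_score(total_value, visits, parent_visits, c=1.41):
    if visits == 0:
        return float("inf")                  # always try unvisited children first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

children = {"inline_loop": (3.2, 5), "cache_result": (1.1, 2), "reorder_joins": (0.0, 0)}
parent_visits = sum(v for _, v in children.values())
best = max(children, key=lambda k: uct_score(*children[k], parent_visits))
print(best)   # "reorder_joins" is unvisited, so it is explored first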




neural networks

Target Propagation in Recurrent Neural Networks

Recurrent Neural Networks have been widely used to process sequence data, but have long been criticized for their biological implausibility and training difficulties related to vanishing and exploding gradients. This paper presents a novel algorithm for training recurrent networks, target propagation through time (TPTT), that outperforms standard backpropagation through time (BPTT) on four out of the five problems used for testing. The proposed algorithm is initially tested and compared to BPTT on four synthetic time lag tasks, and its performance is also measured using the sequential MNIST data set. In addition, as TPTT uses target propagation, it allows for discrete nonlinearities and could potentially mitigate the credit assignment problem in more complex recurrent architectures.




neural networks

Capturing and Explaining Trajectory Singularities using Composite Signal Neural Networks. (arXiv:2003.10810v2 [cs.LG] UPDATED)

Spatial trajectories are ubiquitous and complex signals. Their analysis is crucial in many research fields, from urban planning to neuroscience. Several approaches have been proposed to cluster trajectories. They rely on hand-crafted features, which struggle to capture the spatio-temporal complexity of the signal, or on Artificial Neural Networks (ANNs), which can be more efficient but less interpretable. In this paper we present a novel ANN architecture designed to capture the spatio-temporal patterns characteristic of a set of trajectories, while taking into account the demographics of the navigators. Hence, our model extracts markers linked to both behaviour and demographics. We propose a composite signal neural network (CompSNN) combining three simple ANN modules. Each of these modules uses a different signal representation of the trajectory while remaining interpretable. Our CompSNN performs significantly better than its modules taken in isolation and allows us to visualise which parts of the signal were most useful for discriminating the trajectories.




neural networks

Differentiable Sparsification for Deep Neural Networks. (arXiv:1910.03201v2 [cs.LG] UPDATED)

A deep neural network relieves the burden of feature engineering by human experts, but comparable effort is instead required to determine an effective architecture. On the other hand, as network sizes have grown, considerable resources are also invested in reducing their size. These problems can be addressed by sparsification of an over-complete model, which removes redundant parameters or connections by pruning them away after training or encouraging them to become zero during training. In general, however, these approaches are not fully differentiable and interrupt end-to-end training with stochastic gradient descent, in that they require either a parameter-selection or a soft-thresholding step. In this paper, we propose a fully differentiable sparsification method for deep neural networks which allows parameters to be exactly zero during training, and can thus learn the sparsified structure and the weights of the network simultaneously using stochastic gradient descent. We apply the proposed method to various popular models in order to show its effectiveness.




neural networks

FNNC: Achieving Fairness through Neural Networks. (arXiv:1811.00247v3 [cs.LG] UPDATED)

In classification models, fairness can be ensured by solving a constrained optimization problem. We focus on fairness constraints like Disparate Impact, Demographic Parity, and Equalized Odds, which are non-decomposable and non-convex. Researchers define convex surrogates of the constraints and then apply convex optimization frameworks to obtain fair classifiers. Surrogates serve only as an upper bound to the actual constraints, and convexifying fairness constraints can be challenging.

We propose a neural network-based framework, FNNC, to achieve fairness while maintaining high accuracy in classification. The above fairness constraints are included in the loss using Lagrangian multipliers. We prove bounds on the generalization errors for the constrained losses, which asymptotically go to zero. The network is optimized using two-step mini-batch stochastic gradient descent. Our experiments show that FNNC performs as well as the state of the art, if not better. The experimental evidence supplements our theoretical guarantees. In summary, we have an automated solution for achieving fairness in classification which is easily extendable to many fairness constraints.
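A sketch of the general recipe of folding a fairness constraint into the loss with a Lagrangian multiplier, here a demographic parity gap; this is illustrative of the idea, not the exact FNNC loss:

# Penalize the gap between mean predicted positive rates of two protected groups.
import torch
import torch.nn.functional as F

def fairness_penalty(probs, group):
    return (probs[group == 0].mean() - probs[group == 1].mean()).abs()

logits = torch.randn(32, requires_grad=True)         # stand-in for model outputs
labels = torch.randint(0, 2, (32,)).float()
group = torch.randint(0, 2, (32,))                    # protected attribute

lam = 5.0                                             # Lagrangian multiplier
probs = torch.sigmoid(logits)
loss = F.binary_cross_entropy(probs, labels) + lam * fairness_penalty(probs, group)
loss.backward()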




neural networks

Model Reduction and Neural Networks for Parametric PDEs. (arXiv:2005.03180v1 [math.NA])

We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation. For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology. Numerically we demonstrate the effectiveness of the method on a class of parametric elliptic PDE problems, showing convergence and robustness of the approximation scheme with respect to the size of the discretization, and compare our method with existing algorithms from the literature.
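A rough sketch of the model-reduction idea under simple assumptions: compress discretized inputs and outputs with PCA and learn the map between the reduced coordinates with a small network. The data below are synthetic placeholders, not the paper's method in detail:

# Reduce, learn the map in coefficient space, then lift predictions back to full fields.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
A = rng.normal(size=(600, 300))                   # discretized input fields (synthetic)
U = np.tanh(A @ rng.normal(size=(300, 300)))      # corresponding discretized solutions

pca_in, pca_out = PCA(n_components=20), PCA(n_components=20)
a = pca_in.fit_transform(A)                       # reduced input coordinates
u = pca_out.fit_transform(U)                      # reduced output coordinates

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0).fit(a, u)

# Predict a full-field solution for a new input: reduce, map, lift back.
A_new = rng.normal(size=(1, 300))
U_pred = pca_out.inverse_transform(net.predict(pca_in.transform(A_new)))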




neural networks

Advances in Computational Intelligence: 15th International Work-Conference on Artificial Neural Networks, IWANN 2019, Gran Canaria, Spain, June 12-14, 2019, Proceedings, Part I / edited by Ignacio Rojas, Gonzalo Joya, Andreu Catala

Online Resource




neural networks

Advances in Computational Intelligence: 15th International Work-Conference on Artificial Neural Networks, IWANN 2019, Gran Canaria, Spain, June 12-14, 2019, Proceedings, Part II / edited by Ignacio Rojas, Gonzalo Joya, Andreu Catala

Online Resource




neural networks

Raman spectrum and polarizability of liquid water from deep neural networks

Phys. Chem. Chem. Phys., 2020, Advance Article
DOI: 10.1039/D0CP01893G, Paper
Grace M. Sommers, Marcos F. Calegari Andrade, Linfeng Zhang, Han Wang, Roberto Car
Using deep neural networks to model the polarizability and potential energy surfaces, we compute the Raman spectrum of liquid water at several temperatures with ab initio molecular dynamics accuracy.




neural networks

Mining structure–property relationships in polymer nanocomposites using data driven finite element analysis and multi-task convolutional neural networks

Mol. Syst. Des. Eng., 2020, Advance Article
DOI: 10.1039/D0ME00020E, Paper
Yixing Wang, Min Zhang, Anqi Lin, Akshay Iyer, Aditya Shanker Prasad, Xiaolin Li, Yichi Zhang, Linda S. Schadler, Wei Chen, L. Catherine Brinson
In this paper, a data-driven, deep learning approach for modelling the structure–property relationships of polymer nanocomposites is demonstrated. This method is applicable to understanding other material mechanisms and can guide the design of materials with targeted performance.




neural networks

[ASAP] Artificial Neural Networks Applied as Molecular Wave Function Solvers

Journal of Chemical Theory and Computation
DOI: 10.1021/acs.jctc.9b01132




neural networks

Neural networks for control.

Online Resource




neural networks

Real-Time IoT Imaging with Deep Neural Networks: Using Java on the Raspberry Pi 4 / Modrzyk, Nicolas

Online Resource




neural networks

Engineering applications of neural networks : 20th international conference, EANN 2019, Xersonisos, Crete, Greece, May 24-26, 2019 : proceedings / John Macintyre, Lazaros Iliadis, Ilias Maglogiannis, Chrisina Jayne (eds.)

EANN (Conference) (20th : 2019 : Xersonisos, Crete, Greece),




neural networks

Evolutionary algorithms and neural networks : theory and applications / Seyedali Mirjalili

Mirjalili, Seyedali, author




neural networks

An application of artificial neural networks in freeway incident detection




neural networks

Prediction of commuter choice behavior using neural networks




neural networks

A comparative study of artificial neural networks and info fuzzy networks on their use in software testing




neural networks

Application of support vector machines and neural networks in digital mammography




neural networks

Road crack condition performance modeling using recurrent Markov chains and artificial neural networks




neural networks

A primer on neural networks in transportation