
Statistical aspects of nuclear mass models. (arXiv:2002.04151v3 [nucl-th] UPDATED)

We study the information content of nuclear masses from the perspective of global models of nuclear binding energies. To this end, we employ a number of statistical methods and diagnostic tools, including Bayesian calibration, Bayesian model averaging, chi-square correlation analysis, principal component analysis, and empirical coverage probability. Using a Bayesian framework, we investigate the structure of the 4-parameter Liquid Drop Model by considering discrepant mass domains for calibration. We then use the chi-square correlation framework to analyze the 14-parameter Skyrme energy density functional calibrated using homogeneous and heterogeneous datasets. We show that a dramatic parameter reduction can be achieved in both cases. The advantage of Bayesian model averaging for improving uncertainty quantification is demonstrated. The statistical approaches used are described pedagogically; in this respect, the work can serve as a guide for future applications.
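To make the model-averaging step concrete, the following sketch combines the predictions of several calibrated mass models using posterior model weights; the numbers, the evidence-based weighting and all variable names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

# Hypothetical per-model predictions, predictive variances and log-evidences for one
# nucleus, as might come out of a Bayesian calibration; all numbers are illustrative.
predictions  = np.array([8.12, 8.05, 8.21])    # e.g. binding energy per nucleon (MeV)
variances    = np.array([0.04, 0.02, 0.09])    # predictive variance of each model
log_evidence = np.array([-10.3, -9.8, -12.1])  # log marginal likelihood of each model

# Posterior model weights are proportional to the evidence (flat prior over models).
w = np.exp(log_evidence - log_evidence.max())
w /= w.sum()

# Model-averaged mean and variance; the variance picks up the between-model spread,
# which is how averaging improves uncertainty quantification.
bma_mean = np.sum(w * predictions)
bma_var  = np.sum(w * (variances + predictions**2)) - bma_mean**2
print(bma_mean, np.sqrt(bma_var))
```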





Cyclic Boosting -- an explainable supervised machine learning algorithm. (arXiv:2002.03425v2 [cs.LG] UPDATED)

Supervised machine learning algorithms have seen spectacular advances and surpassed human-level performance in a wide range of specific applications. However, using complex ensemble or deep learning algorithms typically results in black-box models, where the path leading to individual predictions cannot be followed in detail. To address this issue, we propose the novel "Cyclic Boosting" machine learning algorithm, which performs accurate regression and classification tasks efficiently while allowing a detailed understanding of how each individual prediction is made.
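As a toy illustration of the cyclic, factor-based idea behind the explainability claim (not the full algorithm, which adds binning, smoothing and link functions), a prediction can be written as a global mean times one multiplicative factor per feature bin, with the factors updated cyclically:

```python
import numpy as np

def fit_cyclic_factors(X_binned, y, n_iter=10):
    """Toy sketch of a cyclic, factor-based regressor on binned features.

    The prediction is a global mean times one multiplicative factor per feature bin,
    and the factors are updated cyclically, one feature at a time. Assumes positive
    targets; the real algorithm's binning, smoothing and link functions are omitted.
    """
    n, p = X_binned.shape
    mu = y.mean()
    factors = [np.ones(X_binned[:, j].max() + 1) for j in range(p)]

    def predict(Xb):
        pred = np.full(len(Xb), mu)
        for j in range(p):
            pred *= factors[j][Xb[:, j]]
        return pred

    for _ in range(n_iter):
        for j in range(p):                        # cycle over features
            pred = predict(X_binned)
            for b in np.unique(X_binned[:, j]):   # re-estimate each bin's factor
                mask = X_binned[:, j] == b
                factors[j][b] *= y[mask].sum() / max(pred[mask].sum(), 1e-12)
    return mu, factors, predict

# Each individual prediction decomposes into mu and the per-feature factors applied
# to it, which is what makes the predictions easy to explain.
```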





On the impact of selected modern deep-learning techniques to the performance and celerity of classification models in an experimental high-energy physics use case. (arXiv:2002.01427v3 [physics.data-an] UPDATED)

Beginning from a basic neural-network architecture, we test the potential benefits offered by a range of advanced techniques for machine learning, in particular deep learning, in the context of a typical classification problem encountered in the domain of high-energy physics, using a well-studied dataset: the 2014 Higgs ML Kaggle dataset. The advantages are evaluated in terms of both performance metrics and the time required to train and apply the resulting models. Techniques examined include domain-specific data-augmentation, learning rate and momentum scheduling, (advanced) ensembling in both model-space and weight-space, and alternative architectures and connection methods.

Following the investigation, we arrive at a model which achieves performance equal to the winning solution of the original Kaggle challenge, whilst being significantly quicker to train and apply, and being suitable for use with both GPU and CPU hardware setups. These reductions in timing and hardware requirements potentially allow the use of more powerful algorithms in HEP analyses, where models must be retrained frequently, sometimes at short notice, by small groups of researchers with limited hardware resources. Additionally, a new wrapper library for PyTorch called LUMIN is presented, which incorporates all of the techniques studied.
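As an example of the learning-rate and momentum scheduling investigated here, the snippet below uses PyTorch's built-in one-cycle schedule; the model, data and hyperparameter values are placeholders, and this is not LUMIN's own API.

```python
import torch

model = torch.nn.Linear(30, 2)                      # placeholder for the classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One-cycle policy: the learning rate ramps up then anneals, while momentum does the inverse.
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.1, total_steps=1000)

loss_fn = torch.nn.CrossEntropyLoss()
for step in range(1000):
    x, y = torch.randn(64, 30), torch.randint(0, 2, (64,))  # stand-in for a real batch
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
    scheduler.step()                                 # the scheduler is stepped per batch
```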





Learned Step Size Quantization. (arXiv:1902.08153v3 [cs.LG] UPDATED)

Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases. Here, we present a method for training such networks, Learned Step Size Quantization, that achieves the highest accuracy to date on the ImageNet dataset when using models, from a variety of architectures, with weights and activations quantized to 2-, 3- or 4-bits of precision, and that can train 3-bit models that reach full precision baseline accuracy. Our approach builds upon existing methods for learning weights in quantized networks by improving how the quantizer itself is configured. Specifically, we introduce a novel means to estimate and scale the task loss gradient at each weight and activation layer's quantizer step size, such that it can be learned in conjunction with other network parameters. This approach works using different levels of precision as needed for a given system and requires only a simple modification of existing training code.
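A minimal PyTorch sketch of the learned-step-size idea is shown below: quantization is a straight-through round-and-clamp whose step size receives its own scaled gradient, so it can be trained jointly with the weights. The gradient expressions and the scale constant follow the abstract's description as understood here and should be treated as assumptions, not as the paper's reference code.

```python
import torch

class LSQQuantizer(torch.autograd.Function):
    """Straight-through quantizer with a learnable step size (illustrative sketch)."""

    @staticmethod
    def forward(ctx, x, step, qn, qp):
        ctx.save_for_backward(x, step)
        ctx.qn, ctx.qp = qn, qp
        return torch.clamp(torch.round(x / step), -qn, qp) * step

    @staticmethod
    def backward(ctx, grad_out):
        x, step = ctx.saved_tensors
        qn, qp = ctx.qn, ctx.qp
        v = x / step
        below, above = v <= -qn, v >= qp
        inside = ~(below | above)
        # Straight-through gradient w.r.t. the input inside the clipping range.
        grad_x = grad_out * inside
        # Gradient w.r.t. the step size, so it is learned alongside the other parameters;
        # the 1/sqrt(numel*qp) scale is an assumption about the gradient scaling.
        d_step = torch.where(below, -qn * torch.ones_like(v),
                 torch.where(above, qp * torch.ones_like(v), torch.round(v) - v))
        scale = 1.0 / (x.numel() * qp) ** 0.5
        grad_step = (grad_out * d_step).sum() * scale
        return grad_x, grad_step, None, None

# Example: quantize a weight tensor to 3-bit signed values in [-4, 3].
w = torch.randn(64, 64)
step = torch.nn.Parameter(w.abs().mean() * 2 / (3 ** 0.5))  # heuristic initialization
w_q = LSQQuantizer.apply(w, step, 4, 3)
```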





Deep Learning on Point Clouds for False Positive Reduction at Nodule Detection in Chest CT Scans. (arXiv:2005.03654v1 [eess.IV])

The paper focuses on a novel approach to false-positive reduction (FPR) of nodule candidates in computer-aided detection (CADe) systems after the suspicious-lesion proposal stage. Unlike common approaches in medical image analysis, the proposed method treats the input data not as a 2D or 3D image but as a point cloud, and uses deep learning models designed for point clouds. We found that point cloud models require less memory, are faster in both training and inference than traditional 3D CNNs, achieve better performance, and do not impose restrictions on the size of the input image and hence of the nodule candidate. We propose an algorithm for transforming 3D CT scan data into a point cloud. In some cases the volume of the nodule candidate can be much smaller than the surrounding context, for example for nodules with subpleural localization. We therefore developed an algorithm for sampling points from a point cloud constructed from a 3D image of the candidate region; the algorithm guarantees that both context and candidate information are captured in the point cloud of the nodule candidate. An experiment creating a dataset from the open LIDC-IDRI database for the FPR task was carefully designed, set up and described in detail. Data augmentation was applied both to avoid overfitting and as an upsampling method. Experiments are conducted with PointNet, PointNet++ and DGCNN. We show that the proposed approach outperforms baseline 3D CNN models, achieving 85.98 FROC versus 77.26 FROC for the baseline models.
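The conversion and sampling steps might look roughly like the sketch below: threshold a CT crop into a point cloud and then draw a fixed-size sample that is guaranteed to include candidate points alongside context. The threshold, the 50/50 split and all names are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def volume_to_point_cloud(vol, threshold=-600.0):
    """Turn a 3-D CT crop into an (N, 4) point cloud of voxel coordinates plus intensity.
    The intensity threshold is an illustrative choice, not the paper's value."""
    coords = np.argwhere(vol > threshold)
    intensities = vol[coords[:, 0], coords[:, 1], coords[:, 2]][:, None]
    return np.hstack([coords, intensities]).astype(np.float32)

def sample_candidate_and_context(context_pts, candidate_pts, n_points=1024, frac_candidate=0.5):
    """Draw a fixed-size cloud guaranteed to contain candidate points next to context
    points (the 50/50 split and point budget are hypothetical, not the paper's rule)."""
    n_cand = min(len(candidate_pts), int(n_points * frac_candidate))
    cand_idx = np.random.choice(len(candidate_pts), n_cand, replace=False)
    n_ctx = n_points - n_cand
    ctx_idx = np.random.choice(len(context_pts), n_ctx, replace=len(context_pts) < n_ctx)
    return np.vstack([candidate_pts[cand_idx], context_pts[ctx_idx]])
```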





Plan2Vec: Unsupervised Representation Learning by Latent Plans. (arXiv:2005.03648v1 [cs.LG])

In this paper we introduce plan2vec, an unsupervised representation learning approach that is inspired by reinforcement learning. Plan2vec constructs a weighted graph on an image dataset using near-neighbor distances, and then extrapolates this local metric to a global embedding by distilling a path integral over planned paths. When applied to control, plan2vec offers a way to learn goal-conditioned value estimates that are accurate over long horizons and that is both compute- and sample-efficient. We demonstrate the effectiveness of plan2vec on one simulated and two challenging real-world image datasets. Experimental results show that plan2vec successfully amortizes the planning cost, enabling reactive planning whose memory and computation complexity are linear rather than exhaustive over the entire state space.
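The local-to-global metric construction can be illustrated with an Isomap-like sketch: a near-neighbor graph supplies local distances, shortest paths extend them globally, and an embedding is fitted to the global metric. This is only an analogue of the idea; plan2vec's planning-based distillation and value learning are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path
from sklearn.manifold import MDS

features = np.random.rand(200, 64)                 # stand-in for per-image features

# Local metric: a weighted near-neighbor graph over the dataset.
knn = kneighbors_graph(features, n_neighbors=10, mode='distance')

# Global metric: shortest-path lengths through the graph extend the local distances.
geodesic = shortest_path(knn, directed=False)
geodesic[np.isinf(geodesic)] = geodesic[np.isfinite(geodesic)].max()  # guard disconnected parts

# Fit an embedding whose Euclidean distances approximate the graph distances.
embedding = MDS(n_components=2, dissimilarity='precomputed').fit_transform(geodesic)
```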





Generative Feature Replay with Orthogonal Weight Modification for Continual Learning. (arXiv:2005.03490v1 [cs.LG])

The ability of intelligent agents to learn and remember multiple tasks sequentially is crucial to achieving artificial general intelligence. Many continual learning (CL) methods have been proposed to overcome catastrophic forgetting, which notoriously impedes the sequential learning of neural networks because the data of previous tasks are unavailable. In this paper we focus on class-incremental learning, a challenging CL scenario in which the classes of each task are disjoint and task identity is unknown at test time. For this scenario, generative replay is an effective strategy that generates and replays pseudo-data for previous tasks to alleviate catastrophic forgetting. However, it is not trivial to learn a generative model continually for relatively complex data. Building on the recently proposed orthogonal weight modification (OWM) algorithm, which keeps previously learned input-output mappings approximately invariant when learning new tasks, we propose to directly generate and replay features. Empirical results on image and text datasets show that our method consistently improves OWM by a significant margin, whereas conventional generative replay always has a negative effect. Our method also beats a state-of-the-art generative replay method and is competitive with a strong baseline based on real data storage.
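For context, the OWM component can be sketched as a projector built recursively from past inputs and applied to weight gradients; the update rule below follows the commonly stated OWM formula, while the dimensions and the feature-replay wiring are illustrative assumptions.

```python
import numpy as np

class OWMProjector:
    """Sketch of orthogonal weight modification: a projector P, built recursively from
    past inputs, is applied to weight gradients so that new updates stay approximately
    orthogonal to input directions used by earlier tasks."""

    def __init__(self, in_dim, alpha=1e-3):
        self.P = np.eye(in_dim)
        self.alpha = alpha

    def update(self, x):
        # x: mean input vector of the current batch/task, shape (in_dim,)
        x = x.reshape(-1, 1)
        Px = self.P @ x
        self.P -= (Px @ Px.T) / (self.alpha + x.T @ Px)

    def project(self, grad_W):
        # grad_W: gradient of a weight matrix with shape (out_dim, in_dim)
        return grad_W @ self.P

# Usage sketch: after each batch/task, call proj.update(inputs.mean(axis=0));
# weight updates then use proj.project(grad_W) in place of the raw gradient.
```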





Transfer Learning for sEMG-based Hand Gesture Classification using Deep Learning in a Master-Slave Architecture. (arXiv:2005.03460v1 [eess.SP])

Recent advancements in diagnostic learning and the development of gesture-based human-machine interfaces have made surface electromyography (sEMG) increasingly important. Analysis of hand gestures requires an accurate assessment of sEMG signals. The proposed work presents a novel sequential master-slave architecture consisting of deep neural networks (DNNs) for classifying signs from the Indian sign language using signals recorded from multiple sEMG channels. The performance of the master-slave network is augmented by leveraging additional synthetic feature data generated by long short-term memory (LSTM) networks. The performance of the proposed network is compared to that of a conventional DNN before and after the addition of synthetic data. Adding synthetic data yields up to 14% improvement for the conventional DNN and up to 9% for the master-slave network, with an average accuracy of 93.5%, supporting the suitability of the proposed approach.





Deep learning of physical laws from scarce data. (arXiv:2005.03448v1 [cs.LG])

Harnessing data to discover the underlying governing laws or equations that describe the behavior of complex physical systems can significantly advance our modeling, simulation and understanding of such systems in various science and engineering disciplines. Recent advances in sparse identification show encouraging success in distilling closed-form governing equations from data for a wide range of nonlinear dynamical systems. However, the fundamental bottleneck of this approach lies in the robustness and scalability with respect to data scarcity and noise. This work introduces a novel physics-informed deep learning framework to discover governing partial differential equations (PDEs) from scarce and noisy data for nonlinear spatiotemporal systems. In particular, this approach seamlessly integrates the strengths of deep neural networks for rich representation learning, automatic differentiation and sparse regression to approximate the solution of system variables, compute essential derivatives, as well as identify the key derivative terms and parameters that form the structure and explicit expression of the PDEs. The efficacy and robustness of this method are demonstrated on discovering a variety of PDE systems with different levels of data scarcity and noise. The resulting computational framework shows the potential for closed-form model discovery in practical applications where large and accurate datasets are intractable to capture.
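Schematically, the three ingredients (a network for the solution, automatic differentiation for the derivatives, and sparse regression over a candidate library) can be sketched as follows; the network size, the candidate terms and the thresholding rule are illustrative stand-ins for the paper's framework.

```python
import torch

# A small network approximating the solution u(x, t); assumed to be trained on the
# scarce, noisy measurements (training loop omitted).
u_theta = torch.nn.Sequential(torch.nn.Linear(2, 50), torch.nn.Tanh(), torch.nn.Linear(50, 1))

def derivatives(xt):
    """Use automatic differentiation to get u and its derivatives at points (x, t)."""
    xt = xt.clone().requires_grad_(True)
    u = u_theta(xt)
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = du[:, 0:1], du[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u, u_t, u_x, u_xx

# Sparse regression over a (hypothetical) candidate library: u_t ≈ Theta(u) @ xi.
xt = torch.rand(200, 2)
u, u_t, u_x, u_xx = derivatives(xt)
Theta = torch.cat([torch.ones_like(u), u, u * u_x, u_xx], dim=1).detach()
xi = torch.linalg.lstsq(Theta, u_t.detach()).solution
xi[xi.abs() < 0.05] = 0.0   # threshold small coefficients to keep only a few PDE terms
```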





Curious Hierarchical Actor-Critic Reinforcement Learning. (arXiv:2005.03420v1 [cs.LG])

Hierarchical abstraction and curiosity-driven exploration are two common paradigms in current reinforcement learning approaches to break down difficult problems into a sequence of simpler ones and to overcome reward sparsity. However, there is a lack of approaches that combine these paradigms, and it is currently unknown whether curiosity also helps to perform the hierarchical abstraction. As a novelty and scientific contribution, we tackle this issue and develop a method that combines hierarchical reinforcement learning with curiosity. Herein, we extend a contemporary hierarchical actor-critic approach with a forward model to develop a hierarchical notion of curiosity. We demonstrate in several continuous-space environments that curiosity approximately doubles the learning performance and success rates for most of the investigated benchmarking problems.
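The forward-model notion of curiosity can be sketched as an intrinsic bonus proportional to the model's prediction error; the architecture, the squared-error form and the scaling factor below are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

class ForwardModel(torch.nn.Module):
    """Predicts the next state (or subgoal) from the current state and action; its
    prediction error is used as an intrinsic curiosity bonus."""

    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(state_dim + action_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, state_dim))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def curiosity_bonus(model, state, action, next_state, eta=0.5):
    # Intrinsic reward proportional to the forward-model prediction error; eta and the
    # squared-error form are illustrative choices.
    with torch.no_grad():
        err = ((model(state, action) - next_state) ** 2).mean(dim=-1)
    return eta * err
```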





CARL: Controllable Agent with Reinforcement Learning for Quadruped Locomotion. (arXiv:2005.03288v1 [cs.LG])

Motion synthesis in a dynamic environment has been a long-standing problem in character animation. Methods using motion capture data tend to scale poorly in complex environments because of their large capturing and labeling requirements. Physics-based controllers are effective in this regard, albeit less controllable. In this paper, we present CARL, a quadruped agent that can be controlled with high-level directives and reacts naturally to dynamic environments. Starting with an agent that can imitate individual animation clips, we use Generative Adversarial Networks to adapt high-level controls, such as speed and heading, to action distributions that correspond to the original animations. Further fine-tuning through deep reinforcement learning enables the agent to recover from unseen external perturbations while producing smooth transitions. It then becomes straightforward to create autonomous agents in dynamic environments by adding navigation modules on top of the entire process. We evaluate our approach by measuring the agent's ability to follow user control and provide a visual analysis of the generated motion to show its effectiveness.





An Empirical Study of Incremental Learning in Neural Network with Noisy Training Set. (arXiv:2005.03266v1 [cs.LG])

The notion of incremental learning is to train an ANN algorithm in stages, as and when newer training data arrive. Incremental learning is becoming widespread with the advent of deep learning. Noise in the training data reduces the accuracy of the algorithm. In this paper, we present an empirical study of the effect of noise in the training phase. We show numerically that the accuracy of the algorithm depends more on the location of the error than on the percentage of error. Using a perceptron, a feed-forward neural network and a radial basis function neural network, we show that for the same percentage of error, the accuracy of the algorithm varies significantly with the location of the error. Furthermore, our results show that this dependence of accuracy on error location is independent of the algorithm, although the slope of the degradation curve decreases for more sophisticated algorithms.
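The experimental setup can be sketched as staged training in which a fixed fraction of labels is corrupted in one chosen stage (the "location" of the error); the flip-noise model, the ten-class setting and the use of a partial_fit estimator such as scikit-learn's SGDClassifier are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def corrupt_labels(y, frac, n_classes, rng):
    """Flip a fraction of labels to random classes (illustrative noise model)."""
    y = y.copy()
    idx = rng.choice(len(y), int(frac * len(y)), replace=False)
    y[idx] = rng.integers(0, n_classes, size=len(idx))
    return y

def train_in_stages(stages, noisy_stage, noise_frac, n_classes=10, seed=0):
    """Incremental training where only one chosen stage receives label noise,
    while the overall noise percentage stays fixed."""
    rng = np.random.default_rng(seed)
    model = SGDClassifier()
    for k, (Xk, yk) in enumerate(stages):
        if k == noisy_stage:
            yk = corrupt_labels(yk, noise_frac, n_classes, rng)
        model.partial_fit(Xk, yk, classes=np.arange(n_classes))
    return model
```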





Collective Loss Function for Positive and Unlabeled Learning. (arXiv:2005.03228v1 [cs.LG])

People learn to discriminate between classes without explicit exposure to negative examples. In contrast, traditional machine learning algorithms often rely on negative examples; otherwise the model is prone to collapsing into always-true predictions. It is therefore crucial to design a learning objective that leads the model to converge and to make unbiased predictions without explicit negative signals. In this paper, we propose a collective loss function to learn from only positive and unlabeled data (cPU). We derive the loss function theoretically from the setting of PU learning. We perform extensive experiments on benchmark and real-world datasets. The results show that cPU consistently outperforms current state-of-the-art PU learning methods.





Learning on dynamic statistical manifolds. (arXiv:2005.03223v1 [math.ST])

Hyperbolic balance laws with uncertain (random) parameters and inputs are ubiquitous in science and engineering. Quantification of uncertainty in predictions derived from such laws, and reduction of predictive uncertainty via data assimilation, remain an open challenge. That is due to nonlinearity of governing equations, whose solutions are highly non-Gaussian and often discontinuous. To ameliorate these issues in a computationally efficient way, we use the method of distributions, which here takes the form of a deterministic equation for spatiotemporal evolution of the cumulative distribution function (CDF) of the random system state, as a means of forward uncertainty propagation. Uncertainty reduction is achieved by recasting the standard loss function, i.e., discrepancy between observations and model predictions, in distributional terms. This step exploits the equivalence between minimization of the square error discrepancy and the Kullback-Leibler divergence. The loss function is regularized by adding a Lagrangian constraint enforcing fulfillment of the CDF equation. Minimization is performed sequentially, progressively updating the parameters of the CDF equation as more measurements are assimilated.
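Schematically, and with the symbols below introduced purely for illustration (they are not taken from the paper), the regularized distributional loss described here takes the form

```latex
\mathcal{L}(\theta)
  = D_{\mathrm{KL}}\!\left( F_{\mathrm{obs}} \,\middle\|\, F_{\theta} \right)
  + \lambda \,\bigl\lVert \mathcal{R}_{\mathrm{CDF}}[F_{\theta}] \bigr\rVert^{2},
```

where $F_{\theta}$ is the predicted CDF of the random system state, $F_{\mathrm{obs}}$ its observation-based counterpart, $\mathcal{R}_{\mathrm{CDF}}$ the residual of the CDF equation, and $\lambda$ the Lagrange multiplier enforcing it; minimization proceeds sequentially, updating the CDF-equation parameters as measurements are assimilated.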





Deep Learning Framework for Detecting Ground Deformation in the Built Environment using Satellite InSAR data. (arXiv:2005.03221v1 [cs.CV])

The large volumes of Sentinel-1 data produced over Europe are being used to develop pan-national ground motion services. However, simple analysis techniques such as thresholding cannot reliably detect and classify complex deformation signals, which makes it challenging to provide usable information to a broad range of non-expert stakeholders. Here we explore the applicability of deep learning approaches by adapting a pre-trained convolutional neural network (CNN) to detect deformation in a national-scale velocity field. For our proof of concept, we focus on the UK, where previously identified deformation is associated with coal mining, groundwater withdrawal, landslides and tunnelling. The sparsity of measurement points and the presence of spike noise make this a challenging application for deep learning networks, which rely on spatial convolutions between images. Moreover, insufficient ground-truth data exist to construct a balanced training dataset, and the deformation signals are slower and more localised than in previous applications. We propose three enhancement methods to tackle these problems: i) spatial interpolation with modified matrix completion, ii) a synthetic training dataset based on the characteristics of the real UK velocity map, and iii) enhanced over-wrapping techniques. Using velocity maps spanning 2015-2019, our framework detects several areas of coal mining subsidence, uplift due to dewatering, slate quarries, landslides and tunnel engineering works. The results demonstrate the potential applicability of the proposed framework to the development of automated ground motion analysis systems.





Active Learning with Multiple Kernels. (arXiv:2005.03188v1 [cs.LG])

Online multiple kernel learning (OMKL) has provided attractive performance in nonlinear function learning tasks. Leveraging a random feature approximation, the major drawback of OMKL, known as the curse of dimensionality, has recently been alleviated. In this paper, we introduce a new research problem, termed (stream-based) active multiple kernel learning (AMKL), in which a learner is allowed to request the label of selected data from an oracle according to a selection criterion. This is necessary in many real-world applications where acquiring true labels is costly or time-consuming. We prove that AMKL achieves an optimal sublinear regret, implying that the proposed selection criterion indeed avoids unnecessary label requests. Furthermore, we propose AMKL with adaptive kernel selection (AMKL-AKS), in which irrelevant kernels can be excluded from a kernel dictionary 'on the fly'. This approach can improve the efficiency of active learning as well as the accuracy of the function approximation. Numerical tests with various real datasets demonstrate that AMKL-AKS yields similar or better performance than the best-known OMKL, with a smaller number of labeled data.





Machine learning in medicine : a complete overview

Cleophas, Ton J. M., author
9783030339708 (electronic bk.)





Machine learning in aquaculture : hunger classification of Lates calcarifer

Mohd Razman, Mohd Azraai, author
9789811522376 (electronic bk.)





Low-dose radiation effects on animals and ecosystems : long-term study on the Fukushima Nuclear Accident

9789811382185 (electronic bk.)





Deep learning in medical image analysis : challenges and applications

9783030331283 (electronic bk.)





Arctic plants of Svalbard : what we learn from the green in the treeless white world

Lee, Yoo Kyung, author
9783030345600 (electronic bk.)





A handbook of nuclear applications in humans' lives

Tabbakh, Farshid, author.
9781527544512 (electronic bk.)





Exact lower bounds for the agnostic probably-approximately-correct (PAC) machine learning model

Aryeh Kontorovich, Iosif Pinelis.

Source: The Annals of Statistics, Volume 47, Number 5, 2822--2854.

Abstract:
We provide an exact nonasymptotic lower bound on the minimax expected excess risk (EER) in the agnostic probably-approximately-correct (PAC) machine learning classification model and identify minimax learning algorithms as certain maximally symmetric and minimally randomized “voting” procedures. Based on this result, an exact asymptotic lower bound on the minimax EER is provided. This bound is of the simple form $c_{\infty}/\sqrt{\nu}$ as $\nu \to \infty$, where $c_{\infty}=0.16997\dots$ is a universal constant, $\nu = m/d$, $m$ is the size of the training sample and $d$ is the Vapnik–Chervonenkis dimension of the hypothesis class. It is shown that the differences between these asymptotic and nonasymptotic bounds, as well as the differences between these two bounds and the maximum EER of any learning algorithms that minimize the empirical risk, are asymptotically negligible, and all these differences are due to ties in the mentioned “voting” procedures. A few easy to compute nonasymptotic lower bounds on the minimax EER are also obtained, which are shown to be close to the exact asymptotic lower bound $c_{\infty}/\sqrt{\nu}$ even for rather small values of the ratio $\nu = m/d$. As an application of these results, we substantially improve existing lower bounds on the tail probability of the excess risk. Among the tools used are Bayes estimation and apparently new identities and inequalities for binomial distributions.
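The asymptotic bound is simple enough to evaluate directly; the helper below just plugs into $c_{\infty}/\sqrt{\nu}$ with $\nu = m/d$, using the constant quoted in the abstract, and the example values are arbitrary.

```python
import math

def eer_asymptotic_lower_bound(m, d, c_inf=0.16997):
    """Asymptotic minimax lower bound c_inf / sqrt(nu) with nu = m / d,
    using the universal constant quoted in the abstract."""
    nu = m / d
    return c_inf / math.sqrt(nu)

# e.g. m = 10000 training examples, VC dimension d = 100  ->  bound ~ 0.017
print(eer_asymptotic_lower_bound(10000, 100))
```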





On deep learning as a remedy for the curse of dimensionality in nonparametric regression

Benedikt Bauer, Michael Kohler.

Source: The Annals of Statistics, Volume 47, Number 4, 2261--2285.

Abstract:
Assuming that a smoothness condition and a suitable restriction on the structure of the regression function hold, it is shown that least squares estimates based on multilayer feedforward neural networks are able to circumvent the curse of dimensionality in nonparametric regression. The proof is based on new approximation results concerning multilayer feedforward neural networks with bounded weights and a bounded number of hidden neurons. The estimates are compared with various other approaches by using simulated data.





Austin-Area District Looks for Digital/Blended Learning Program; Baltimore Seeks High School Literacy Program

The Round Rock Independent School District in Texas is looking for a digital curriculum and blended learning program. Baltimore is looking for a comprehensive high school literacy program.






ACT and Teachers’ Union Partner to Provide Remote Learning Resources Amid Pandemic

ACT and the American Federation of Teachers are partnering to provide free resources as educators increasingly switch to distance learning amid the COVID-19 pandemic.






Pearson K12 Spinoff Rebranded as ‘Savvas Learning Company’

Savvas Learning Company will continue to provide its K-12 products and services, and is working to support districts with their remote learning needs during school closures.






Chaffetz: I don't understand why Adam Schiff continues to have a security clearance

Fox News contributor Jason Chaffetz and Andy McCarthy react to House Intelligence transcripts on Russia probe.






Learning Semiparametric Regression with Missing Covariates Using Gaussian Process Models

Abhishek Bishoyi, Xiaojing Wang, Dipak K. Dey.

Source: Bayesian Analysis, Volume 15, Number 1, 215--239.

Abstract:
Missing data often arise as a practical problem when applying classical models in statistical analysis. In this paper, we consider a semiparametric regression model with nonparametric components in the presence of missing covariates, under a Bayesian framework. Gaussian processes are a popular tool in nonparametric regression because of their flexibility and because much of the ensuing computation is parametric Gaussian computation. However, when covariates are missing, the most frequently used covariance functions of a Gaussian process are not well defined. We propose an imputation method to solve this issue and perform our analysis using Bayesian inference, specifying objective priors on the parameters of the Gaussian process models. Several simulations are conducted to illustrate the effectiveness of the proposed method, and the method is further exemplified on two real datasets: one involving the Langmuir equation, commonly used in pharmacokinetic models, and another using the Auto-mpg data from the StatLib library.





Probability Based Independence Sampler for Bayesian Quantitative Learning in Graphical Log-Linear Marginal Models

Ioannis Ntzoufras, Claudia Tarantola, Monia Lupparelli.

Source: Bayesian Analysis, Volume 14, Number 3, 797--823.

Abstract:
We introduce a novel Bayesian approach for quantitative learning for graphical log-linear marginal models. These models belong to curved exponential families that are difficult to handle from a Bayesian perspective. The likelihood cannot be analytically expressed as a function of the marginal log-linear interactions, but only in terms of cell counts or probabilities. Posterior distributions cannot be directly obtained, and Markov Chain Monte Carlo (MCMC) methods are needed. Finally, a well-defined model requires parameter values that lead to compatible marginal probabilities. Hence, any MCMC should account for this important restriction. We construct a fully automatic and efficient MCMC strategy for quantitative learning for such models that handles these problems. While the prior is expressed in terms of the marginal log-linear interactions, we build an MCMC algorithm that employs a proposal on the probability parameter space. The corresponding proposal on the marginal log-linear interactions is obtained via parameter transformation. We exploit a conditional conjugate setup to build an efficient proposal on probability parameters. The proposed methodology is illustrated by a simulation study and a real dataset.





Comment on “Automated Versus Do-It-Yourself Methods for Causal Inference: Lessons Learned from a Data Analysis Competition”

Susan Gruber, Mark J. van der Laan.

Source: Statistical Science, Volume 34, Number 1, 82--85.

Abstract:
Dorie and co-authors (DHSSC) are to be congratulated for initiating the ACIC Data Challenge. Their project engaged the community and accelerated research by providing a level playing field for comparing the performance of a priori specified algorithms. DHSSC identified themes concerning characteristics of the DGP, properties of the estimators, and inference. We discuss these themes in the context of targeted learning.





A framework for mesencephalic dopamine systems based on predictive Hebbian learning

PR Montague
Mar 1, 1996; 16:1936-1947
Articles





Adaptive representation of dynamics during learning of a motor task

R Shadmehr
May 1, 1994; 14:3208-3224
Articles





Learning Debian GNU/Linux





Noncoding Microdeletion in Mouse Hgf Disrupts Neural Crest Migration into the Stria Vascularis, Reduces the Endocochlear Potential, and Suggests the Neuropathology for Human Nonsyndromic Deafness DFNB39

Hepatocyte growth factor (HGF) is a multifunctional protein that signals through the MET receptor. HGF stimulates cell proliferation, cell dispersion, neuronal survival, and wound healing. In the inner ear, levels of HGF must be fine-tuned for normal hearing. In mice, a deficiency of HGF expression limited to the auditory system, or an overexpression of HGF, causes neurosensory deafness. In humans, noncoding variants in HGF are associated with nonsyndromic deafness DFNB39. However, the mechanism by which these noncoding variants cause deafness was unknown. Here, we reveal the cause of this deafness using a mouse model engineered with a noncoding intronic 10 bp deletion (del10) in Hgf. Male and female mice homozygous for del10 exhibit moderate-to-profound hearing loss at 4 weeks of age as measured by tone burst auditory brainstem responses. The wild type (WT) 80 mV endocochlear potential was significantly reduced in homozygous del10 mice compared with WT littermates. In normal cochlea, endocochlear potentials are dependent on ion homeostasis mediated by the stria vascularis (SV). Previous studies showed that developmental incorporation of neural crest cells into the SV depends on signaling from HGF/MET. We show by immunohistochemistry that, in del10 homozygotes, neural crest cells fail to infiltrate the developing SV intermediate layer. Phenotyping and RNAseq analyses reveal no other significant abnormalities in other tissues. We conclude that, in the inner ear, the noncoding del10 mutation in Hgf leads to developmental defects of the SV and consequently dysfunctional ion homeostasis and a reduced endocochlear potential, recapitulating human DFNB39 nonsyndromic deafness.

SIGNIFICANCE STATEMENT Hereditary deafness is a common, clinically and genetically heterogeneous neurosensory disorder. Previously, we reported that human deafness DFNB39 is associated with noncoding variants in the 3'UTR of a short isoform of HGF encoding hepatocyte growth factor. For normal hearing, HGF levels must be fine-tuned, as an excess or deficiency of HGF causes deafness in mice. Using an Hgf mutant mouse with a small 10 bp deletion recapitulating a human DFNB39 noncoding variant, we demonstrate that neural crest cells fail to migrate into the stria vascularis intermediate layer, resulting in a significantly reduced endocochlear potential, the driving force for sound transduction by inner ear hair cells. HGF-associated deafness is a neurocristopathy but, unlike many other neurocristopathies, it is not syndromic.





Modulations of Insular Projections by Prior Belief Mediate the Precision of Prediction Error during Tactile Learning

Awareness of surprising sensory events is shaped by prior belief inferred from past experience. Here, we combined hierarchical Bayesian modeling with fMRI on an associative learning task in 28 male human participants to characterize the effect of the prior belief of tactile events on connections mediating the outcome of perceptual decisions. Activity in the anterior insular cortex (AIC), premotor cortex (PMd), and inferior parietal lobule (IPL) was modulated by prior belief on unexpected targets compared with expected targets. On expected targets, prior belief decreased the connection strength from AIC to IPL, whereas it increased the connection strength from AIC to PMd when targets were unexpected. Individual differences in the modulatory strength of prior belief on insular projections correlated with the precision that increases the influence of prediction errors on belief updating. These results suggest complementary effects of prior belief on insular-frontoparietal projections mediating the precision of prediction during probabilistic tactile learning.

SIGNIFICANCE STATEMENT In a probabilistic environment, the prior belief of sensory events can be inferred from past experiences. How this prior belief modulates effective brain connectivity for updating expectations for future decision-making remains unexplored. Combining hierarchical Bayesian modeling with fMRI, we show that during tactile associative learning, prior expectations modulate connections originating in the anterior insula cortex and targeting salience-related and attention-related frontoparietal areas (i.e., parietal and premotor cortex). These connections seem to be involved in updating evidence based on the precision of ascending inputs to guide future decision-making.





Learn how cash transfer programmes improve lives in sub-Saharan Africa and share the infographics

Did you know that cash transfer (CT) programmes in sub-Saharan African countries actually have a significant impact? In Malawi, these programmes helped families invest in agricultural equipment and livestock to produce their own food and reduce levels of negative coping strategies, like begging and school drop-outs. In Kenya, secondary school attendance rose by 9 percent and access to [...]





Learn how good food can improve your health

Have you ever wondered if you are getting adequate nutrients from the food you eat? It is a common misconception that malnutrition means not getting enough food. This is, however, incorrect! People who take in insufficient food can be malnourished, but also those who consume too much face the same risks. Malnutrition is defined as “An abnormal physiological condition caused by [...]





5 critical things we learned from the latest IPCC report on climate change

Today leading international experts on climate change, the IPCC, presented their latest report on the impacts of climate change on humanity, and what we can do about it. It’s a lengthy report, so we’ve shrunk it down to Oxfam's five key takeaways on climate change and hunger. 1. Climate change: the impacts on crops are worse than we thought Climate change has [...]





Recommended: 7 free e-learning courses to bookmark

E-learning was quite the buzzword a couple of decades ago – and when the internet took off in earnest it became even more so. Today e-learning is mainstreamed in many organizations, including FAO, with more than 400 000 learners taking advantage of FAO's offerings. FAO's e-learning center offers free interactive courses – in English, French and Spanish – on topics ranging [...]





Seven examples of nuclear technology improving food and agriculture

Some of the most innovative ways being used to improve agricultural practices involve nuclear technology. Nuclear applications in agriculture rely on the use of isotopes and radiation techniques to combat pests and diseases, increase crop production, protect land and water resources, ensure food safety and authenticity, and increase livestock production. FAO and the International Atomic Energy Agency (IAEA) have been expanding [...]





Eight Things We’ve Learned About Moms Since the Last Mother's Day

From pregnancy to birth and beyond, mothers, both animal and human, show off some amazing skills





This Week's Best Livestream Learning Opportunities

From doodle sessions to zoo tours, here's a week of online activities to keep your kids learning during the school shutdown





LeVar Burton Reads Stories on Twitter and Other Livestream Learning Opportunities This Week

Learn hip-hop dance or do citizen science without leaving home this week, thanks to the internet's many intrepid artists and educators





A Read-Along With Michelle Obama and Other Livestream Learning Opportunities

Schools are shuttered, but kids can dance with New York's Ballet Hispánico and listen to a story from a certain former First Lady





The Best Places for Your Kids to Learn Real-Life Skills Online

Why not use quarantine as an opportunity to have your homeschoolers master woodworking or engine repair?





Basel Committee and IOSCO announce deferral of final implementation phases of the margin requirements for non-centrally cleared derivatives

BCBS Press release "Basel Committee and IOSCO announce deferral of final implementation phases of the margin requirements for non-centrally cleared derivatives", 3 April 2020





Learning the value of resilience and technology: the global financial system after Covid-19

Remarks by Benoît Cœuré, Head of the Bank for International Settlements Innovation Hub, at the Reinventing Bretton Woods Committee - Chamber of Digital Commerce webinar on "The world economy transformed", 17 April 2020.





OPP officer who shot and killed charging man cleared

Ontario's police watchdog says an OPP officer didn't break the law when he shot and killed a man running at him with an aluminum bat in November 2019.





Have to learn to live with Covid-19: Govt – Times of India

Have to learn to live with Covid-19: Govt (Times of India)
Health ministry says learn to live with coronavirus as India's COVID-19 count crosses 56,000 (Livemint)
People must learn to live with the virus, follow prevention guidelines: G...


