rai A young woman and a young man in the reign of King Charles II having a quarrel: they prepare to surrender each other's portrait miniature. Engraving by C. Heath, 1824, after G.S. Newton. By feedproxy.google.com Published On :: London (6 Seymour Place, Euston Square) : Charles Heath ; [London] (Poultry) : Robert Jennings, May 15 1827 ([London] : Printed by McQueen) Full Article
rai A castle (the Castello Odescalchi di Bracciano?), with a flock of sheep attended by a shepherd. Etching and mezzotint by L. Marvy after Claude Lorraine. By feedproxy.google.com Published On :: [Paris] : Calcographie du Louvre, Musées Imperiaux, [1849?] Full Article
rai Babylon: Nebuchadnezzar praises the greatness of the city. Coloured etching, 17--. By feedproxy.google.com Published On :: Se vend a Augsbourg [Augsburg] : Au Negoce com(m)un de l'Academie Imperiale d'Empire des Arts libereaux avec privilege de Sa Majesté Impériale et avec defense ni d'en faire ni de vendre les copies, [between 1700 and 1799] Full Article
rai For Educators Vying for State Office, Teachers' Union Offers 'Soup to Nuts' Campaign Training By feedproxy.google.com Published On :: Wed, 05 Sep 2018 00:00:00 +0000 In the aftermath of this spring's teacher protests, more educators are running for state office—and the National Education Association is seizing on the political moment. Full Article Illinois
rai In Illinois, New Budget Caps Raises and Limits Pensions for Teachers By feedproxy.google.com Published On :: Tue, 05 Jun 2018 00:00:00 +0000 The state's budget bill, which Republican Gov. Bruce Rauner signed into law this week, caps annual raises for end-of-career teachers, lowering the pension they can receive. Full Article Illinois
rai Call for Racial Equity Training Leads to Threats to Superintendent, Resistance from Community By feedproxy.google.com Published On :: Thu, 20 Jun 2019 00:00:00 +0000 Controversy over an initiative aimed at reducing inequities in Lee's Summit, Mo., schools led the police department to provide security protection for the district's first African-American superintendent. Now the school board has reversed course. Full Article Missouri
rai How does my brain work? / by Stacey A. Bedwell. By search.wellcomelibrary.org Published On :: [Poland] : [Stacey A. Bedwell], 2016. Full Article
rai Daddy Dog and Papi Panda's rainbow family. By search.wellcomelibrary.org Published On :: Middletown, DE : Amazon, 2018. Full Article
rai Opiate receptor subtypes and brain function / editors, Roger M. Brown, Doris H. Clouet, David P. Friedman. By search.wellcomelibrary.org Published On :: Rockville, Maryland : National Institute on Drug Abuse, 1986. Full Article
rai Addict aftercare : recovery training and self-help / Fred Zackon, William E. McAuliffe, James M.N. Ch'ien. By search.wellcomelibrary.org Published On :: Full Article
rai Brain on meds. By search.wellcomelibrary.org Published On :: [London] : [publisher not identified], [2019] Full Article
rai A survey of alcohol and drug abuse programs in the railroad industry / [Lyman C. Hitchcock, Mark S. Sanders ; Naval Weapons Support Center]. By search.wellcomelibrary.org Published On :: Washington, D.C. : Department of Transportation, Federal Railroad Administration, 1976. Full Article
rai 'A pioneer, a trailblazer' - Reaction to McGraw's retirement By sports.yahoo.com Published On :: Wed, 22 Apr 2020 21:42:02 GMT Notre Dame coach Muffet McGraw retired after 33 seasons Wednesday. ''What she did for me in those four years, I came in as a girl and left as a woman.'' - WNBA player Kayla McBride, who played for Notre Dame from 2010-14. Full Article article Sports
rai A Low Complexity Algorithm with O(√T) Regret and O(1) Constraint Violations for Online Convex Optimization with Long Term Constraints By Published On :: 2020 This paper considers online convex optimization over a complicated constraint set, which typically consists of multiple functional constraints and a set constraint. The conventional online projection algorithm (Zinkevich, 2003) can be difficult to implement due to the potentially high computation complexity of the projection operation. In this paper, we relax the functional constraints by allowing them to be violated at each round but still requiring them to be satisfied in the long term. This type of relaxed online convex optimization (with long term constraints) was first considered in Mahdavi et al. (2012). That prior work proposes an algorithm to achieve $O(\sqrt{T})$ regret and $O(T^{3/4})$ constraint violations for general problems and another algorithm to achieve an $O(T^{2/3})$ bound for both regret and constraint violations when the constraint set can be described by a finite number of linear constraints. A recent extension in Jenatton et al. (2016) can achieve $O(T^{\max\{\theta,1-\theta\}})$ regret and $O(T^{1-\theta/2})$ constraint violations where $\theta\in(0,1)$. The current paper proposes a new simple algorithm that yields improved performance in comparison to prior works. The new algorithm achieves an $O(\sqrt{T})$ regret bound with $O(1)$ constraint violations. Full Article
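The drift-plus-penalty idea behind such long-term-constraint algorithms can be sketched in a few lines. The toy problem below (minimizing (x - 1)^2 subject to x <= 0.5 in the long term) and all of its parameters are illustrative assumptions, not the paper's actual algorithm: a virtual queue Q accumulates constraint violations and is folded into the gradient step, so violations average out over time.

```python
def online_opt_long_term(T=2000, eta=0.05):
    """Toy drift-plus-penalty loop: minimize f(x) = (x - 1)^2 online while
    requiring the constraint g(x) = x - 0.5 <= 0 to hold in the long term.
    The virtual queue Q accumulates violations and acts as a time-varying
    penalty weight in the gradient step."""
    x, Q = 0.0, 0.0
    violations = []
    for _ in range(T):
        grad_f = 2.0 * (x - 1.0)          # gradient of the round loss
        grad_g = 1.0                      # gradient of the constraint
        x = x - eta * (grad_f + Q * grad_g)
        Q = max(0.0, Q + (x - 0.5))       # queue update with g(x)
        violations.append(max(0.0, x - 0.5))
    return x, sum(violations) / T
```

The iterate overshoots the constraint at first, the queue grows, and the trajectory settles near the constrained optimum with a small time-averaged violation.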
rai A Unified Framework for Structured Graph Learning via Spectral Constraints By Published On :: 2020 Graph learning from data is a canonical problem that has received substantial attention in the literature. Learning a structured graph is essential for interpretability and identification of the relationships among data. In general, learning a graph with a specific structure is an NP-hard combinatorial problem and thus designing a general tractable algorithm is challenging. Some useful structured graphs include connected, sparse, multi-component, bipartite, and regular graphs. In this paper, we introduce a unified framework for structured graph learning that combines Gaussian graphical model and spectral graph theory. We propose to convert combinatorial structural constraints into spectral constraints on graph matrices and develop an optimization framework based on block majorization-minimization to solve structured graph learning problem. The proposed algorithms are provably convergent and practically amenable for a number of graph based applications such as data clustering. Extensive numerical experiments with both synthetic and real data sets illustrate the effectiveness of the proposed algorithms. An open source R package containing the code for all the experiments is available at https://CRAN.R-project.org/package=spectralGraphTopology. Full Article
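A small illustration of the spectral-constraint idea: the multiplicity of the zero eigenvalue of the graph Laplacian equals the number of connected components, so constraining the spectrum of the Laplacian controls graph structure. The sketch below (pure Python, hypothetical toy graph) builds the combinatorial Laplacian and cross-checks the component count combinatorially with BFS; it is not the spectralGraphTopology implementation.

```python
from collections import deque

def laplacian(n, edges):
    """Combinatorial graph Laplacian L = D - A as a dense list of rows."""
    L = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1.0
        L[j][j] += 1.0
        L[i][j] -= 1.0
        L[j][i] -= 1.0
    return L

def n_components(n, edges):
    """Count connected components with BFS; spectral graph theory says this
    equals the multiplicity of the zero eigenvalue of the Laplacian."""
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    seen, comps = set(), 0
    for s in range(n):
        if s in seen:
            continue
        comps += 1
        seen.add(s)
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return comps
```

Every Laplacian row sums to zero (the all-ones vector is always in the null space), and a two-component graph has a two-dimensional null space.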
rai Tensor Train Decomposition on TensorFlow (T3F) By Published On :: 2020 Tensor Train decomposition is used across many branches of machine learning. We present T3F—a library for Tensor Train decomposition based on TensorFlow. T3F supports GPU execution, batch processing, automatic differentiation, and versatile functionality for the Riemannian optimization framework, which takes into account the underlying manifold structure to construct efficient optimization methods. The library makes it easier to implement machine learning papers that rely on the Tensor Train decomposition. T3F includes documentation, examples and 94% test coverage. Full Article
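Independent of the T3F API, the Tensor Train format itself is easy to illustrate: a d-way tensor is stored as d small cores, and each entry is recovered by multiplying one slice per core. The contraction below is a minimal pure-Python sketch; the nested-list core layout G[r_prev][i][r_next] is an assumption for illustration only.

```python
import itertools

def tt_reconstruct(cores):
    """Contract TT-cores G_k[r_{k-1}][i_k][r_k] (nested lists) into the full
    tensor, returned as a dict from index tuples to values."""
    dims = [len(G[0]) for G in cores]
    full = {}
    for idx in itertools.product(*[range(d) for d in dims]):
        vec = [1.0]                       # row vector over the running rank
        for G, i in zip(cores, idx):
            vec = [sum(vec[r] * G[r][i][s] for r in range(len(vec)))
                   for s in range(len(G[0][0]))]
        full[idx] = vec[0]
    return full
```

With all ranks equal to 1 the TT format reduces to an outer product, so each entry is just the product of one number per core.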
rai Self-paced Multi-view Co-training By Published On :: 2020 Co-training is a well-known semi-supervised learning approach which trains classifiers on two or more different views and exchanges pseudo labels of unlabeled instances in an iterative way. During the co-training process, pseudo labels of unlabeled instances are very likely to be false, especially early in training, yet the standard co-training algorithm adopts a 'draw without replacement' strategy and never removes these wrongly labeled instances from later training stages. Besides, most traditional co-training approaches are designed for the two-view case, and their extensions to multi-view scenarios are not intuitive. These issues not only degrade performance and limit the range of application but also weaken the underlying theory. Moreover, there is no optimization model to explain the objective a co-training process manages to optimize. To address these issues, in this study we design a unified self-paced multi-view co-training (SPamCo) framework which draws unlabeled instances with replacement. Two specified co-regularization terms are formulated to develop different strategies for selecting pseudo-labeled instances during training. Both forms share the same optimization strategy which is consistent with the iteration process in co-training and can be naturally extended to multi-view scenarios. A distributed optimization strategy is also introduced to train the classifier of each view in parallel to further improve the efficiency of the algorithm. Furthermore, the SPamCo algorithm is proved to be PAC learnable, supporting its theoretical soundness. Experiments conducted on synthetic, text categorization, person re-identification, image recognition and object detection data sets substantiate the superiority of the proposed method. Full Article
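The "draw with replacement" selection can be sketched with a toy nearest-centroid classifier: every round, all unlabeled points are re-scored and the most confident pseudo-labels are re-selected, so a mistake made early can be revised later. Everything here (1-D data, margin-based confidence, the per-round budget k) is an illustrative assumption, not the SPamCo formulation.

```python
def centroids(labeled):
    """labeled: list of (x, y) pairs, y in {0, 1}; returns the class means."""
    return {cls: sum(x for x, y in labeled if y == cls) /
                 sum(1 for _, y in labeled if y == cls)
            for cls in (0, 1)}

def self_paced_pseudo_label(labeled, unlabeled, rounds=3, k=2):
    """Each round, re-fit the centroids, pseudo-label ALL unlabeled points
    (draw *with* replacement, so earlier pseudo-labels can be revised) and
    keep only the most confident picks for the next fit."""
    pseudo = []
    for _ in range(rounds):
        m = centroids(labeled + pseudo)
        scored = []
        for x in unlabeled:
            d0, d1 = abs(x - m[0]), abs(x - m[1])
            scored.append((abs(d0 - d1), x, 0 if d0 < d1 else 1))
        scored.sort(reverse=True)         # margin between classes = confidence
        pseudo = [(x, y) for _, x, y in scored[:2 * k]]
    return pseudo
```

Points near the decision boundary keep a small margin in every round and are never selected, which is the self-paced part of the scheme.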
rai Portraits of women in the collection By feedproxy.google.com Published On :: Thu, 20 Feb 2020 00:02:06 +0000 This NSW Women's Week (2–8 March) we're showcasing portraits and stories of 10 significant women from the Lib Full Article
rai An estimation method for latent traits and population parameters in Nominal Response Model By projecteuclid.org Published On :: Thu, 05 Aug 2010 15:41 EDT Caio L. N. Azevedo, Dalton F. Andrade. Source: Braz. J. Probab. Stat., Volume 24, Number 3, 415--433. Abstract: The nominal response model (NRM) was proposed by Bock [ Psychometrika 37 (1972) 29–51] in order to improve the latent trait (ability) estimation in multiple choice tests with nominal items. When the item parameters are known, expectation a posteriori or maximum a posteriori methods are commonly employed to estimate the latent traits, considering a standard symmetric normal distribution as the latent traits prior density. However, when this item set is presented to a new group of examinees, it is not only necessary to estimate their latent traits but also the population parameters of this group. This article has two main purposes: first, to develop a Markov chain Monte Carlo (MCMC) algorithm to estimate both latent traits and population parameters concurrently. This algorithm builds on the Metropolis–Hastings within Gibbs sampling (MHWGS) algorithm proposed by Patz and Junker [ Journal of Educational and Behavioral Statistics 24 (1999b) 346–366]. Second, to compare the performance of this method in recovering the latent traits with that of three other methods: maximum likelihood, expectation a posteriori and maximum a posteriori. The comparisons were performed by varying the total number of items (NI), the number of categories and the values of the mean and the variance of the latent trait distribution. The results showed that MHWGS outperforms the other methods in estimating the latent traits and properly recovers the population parameters. Furthermore, we found that NI accounts for the highest percentage of the variability in the accuracy of latent trait estimation. Full Article
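The Metropolis–Hastings step inside such a sampler can be reduced to its essentials. The sketch below runs random-walk Metropolis on a standard normal target as a stand-in for one coordinate update of an MHWGS sampler; the target, step size, and seed are illustrative assumptions, not the item-response model of the paper.

```python
import math
import random

def metropolis_normal(n=20000, step=1.0, seed=0):
    """Random-walk Metropolis targeting N(0, 1): a minimal stand-in for one
    coordinate update of a Metropolis-Hastings-within-Gibbs sampler."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n):
        prop = x + rng.uniform(-step, step)
        # log acceptance ratio for the standard normal target
        if math.log(rng.random()) < (x * x - prop * prop) / 2.0:
            x = prop
        samples.append(x)
    return samples
```

With a symmetric proposal the Hastings correction cancels, so only the target log-density ratio appears in the acceptance test.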
rai Analyzing complex functional brain networks: Fusing statistics and network science to understand the brain By projecteuclid.org Published On :: Mon, 28 Oct 2013 09:06 EDT Sean L. Simpson, F. DuBois Bowman, Paul J. Laurienti. Source: Statist. Surv., Volume 7, 1--36. Abstract: Complex functional brain network analyses have exploded over the last decade, gaining traction due to their profound clinical implications. The application of network science (an interdisciplinary offshoot of graph theory) has facilitated these analyses and enabled examining the brain as an integrated system that produces complex behaviors. While the field of statistics has been integral in advancing activation analyses and some connectivity analyses in functional neuroimaging research, it has yet to play a commensurate role in complex network analyses. Fusing novel statistical methods with network-based functional neuroimage analysis will engender powerful analytical tools that will aid in our understanding of normal brain function as well as alterations due to various brain disorders. Here we survey widely used statistical and network science tools for analyzing fMRI network data and discuss the challenges faced in filling some of the remaining methodological gaps. When applied and interpreted correctly, the fusion of network scientific and statistical methods has a chance to revolutionize the understanding of brain function. Full Article
rai Unsupervised Pre-trained Models from Healthy ADLs Improve Parkinson's Disease Classification of Gait Patterns. (arXiv:2005.02589v2 [cs.LG] UPDATED) By arxiv.org Published On :: Application and use of deep learning algorithms for different healthcare applications is gaining interest at a steady pace. However, use of such algorithms can prove to be challenging as they require large amounts of training data that capture different possible variations. This makes it difficult to use them in a clinical setting since in most health applications researchers often have to work with limited data. Less data can cause the deep learning model to overfit. In this paper, we ask how we can use data from a different environment and a different use case, with widely differing data distributions. We exemplify this use case by using single-sensor accelerometer data from healthy subjects performing activities of daily living - ADLs (source dataset), to extract features relevant to multi-sensor accelerometer gait data (target dataset) for Parkinson's disease classification. We train the pre-trained model using the source dataset and use it as a feature extractor. We show that the features extracted for the target dataset can be used to train an effective classification model. Our pre-trained source model consists of a convolutional autoencoder, and the target classification model is a simple multi-layer perceptron model. We explore two different pre-trained source models, trained using different activity groups, and analyze the influence the choice of pre-trained model has over the task of Parkinson's disease classification. Full Article
rai How many modes can a constrained Gaussian mixture have?. (arXiv:2005.01580v2 [math.ST] UPDATED) By arxiv.org Published On :: We show, by an explicit construction, that a mixture of univariate Gaussians with variance 1 and means in $[-A,A]$ can have $\Omega(A^2)$ modes. This disproves a recent conjecture of Dytso, Yagli, Poor and Shamai [IEEE Trans. Inform. Theory, Apr. 2020], who showed that such a mixture can have at most $O(A^2)$ modes and surmised that the upper bound could be improved to $O(A)$. Our result holds even if an additional variance constraint is imposed on the mixing distribution. Extending the result to higher dimensions, we exhibit a mixture of Gaussians in $\mathbb{R}^d$, with identity covariances and means inside $[-A,A]^d$, that has $\Omega(A^{2d})$ modes. Full Article
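The mode-counting question is easy to explore numerically: place unit-variance Gaussians at chosen means and count strict local maxima of the mixture density on a fine grid. A minimal sketch (the grid range and step are assumptions; this checks small instances, not the paper's asymptotic construction):

```python
import math

def mixture_density(x, means, sd=1.0):
    """Unnormalized density of an equal-weight Gaussian mixture."""
    return sum(math.exp(-((x - m) ** 2) / (2.0 * sd * sd)) for m in means)

def count_modes(means, lo=-10.0, hi=10.0, step=0.01):
    """Count strict local maxima of the mixture density on a fine grid."""
    xs = [lo + i * step for i in range(int(round((hi - lo) / step)) + 1)]
    ys = [mixture_density(x, means) for x in xs]
    return sum(1 for i in range(1, len(ys) - 1)
               if ys[i] > ys[i - 1] and ys[i] > ys[i + 1])
```

With well-separated means such as -4, 0, 4 the count equals the number of components, while two means only 0.5 apart merge into a single mode (two equal unit-variance Gaussians are bimodal only when the means are more than 2 standard deviations apart).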
rai Mnemonics Training: Multi-Class Incremental Learning without Forgetting. (arXiv:2002.10211v3 [cs.CV] UPDATED) By arxiv.org Published On :: Multi-Class Incremental Learning (MCIL) aims to learn new concepts by incrementally updating a model trained on previous concepts. However, there is an inherent trade-off to effectively learning new concepts without catastrophic forgetting of previous ones. To alleviate this issue, it has been proposed to keep around a few examples of the previous concepts but the effectiveness of this approach heavily depends on the representativeness of these examples. This paper proposes a novel and automatic framework we call mnemonics, where we parameterize exemplars and make them optimizable in an end-to-end manner. We train the framework through bilevel optimizations, i.e., model-level and exemplar-level. We conduct extensive experiments on three MCIL benchmarks, CIFAR-100, ImageNet-Subset and ImageNet, and show that using mnemonics exemplars can surpass the state-of-the-art by a large margin. Interestingly and quite intriguingly, the mnemonics exemplars tend to be on the boundaries between different classes. Full Article
rai Reducing Communication in Graph Neural Network Training. (arXiv:2005.03300v1 [cs.LG]) By arxiv.org Published On :: Graph Neural Networks (GNNs) are powerful and flexible neural networks that use the naturally sparse connectivity information of the data. GNNs represent this connectivity as sparse matrices, which have lower arithmetic intensity and thus higher communication costs compared to dense matrices, making GNNs harder to scale to high concurrencies than convolutional or fully-connected neural networks. We present a family of parallel algorithms for training GNNs. These algorithms are based on their counterparts in dense and sparse linear algebra, but they had not been previously applied to GNN training. We show that they can asymptotically reduce communication compared to existing parallel GNN training methods. We implement a promising and practical version that is based on 2D sparse-dense matrix multiplication using torch.distributed. Our implementation parallelizes over GPU-equipped clusters. We train GNNs on up to a hundred GPUs on datasets that include a protein network with over a billion edges. Full Article
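The local kernel each worker executes in such 2D-partitioned GNN training is a sparse-times-dense matrix multiplication. The dictionary-of-keys sketch below shows only that serial kernel in pure Python; the 2D partitioning, communication, and torch.distributed plumbing of the paper are omitted.

```python
def spmm(sparse, dense, n_rows):
    """Serial sparse-times-dense product: `sparse` is a dict {(i, k): value}
    (dictionary-of-keys layout), `dense` a list of rows. This is the local
    kernel each worker would run on its tile of the matrices."""
    n_cols = len(dense[0])
    out = [[0.0] * n_cols for _ in range(n_rows)]
    for (i, k), v in sparse.items():
        row_k, row_i = dense[k], out[i]
        for j in range(n_cols):
            row_i[j] += v * row_k[j]
    return out
```

Iterating only over stored nonzeros is what gives sparse kernels their low arithmetic intensity relative to dense ones, which is exactly the communication pressure the paper addresses.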
rai An Empirical Study of Incremental Learning in Neural Network with Noisy Training Set. (arXiv:2005.03266v1 [cs.LG]) By arxiv.org Published On :: The notion of incremental learning is to train an ANN algorithm in stages, as and when newer training data arrives. Incremental learning is becoming widespread in recent times with the advent of deep learning. Noise in the training data reduces the accuracy of the algorithm. In this paper, we make an empirical study of the effect of noise in the training phase. We numerically show that the accuracy of the algorithm depends more on the location of the error than on the percentage of error. Using Perceptron, Feed Forward Neural Network and Radial Basis Function Neural Network, we show that for the same percentage of error, the accuracy of the algorithm significantly varies with the location of error. Furthermore, our results show that the dependence of the accuracy on the location of error is independent of the algorithm. However, the slope of the degradation curve decreases with more sophisticated algorithms. Full Article
rai Training and Classification using a Restricted Boltzmann Machine on the D-Wave 2000Q. (arXiv:2005.03247v1 [cs.LG]) By arxiv.org Published On :: Restricted Boltzmann Machine (RBM) is an energy based, undirected graphical model. It is commonly used for unsupervised and supervised machine learning. Typically, RBM is trained using contrastive divergence (CD). However, training with CD is slow and does not estimate exact gradient of log-likelihood cost function. In this work, the model expectation of gradient learning for RBM has been calculated using a quantum annealer (D-Wave 2000Q), which is much faster than Markov chain Monte Carlo (MCMC) used in CD. Training and classification results are compared with CD. The classification accuracy results indicate similar performance of both methods. Image reconstruction as well as log-likelihood calculations are used to compare the performance of quantum and classical algorithms for RBM training. It is shown that the samples obtained from quantum annealer can be used to train an RBM on a 64-bit `bars and stripes' data set with classification performance similar to an RBM trained with CD. Though training based on CD showed improved learning performance, training using a quantum annealer eliminates computationally expensive MCMC steps of CD. Full Article
rai Tumor microenvironments in organs : from the brain to the skin. By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: OnlineISBN: 9783030362140 (electronic bk.) Full Article
rai Mental Conditioning to Perform Common Operations in General Surgery Training By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: OnlineISBN: 9783319911649 978-3-319-91164-9 Full Article
rai Frailty and cardiovascular diseases : research into an elderly population By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: OnlineISBN: 9783030333300 (electronic bk.) Full Article
rai Quantile regression under memory constraint By projecteuclid.org Published On :: Wed, 30 Oct 2019 22:03 EDT Xi Chen, Weidong Liu, Yichen Zhang. Source: The Annals of Statistics, Volume 47, Number 6, 3244--3273. Abstract: This paper studies the inference problem in quantile regression (QR) for a large sample size $n$ but under a limited memory constraint, where the memory can only store a small batch of data of size $m$. A natural method is the naive divide-and-conquer approach, which splits data into batches of size $m$, computes the local QR estimator for each batch and then aggregates the estimators via averaging. However, this method only works when $n=o(m^{2})$ and is computationally expensive. This paper proposes a computationally efficient method, which only requires an initial QR estimator on a small batch of data and then successively refines the estimator via multiple rounds of aggregations. Theoretically, as long as $n$ grows polynomially in $m$, we establish the asymptotic normality for the obtained estimator and show that our estimator with only a few rounds of aggregations achieves the same efficiency as the QR estimator computed on all the data. Moreover, our result allows the case that the dimensionality $p$ goes to infinity. The proposed method can also be applied to address the QR problem under distributed computing environment (e.g., in a large-scale sensor network) or for real-time streaming data. Full Article
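The naive divide-and-conquer baseline the paper improves on is simple to state: compute the quantile estimate on each size-m batch and average the local estimates. A median-only sketch (the batch size and synthetic data are illustrative assumptions):

```python
import random
import statistics

def dc_median(data, m):
    """Naive divide-and-conquer median: split the data into batches of size
    m, take each batch's sample median, then average the local estimates."""
    batches = [data[i:i + m] for i in range(0, len(data), m)]
    return statistics.mean(statistics.median(b) for b in batches)
```

Each pass touches only one batch at a time, so the memory footprint is O(m) rather than O(n); the paper's refinement rounds recover full efficiency where this one-shot average does not.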
rai Estimating causal effects in studies of human brain function: New models, methods and estimands By projecteuclid.org Published On :: Wed, 15 Apr 2020 22:05 EDT Michael E. Sobel, Martin A. Lindquist. Source: The Annals of Applied Statistics, Volume 14, Number 1, 452--472. Abstract: Neuroscientists often use functional magnetic resonance imaging (fMRI) to infer effects of treatments on neural activity in brain regions. In a typical fMRI experiment, each subject is observed at several hundred time points. At each point, the blood oxygenation level dependent (BOLD) response is measured at 100,000 or more locations (voxels). Typically, these responses are modeled treating each voxel separately, and no rationale for interpreting associations as effects is given. Building on Sobel and Lindquist (J. Amer. Statist. Assoc. 109 (2014) 967–976), who used potential outcomes to define unit and average effects at each voxel and time point, we define and estimate both “point” and “cumulated” effects for brain regions. Second, we construct a multisubject, multivoxel, multirun whole brain causal model with explicit parameters for regions. We justify estimation using BOLD responses averaged over voxels within regions, making feasible estimation for all regions simultaneously, thereby also facilitating inferences about association between effects in different regions. We apply the model to a study of pain, finding effects in standard pain regions. We also observe more cerebellar activity than observed in previous studies using prevailing methods. Full Article
rai Network classification with applications to brain connectomics By projecteuclid.org Published On :: Wed, 16 Oct 2019 22:03 EDT Jesús D. Arroyo Relión, Daniel Kessler, Elizaveta Levina, Stephan F. Taylor. Source: The Annals of Applied Statistics, Volume 13, Number 3, 1648--1677. Abstract: While statistical analysis of a single network has received a lot of attention in recent years, with a focus on social networks, analysis of a sample of networks presents its own challenges which require a different set of analytic tools. Here we study the problem of classification of networks with labeled nodes, motivated by applications in neuroimaging. Brain networks are constructed from imaging data to represent functional connectivity between regions of the brain, and previous work has shown the potential of such networks to distinguish between various brain disorders, giving rise to a network classification problem. Existing approaches tend to either treat all edge weights as a long vector, ignoring the network structure, or focus on graph topology as represented by summary measures while ignoring the edge weights. Our goal is to design a classification method that uses both the individual edge information and the network structure of the data in a computationally efficient way, and that can produce a parsimonious and interpretable representation of differences in brain connectivity patterns between classes. We propose a graph classification method that uses edge weights as predictors but incorporates the network nature of the data via penalties that promote sparsity in the number of nodes, in addition to the usual sparsity penalties that encourage selection of edges. We implement the method via efficient convex optimization and provide a detailed analysis of data from two fMRI studies of schizophrenia. Full Article
rai Distributional regression forests for probabilistic precipitation forecasting in complex terrain By projecteuclid.org Published On :: Wed, 16 Oct 2019 22:03 EDT Lisa Schlosser, Torsten Hothorn, Reto Stauffer, Achim Zeileis. Source: The Annals of Applied Statistics, Volume 13, Number 3, 1564--1589. Abstract: To obtain a probabilistic model for a dependent variable based on some set of explanatory variables, a distributional approach is often adopted where the parameters of the distribution are linked to regressors. In many classical models this only captures the location of the distribution but over the last decade there has been increasing interest in distributional regression approaches modeling all parameters including location, scale and shape. Notably, so-called nonhomogeneous Gaussian regression (NGR) models both mean and variance of a Gaussian response and is particularly popular in weather forecasting. Moreover, generalized additive models for location, scale and shape (GAMLSS) provide a framework where each distribution parameter is modeled separately capturing smooth linear or nonlinear effects. However, when variable selection is required and/or there are nonsmooth dependencies or interactions (especially unknown or of high-order), it is challenging to establish a good GAMLSS. A natural alternative in these situations would be the application of regression trees or random forests but, so far, no general distributional framework is available for these. Therefore, a framework for distributional regression trees and forests is proposed that blends regression trees and random forests with classical distributions from the GAMLSS framework as well as their censored or truncated counterparts. To illustrate these novel approaches in practice, they are employed to obtain probabilistic precipitation forecasts at numerous sites in a mountainous region (Tyrol, Austria) based on a large number of numerical weather prediction quantities. 
It is shown that the novel distributional regression forests automatically select variables and interactions, performing on par or often even better than GAMLSS specified either through prior meteorological knowledge or a computationally more demanding boosting approach. Full Article
rai Train kills 15 migrant workers walking home in India By news.yahoo.com Published On :: Fri, 08 May 2020 00:56:54 -0400 A train in India on Friday plowed through a group of migrant workers who fell asleep on the tracks after walking back home from a coronavirus lockdown, killing 15, the Railways Ministry said. Early this week the government started running trains to carry stranded workers to their home states. Full Article
rai Constrained Bayesian Optimization with Noisy Experiments By projecteuclid.org Published On :: Wed, 13 Mar 2019 22:00 EDT Benjamin Letham, Brian Karrer, Guilherme Ottoni, Eytan Bakshy. Source: Bayesian Analysis, Volume 14, Number 2, 495--519. Abstract: Randomized experiments are the gold standard for evaluating the effects of changes to real-world systems. Data in these tests may be difficult to collect and outcomes may have high variance, resulting in potentially large measurement error. Bayesian optimization is a promising technique for efficiently optimizing multiple continuous parameters, but existing approaches degrade in performance when the noise level is high, limiting its applicability to many randomized experiments. We derive an expression for expected improvement under greedy batch optimization with noisy observations and noisy constraints, and develop a quasi-Monte Carlo approximation that allows it to be efficiently optimized. Simulations with synthetic functions show that optimization performance on noisy, constrained problems outperforms existing methods. We further demonstrate the effectiveness of the method with two real-world experiments conducted at Facebook: optimizing a ranking system, and optimizing server compiler flags. Full Article
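The noiseless expected-improvement criterion that this work generalizes has a well-known closed form for a Gaussian posterior; the noisy, batched, constrained version in the paper replaces it with a quasi-Monte Carlo approximation. A sketch of the noiseless baseline only (the minimization convention is an assumption):

```python
import math

def norm_pdf(z):
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best):
    """Closed-form EI (minimization) of a Gaussian posterior N(mu, sigma^2)
    over the incumbent value `best`: the noiseless baseline that the noisy,
    batched version approximates with quasi-Monte Carlo."""
    if sigma <= 0.0:
        return max(0.0, best - mu)
    z = (best - mu) / sigma
    return (best - mu) * norm_cdf(z) + sigma * norm_pdf(z)
```

EI is always nonnegative and shrinks as the posterior mean moves above the incumbent, which is why high observation noise (inflating uncertainty in `best` itself) breaks the plain criterion.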
rai Allometric Analysis Detects Brain Size-Independent Effects of Sex and Sex Chromosome Complement on Human Cerebellar Organization By www.jneurosci.org Published On :: 2017-05-24 Catherine MankiwMay 24, 2017; 37:5221-5231Development Plasticity Repair Full Article
rai Gut Microbes and the Brain: Paradigm Shift in Neuroscience By www.jneurosci.org Published On :: 2014-11-12 Emeran A. MayerNov 12, 2014; 34:15490-15496Symposium Full Article
rai Brain-Derived Neurotrophic Factor Protection of Cortical Neurons from Serum Withdrawal-Induced Apoptosis Is Inhibited by cAMP By www.jneurosci.org Published On :: 2003-06-01 Steven PoserJun 1, 2003; 23:4420-4427Cellular Full Article
rai Advances in Enteric Neurobiology: The "Brain" in the Gut in Health and Disease By www.jneurosci.org Published On :: 2018-10-31 Subhash KulkarniOct 31, 2018; 38:9346-9354Symposium and Mini-Symposium Full Article
rai Memory and Brain Systems: 1969-2009 By www.jneurosci.org Published On :: 2009-10-14 Larry R. SquireOct 14, 2009; 29:12711-1271640th Anniversary Retrospective Full Article
rai The Pain of Sleep Loss: A Brain Characterization in Humans By www.jneurosci.org Published On :: 2019-03-20 Adam J. KrauseMar 20, 2019; 39:2291-2300BehavioralSystemsCognitive Full Article
rai Daily Marijuana Use Is Not Associated with Brain Morphometric Measures in Adolescents or Adults By www.jneurosci.org Published On :: 2015-01-28 Barbara J. WeilandJan 28, 2015; 35:1505-1512Neurobiology of Disease Full Article
rai The Effect of Body Posture on Brain Glymphatic Transport By www.jneurosci.org Published On :: 2015-08-05 Hedok LeeAug 5, 2015; 35:11034-11044Neurobiology of Disease Full Article
rai Dural Calcitonin Gene-Related Peptide Produces Female-Specific Responses in Rodent Migraine Models By www.jneurosci.org Published On :: 2019-05-29 Amanda AvonaMay 29, 2019; 39:4323-4331Systems/Circuits Full Article
rai A Transcriptome Database for Astrocytes, Neurons, and Oligodendrocytes: A New Resource for Understanding Brain Development and Function By www.jneurosci.org Published On :: 2008-01-02 John D. CahoyJan 2, 2008; 28:264-278Cellular Full Article
rai Nasal Respiration Entrains Human Limbic Oscillations and Modulates Cognitive Function By www.jneurosci.org Published On :: 2016-12-07 Christina ZelanoDec 7, 2016; 36:12448-12467Systems/Circuits Full Article
rai Brain Structures Differ between Musicians and Non-Musicians By www.jneurosci.org Published On :: 2003-10-08 Christian GaserOct 8, 2003; 23:9240-9245BehavioralSystemsCognitive Full Article
rai Endothelial Adora2a Activation Promotes Blood-Brain Barrier Breakdown and Cognitive Impairment in Mice with Diet-Induced Insulin Resistance By www.jneurosci.org Published On :: 2019-05-22 Masaki YamamotoMay 22, 2019; 39:4179-4192Neurobiology of Disease Full Article
rai Brain Activation during Human Male Ejaculation By www.jneurosci.org Published On :: 2003-10-08 Gert HolstegeOct 8, 2003; 23:9185-9193BehavioralSystemsCognitive Full Article