rain R.I. Education Commissioner Diagnosed with Brain Tumor By feedproxy.google.com Published On :: Thu, 23 Aug 2012 00:00:00 +0000 From guest blogger Kimberly Shannon Rhode Island Education Commissioner Deborah Gist has been diagnosed with a brain tumor and will undergo surgery in September, according to the Associated Press. She is expected to have a full recovery, but will be working a limited schedule until her operation. Full Article Rhode_Island
rain Dual-Language Learning: Making Teacher and Principal Training a Priority By feedproxy.google.com Published On :: Mon, 24 Sep 2018 00:00:00 +0000 In this seventh installment on the growth in dual-language learning, two experts from Delaware explore how state education leaders can build capacity to support both students and educators. Full Article Delaware
rain Building confidence in enrolling learners with disability for providers of education and training / ACPET, NDCO. By www.catalog.slsa.sa.gov.au Published On :: Full Article
rain Employment law constraints on the implementation of redundancies / presented by Kaye Smith, EMA Legal. By www.catalog.slsa.sa.gov.au Published On :: Full Article
rain Ghost train walk : in the Vales / by Grandmother Christine. By www.catalog.slsa.sa.gov.au Published On :: Two quick bursts of the Ghost Train's whistle signal that the time is midnight, but however quickly we rush out of the front door, we have never been able to catch a glimpse of that train, till now! Grammy's tales are being shared through her granddaughter, but are they tales or real events and people, hmm? Read on to find out for yourself. He he. A fictional children's bedtime story set in the Vales. The history and characters of this rolling countryside between sea and hills have been packaged up to be fun, informative and thought-provoking. Full Article
rain CTP update - Traumatic Brain Injury (TBI Assessment). By www.catalog.slsa.sa.gov.au Published On :: Full Article
rain Rewording the brain : how cryptic crosswords can improve your memory and boost the power and agility of your brain / David Astle. By www.catalog.slsa.sa.gov.au Published On :: Memory. Full Article
rain The gendered brain : the new neuroscience that shatters the myth of the female brain / Gina Rippon. By www.catalog.slsa.sa.gov.au Published On :: Neuropsychology. Full Article
rain The nocturnal brain : nightmares, neuroscience and the secret world of sleep / Guy Leschziner. By www.catalog.slsa.sa.gov.au Published On :: Sleep deprivation -- Anecdotes. Full Article
rain Project Rainfall : the secret history of Pine Gap / Tom Gilling. By www.catalog.slsa.sa.gov.au Published On :: United States. Central Intelligence Agency -- History. Full Article
rain 2019 public library training wrap-up By feedproxy.google.com Published On :: Fri, 15 Nov 2019 05:02:00 +0000 It's been another busy year of training, with nearly 30 training sessions delivered to over 300 public library staff. Full Article
rain Die Drainirung der Peritonealhöhle : chirurgische Studien, nebst einem Bericht über sieben Nierenexstirpationen / von Dr. Bardenheuer. By feedproxy.google.com Published On :: Stuttgart : F. Enke, 1881. Full Article
rain Domestic sanitary drainage and plumbing : lectures on practical sanitation delivered to plumbers, engineers, and others in the Central Technical Institution, South Kensington, London ... / by William R. Maguire. By feedproxy.google.com Published On :: London : Kegan Paul, Trench, Trübner, 1890. Full Article
rain Du curettage de l'utérus sans abaissement forcé à la vulve et d'une méthode de drainage utérin au moyen du crin de Florence / par Prosper Bouteil. By feedproxy.google.com Published On :: Paris : Société d’éditions scientifiques, 1893. Full Article
rain Electrical and anatomical demonstrations : delivered at the School of Massage and Electricity, in connection with the West-End Hospital for Diseases of the Nervous System, Paralysis and Epilepsy, Welbeck Street, London. A handbook for trained nurses and m By feedproxy.google.com Published On :: London : J. & A. Churchill, 1887. Full Article
rain An elementary description of the anatomy and physiology of the brain, viscera of the thorax, abdomen, &c. ... / by W. Simpson. By feedproxy.google.com Published On :: London, 1826. Full Article
rain Florida Mandates Mental Health Training for Students in Grades 6-12 By feedproxy.google.com Published On :: Thu, 18 Jul 2019 00:00:00 +0000 Under a mandate approved by the State Board of Education, public schools in Florida will have to provide students with at least five hours of mental health instruction starting in sixth grade. Full Article Florida
rain A castle (the Castello Odescalchi di Bracciano?), with a flock of sheep attended by a shepherd. Etching and mezzotint by L. Marvy after Claude Lorraine. By feedproxy.google.com Published On :: [Paris] : Calcographie du Louvre, Musées Imperiaux, [1849?] Full Article
rain For Educators Vying for State Office, Teachers' Union Offers 'Soup to Nuts' Campaign Training By feedproxy.google.com Published On :: Wed, 05 Sep 2018 00:00:00 +0000 In the aftermath of this spring's teacher protests, more educators are running for state office—and the National Education Association is seizing on the political moment. Full Article Illinois
rain Call for Racial Equity Training Leads to Threats to Superintendent, Resistance from Community By feedproxy.google.com Published On :: Thu, 20 Jun 2019 00:00:00 +0000 Controversy over an initiative aimed at reducing inequities in Lee's Summit, Mo., schools led the police department to provide security protection for the district's first African-American superintendent. Now the school board has reversed course. Full Article Missouri
rain How does my brain work? / by Stacey A. Bedwell. By search.wellcomelibrary.org Published On :: [Poland] : [Stacey A. Bedwell], 2016. Full Article
rain Daddy Dog and Papi Panda's rainbow family. By search.wellcomelibrary.org Published On :: Middletown, DE : Amazon, 2018. Full Article
rain Opiate receptor subtypes and brain function / editors, Roger M. Brown, Doris H. Clouet, David P. Friedman. By search.wellcomelibrary.org Published On :: Rockville, Maryland : National Institute on Drug Abuse, 1986. Full Article
rain Addict aftercare : recovery training and self-help / Fred Zackon, William E. McAuliffe, James M.N. Ch'ien. By search.wellcomelibrary.org Published On :: Full Article
rain Brain on meds. By search.wellcomelibrary.org Published On :: [London] : [publisher not identified], [2019] Full Article
rain A Low Complexity Algorithm with O(√T) Regret and O(1) Constraint Violations for Online Convex Optimization with Long Term Constraints By Published On :: 2020 This paper considers online convex optimization over a complicated constraint set, which typically consists of multiple functional constraints and a set constraint. The conventional online projection algorithm (Zinkevich, 2003) can be difficult to implement due to the potentially high computation complexity of the projection operation. In this paper, we relax the functional constraints by allowing them to be violated at each round but still requiring them to be satisfied in the long term. This type of relaxed online convex optimization (with long term constraints) was first considered in Mahdavi et al. (2012). That prior work proposes an algorithm to achieve $O(\sqrt{T})$ regret and $O(T^{3/4})$ constraint violations for general problems and another algorithm to achieve an $O(T^{2/3})$ bound for both regret and constraint violations when the constraint set can be described by a finite number of linear constraints. A recent extension in Jenatton et al. (2016) can achieve $O(T^{\max\{\theta,1-\theta\}})$ regret and $O(T^{1-\theta/2})$ constraint violations where $\theta \in (0,1)$. The current paper proposes a new simple algorithm that yields improved performance in comparison to prior works. The new algorithm achieves an $O(\sqrt{T})$ regret bound with $O(1)$ constraint violations. Full Article
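The relaxed long-term-constraint idea above can be illustrated with a minimal sketch: a scalar online problem in which a virtual queue accumulates constraint violations and penalizes the gradient step. The losses, constraint, step size and queue update below are hypothetical illustrative choices, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
x = 0.0      # scalar decision variable, kept in the set [-10, 10]
Q = 0.0      # virtual queue tracking accumulated constraint violation
eta = 0.05   # step size (an untuned, illustrative choice)

def g(x):
    return x - 1.0   # long-term constraint: g(x) <= 0 on average

total_violation = 0.0
for t in range(T):
    c = rng.uniform(0.5, 2.5)          # round-t loss is f_t(x) = (x - c)^2
    grad = 2.0 * (x - c) + Q * 1.0     # loss gradient plus queue-weighted grad g
    x = float(np.clip(x - eta * grad, -10.0, 10.0))
    Q = max(0.0, Q + g(x))             # queue grows on violation, drains otherwise
    total_violation += g(x)

avg_violation = total_violation / T
```

The unconstrained minimizer of the average loss sits near 1.5, which violates g; the queue pushes the iterate back so that the average violation stays near zero.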
rain A Unified Framework for Structured Graph Learning via Spectral Constraints By Published On :: 2020 Graph learning from data is a canonical problem that has received substantial attention in the literature. Learning a structured graph is essential for interpretability and identification of the relationships among data. In general, learning a graph with a specific structure is an NP-hard combinatorial problem and thus designing a general tractable algorithm is challenging. Some useful structured graphs include connected, sparse, multi-component, bipartite, and regular graphs. In this paper, we introduce a unified framework for structured graph learning that combines Gaussian graphical model and spectral graph theory. We propose to convert combinatorial structural constraints into spectral constraints on graph matrices and develop an optimization framework based on block majorization-minimization to solve structured graph learning problem. The proposed algorithms are provably convergent and practically amenable for a number of graph based applications such as data clustering. Extensive numerical experiments with both synthetic and real data sets illustrate the effectiveness of the proposed algorithms. An open source R package containing the code for all the experiments is available at https://CRAN.R-project.org/package=spectralGraphTopology. Full Article
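The spectral-constraint idea rests on a standard fact the framework exploits: a graph has k connected components exactly when its Laplacian has k zero eigenvalues. A small sketch of that correspondence in plain NumPy (not the spectralGraphTopology package):

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian L = D - W for a symmetric weight matrix."""
    return np.diag(W.sum(axis=1)) - W

def num_components(W, tol=1e-8):
    """Count connected components as the multiplicity of the zero eigenvalue."""
    eigvals = np.linalg.eigvalsh(laplacian(W))
    return int(np.sum(eigvals < tol))

# Two disjoint triangles form a 2-component graph.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0

k = num_components(W)
```

Imposing a multi-component structure then amounts to constraining the Laplacian spectrum to contain the required number of zeros, which is exactly the kind of spectral constraint the framework converts the combinatorial problem into.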
rain Tensor Train Decomposition on TensorFlow (T3F) By Published On :: 2020 Tensor Train decomposition is used across many branches of machine learning. We present T3F—a library for Tensor Train decomposition based on TensorFlow. T3F supports GPU execution, batch processing, automatic differentiation, and versatile functionality for the Riemannian optimization framework, which takes into account the underlying manifold structure to construct efficient optimization methods. The library makes it easier to implement machine learning papers that rely on the Tensor Train decomposition. T3F includes documentation, examples and 94% test coverage. Full Article
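T3F itself is TensorFlow-based, but the underlying TT-SVD sweep can be sketched in a few lines of plain NumPy; the function names below are hypothetical, not T3F's API.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Sketch of TT-SVD: factor a d-way tensor into a Tensor Train by a
    left-to-right sweep of truncated SVDs; returns the list of 3-way cores."""
    shape = tensor.shape
    cores, r, mat = [], 1, tensor
    for n in shape[:-1]:
        mat = mat.reshape(r * n, -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        rank = min(max_rank, len(S))
        cores.append(U[:, :rank].reshape(r, n, rank))
        mat = S[:rank, None] * Vt[:rank]   # carry the remainder to the next core
        r = rank
    cores.append(mat.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT-cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([out.ndim - 1], [0]))
    return out[0, ..., 0]   # drop the boundary rank-1 dimensions

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 5, 6))
cores = tt_svd(X, max_rank=30)   # ranks large enough for exact recovery
err = np.linalg.norm(tt_to_full(cores) - X) / np.linalg.norm(X)
```

With unconstrained ranks the decomposition is exact; truncating `max_rank` trades accuracy for the compression that makes TT useful in machine learning models.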
rain Self-paced Multi-view Co-training By Published On :: 2020 Co-training is a well-known semi-supervised learning approach which trains classifiers on two or more different views and exchanges pseudo labels of unlabeled instances in an iterative way. During the co-training process, pseudo labels of unlabeled instances are very likely to be false, especially in the initial training, yet the standard co-training algorithm adopts a 'draw without replacement' strategy and does not remove these wrongly labeled instances from later training stages. Besides, most traditional co-training approaches are implemented for two-view cases, and their extensions to multi-view scenarios are not intuitive. These issues not only degrade their performance and narrow their range of application but also undermine their theoretical foundations. Moreover, there is no optimization model that explains the objective a co-training process manages to optimize. To address these issues, in this study we design a unified self-paced multi-view co-training (SPamCo) framework which draws unlabeled instances with replacement. Two specified co-regularization terms are formulated to develop different strategies for selecting pseudo-labeled instances during training. Both forms share the same optimization strategy, which is consistent with the iteration process in co-training and can be naturally extended to multi-view scenarios. A distributed optimization strategy is also introduced to train the classifier of each view in parallel to further improve the efficiency of the algorithm. Furthermore, the SPamCo algorithm is proved to be PAC learnable, supporting its theoretical soundness. Experiments conducted on synthetic, text categorization, person re-identification, image recognition and object detection data sets substantiate the superiority of the proposed method. Full Article
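The 'draw with replacement' selection that distinguishes SPamCo from standard co-training can be sketched with a toy two-view nearest-centroid learner: each round, each view re-selects the currently most confident unlabeled points for the other view, with a growing pace parameter. The data, classifier and pace schedule below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated classes; view 1 = first two features, view 2 = last two.
n = 200
X = np.vstack([rng.normal(-2, 1, (n, 4)), rng.normal(2, 1, (n, 4))])
y = np.array([0] * n + [1] * n)
views = [X[:, :2], X[:, 2:]]

labeled = np.concatenate([np.arange(5), np.arange(2 * n - 5, 2 * n)])  # tiny seed
unlabeled = np.setdiff1d(np.arange(2 * n), labeled)

def fit_centroids(V, idx, lab):
    return np.stack([V[idx[lab == c]].mean(axis=0) for c in (0, 1)])

def margin(V, cent):
    d = np.linalg.norm(V[:, None] - cent[None], axis=2)
    return d[:, 0] - d[:, 1]          # positive -> closer to class-1 centroid

pseudo_idx = [np.array([], int), np.array([], int)]
pseudo_lab = [np.array([], int), np.array([], int)]
for step in range(5):
    for v in (0, 1):
        idx = np.concatenate([labeled, pseudo_idx[v]])
        lab = np.concatenate([y[labeled], pseudo_lab[v]])
        m = margin(views[v], fit_centroids(views[v], idx, lab))
        # 'draw with replacement': re-select the currently most confident
        # unlabeled points each round, with a growing pace parameter k
        k = 20 * (step + 1)
        top = unlabeled[np.argsort(-np.abs(m[unlabeled]))[:k]]
        pseudo_idx[1 - v] = top
        pseudo_lab[1 - v] = (m[top] > 0).astype(int)

cent = fit_centroids(views[0], np.concatenate([labeled, pseudo_idx[0]]),
                     np.concatenate([y[labeled], pseudo_lab[0]]))
acc = float(np.mean((margin(views[0], cent) > 0).astype(int) == y))
```

Because the pseudo-labeled pool is rebuilt every round rather than grown monotonically, an early mistake can be discarded later, which is the failure mode of draw-without-replacement co-training that the paper targets.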
rain Analyzing complex functional brain networks: Fusing statistics and network science to understand the brain By projecteuclid.org Published On :: Mon, 28 Oct 2013 09:06 EDT Sean L. Simpson, F. DuBois Bowman, Paul J. LaurientiSource: Statist. Surv., Volume 7, 1--36.Abstract: Complex functional brain network analyses have exploded over the last decade, gaining traction due to their profound clinical implications. The application of network science (an interdisciplinary offshoot of graph theory) has facilitated these analyses and enabled examining the brain as an integrated system that produces complex behaviors. While the field of statistics has been integral in advancing activation analyses and some connectivity analyses in functional neuroimaging research, it has yet to play a commensurate role in complex network analyses. Fusing novel statistical methods with network-based functional neuroimage analysis will engender powerful analytical tools that will aid in our understanding of normal brain function as well as alterations due to various brain disorders. Here we survey widely used statistical and network science tools for analyzing fMRI network data and discuss the challenges faced in filling some of the remaining methodological gaps. When applied and interpreted correctly, the fusion of network scientific and statistical methods has a chance to revolutionize the understanding of brain function. Full Article
rain Unsupervised Pre-trained Models from Healthy ADLs Improve Parkinson's Disease Classification of Gait Patterns. (arXiv:2005.02589v2 [cs.LG] UPDATED) By arxiv.org Published On :: Application and use of deep learning algorithms for different healthcare applications is gaining interest at a steady pace. However, use of such algorithms can prove to be challenging as they require large amounts of training data that capture different possible variations. This makes it difficult to use them in a clinical setting since in most health applications researchers often have to work with limited data. Less data can cause the deep learning model to over-fit. In this paper, we ask how can we use data from a different environment, different use-case, with widely differing data distributions. We exemplify this use case by using single-sensor accelerometer data from healthy subjects performing activities of daily living - ADLs (source dataset), to extract features relevant to multi-sensor accelerometer gait data (target dataset) for Parkinson's disease classification. We train the pre-trained model using the source dataset and use it as a feature extractor. We show that the features extracted for the target dataset can be used to train an effective classification model. Our pre-trained source model consists of a convolutional autoencoder, and the target classification model is a simple multi-layer perceptron model. We explore two different pre-trained source models, trained using different activity groups, and analyze the influence the choice of pre-trained model has over the task of Parkinson's disease classification. Full Article
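The transfer recipe above (pre-train on plentiful source data, freeze the encoder, extract features for the scarce target task) can be sketched with a linear autoencoder, whose optimal squared-loss solution is PCA, in place of the paper's convolutional autoencoder; the data, dimensions and downstream classifier are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_linear_encoder(X, dim):
    """'Pre-train' a linear autoencoder: with squared loss the optimal encoder
    spans the top principal components of the source data."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:dim]

# Source task: plentiful unlabeled recordings sharing low-dimensional structure.
basis = rng.standard_normal((3, 20))
source = rng.standard_normal((500, 3)) @ basis + 0.1 * rng.standard_normal((500, 20))
mu, enc = fit_linear_encoder(source, dim=3)

# Target task: scarce labeled data from the same sensor space, different labels.
z0 = rng.normal(0, 1, (30, 3))
z1 = rng.normal(0, 1, (30, 3)) + np.array([4.0, 0.0, 0.0])
target = np.vstack([z0, z1]) @ basis + 0.1 * rng.standard_normal((60, 20))
labels = np.array([0] * 30 + [1] * 30)

feats = (target - mu) @ enc.T          # frozen pre-trained encoder as extractor
w = feats[labels == 1].mean(0) - feats[labels == 0].mean(0)
scores = feats @ w
acc = float(np.mean((scores > scores.mean()).astype(int) == labels))
```

The encoder never sees target labels; it only needs the source and target data to share latent structure, which is the assumption the paper exercises with healthy-ADL source data and Parkinson's gait target data.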
rain How many modes can a constrained Gaussian mixture have?. (arXiv:2005.01580v2 [math.ST] UPDATED) By arxiv.org Published On :: We show, by an explicit construction, that a mixture of univariate Gaussians with variance 1 and means in $[-A,A]$ can have $\Omega(A^2)$ modes. This disproves a recent conjecture of Dytso, Yagli, Poor and Shamai [IEEE Trans. Inform. Theory, Apr. 2020], who showed that such a mixture can have at most $O(A^2)$ modes and surmised that the upper bound could be improved to $O(A)$. Our result holds even if an additional variance constraint is imposed on the mixing distribution. Extending the result to higher dimensions, we exhibit a mixture of Gaussians in $\mathbb{R}^d$, with identity covariances and means inside $[-A,A]^d$, that has $\Omega(A^{2d})$ modes. Full Article
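The question can be probed numerically for small cases: count the modes of an equal-weight unit-variance mixture by locating sign changes of the density's derivative on a grid. This sketch only illustrates mode counting — widely spaced means each contribute a mode while tightly packed means merge into one — and is not the paper's $\Omega(A^2)$ construction.

```python
import numpy as np

def mixture_density(x, means):
    """Equal-weight mixture of unit-variance Gaussians centred at `means`."""
    return np.exp(-0.5 * (x[:, None] - means[None, :]) ** 2).sum(axis=1)

def count_modes(means, lo=-20.0, hi=20.0, n=200_001):
    x = np.linspace(lo, hi, n)
    f = mixture_density(x, means)
    d = np.diff(f)
    # a mode is a sign change of the derivative from + to -
    return int(np.sum((d[:-1] > 0) & (d[1:] <= 0)))

separated = np.array([-8.0, -4.0, 0.0, 4.0, 8.0])   # means 4 sigma apart
packed = np.array([-0.5, 0.0, 0.5])                  # means well inside 2 sigma
```

Pushing more and more well-separated means into a fixed interval $[-A, A]$ is exactly what the variance constraint makes nontrivial, which is where the paper's construction does the real work.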
rain Mnemonics Training: Multi-Class Incremental Learning without Forgetting. (arXiv:2002.10211v3 [cs.CV] UPDATED) By arxiv.org Published On :: Multi-Class Incremental Learning (MCIL) aims to learn new concepts by incrementally updating a model trained on previous concepts. However, there is an inherent trade-off to effectively learning new concepts without catastrophic forgetting of previous ones. To alleviate this issue, it has been proposed to keep around a few examples of the previous concepts but the effectiveness of this approach heavily depends on the representativeness of these examples. This paper proposes a novel and automatic framework we call mnemonics, where we parameterize exemplars and make them optimizable in an end-to-end manner. We train the framework through bilevel optimizations, i.e., model-level and exemplar-level. We conduct extensive experiments on three MCIL benchmarks, CIFAR-100, ImageNet-Subset and ImageNet, and show that using mnemonics exemplars can surpass the state-of-the-art by a large margin. Interestingly and quite intriguingly, the mnemonics exemplars tend to be on the boundaries between different classes. Full Article
rain Reducing Communication in Graph Neural Network Training. (arXiv:2005.03300v1 [cs.LG]) By arxiv.org Published On :: Graph Neural Networks (GNNs) are powerful and flexible neural networks that use the naturally sparse connectivity information of the data. GNNs represent this connectivity as sparse matrices, which have lower arithmetic intensity and thus higher communication costs compared to dense matrices, making GNNs harder to scale to high concurrencies than convolutional or fully-connected neural networks. We present a family of parallel algorithms for training GNNs. These algorithms are based on their counterparts in dense and sparse linear algebra, but they had not been previously applied to GNN training. We show that they can asymptotically reduce communication compared to existing parallel GNN training methods. We implement a promising and practical version that is based on 2D sparse-dense matrix multiplication using torch.distributed. Our implementation parallelizes over GPU-equipped clusters. We train GNNs on up to a hundred GPUs on datasets that include a protein network with over a billion edges. Full Article
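The core of the 2D algorithm — each process owning one block of the sparse adjacency matrix, with partial products reduced along process rows — can be simulated serially. The loops below stand in for the GPU processes of the torch.distributed implementation; only the block-sum step would involve communication.

```python
import numpy as np

rng = np.random.default_rng(6)
n, f, p = 8, 4, 2            # nodes, feature dim, p x p process grid

A = (rng.random((n, n)) < 0.3).astype(float)   # sparse adjacency (dense array here)
H = rng.standard_normal((n, f))                # node feature matrix

s = n // p
Z = np.zeros_like(H)
for i in range(p):
    for j in range(p):
        # process (i, j) multiplies its local A-block with the matching H-block;
        # summing over j is the row-wise reduction (the only communication step)
        Z[i * s:(i + 1) * s] += (
            A[i * s:(i + 1) * s, j * s:(j + 1) * s] @ H[j * s:(j + 1) * s]
        )

exact = A @ H   # what a single process with the whole matrices would compute
```

Each process touches only a 1/p² block of A and a 1/p slice of H, which is what lets the asymptotic communication volume drop relative to 1D row partitioning.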
rain An Empirical Study of Incremental Learning in Neural Network with Noisy Training Set. (arXiv:2005.03266v1 [cs.LG]) By arxiv.org Published On :: The notion of incremental learning is to train an ANN algorithm in stages, as and when newer training data arrives. Incremental learning is becoming widespread in recent times with the advent of deep learning. Noise in the training data reduces the accuracy of the algorithm. In this paper, we make an empirical study of the effect of noise in the training phase. We numerically show that the accuracy of the algorithm depends more on the location of the error than on the percentage of error. Using a Perceptron, a Feed-Forward Neural Network and a Radial Basis Function Neural Network, we show that, for the same percentage of error, the accuracy of the algorithm varies significantly with the location of the error. Furthermore, our results show that this dependence of accuracy on error location is independent of the algorithm. However, the slope of the degradation curve decreases with more sophisticated algorithms. Full Article
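A minimal version of such an experiment — two incremental training stages, with the same amount of label noise confined to one stage or the other — might look as follows; the perceptron, data and noise rate are illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(7)

def train_perceptron(w, X, y, epochs=20, lr=0.1):
    """Plain perceptron updates, resumed from weights w (one incremental stage)."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:
                w = w + lr * yi * xi
    return w

# Linearly separable 2-class data, split into two incremental stages.
n = 200
X = np.vstack([rng.normal(-1.5, 1, (n, 2)), rng.normal(1.5, 1, (n, 2))])
X = np.hstack([X, np.ones((2 * n, 1))])               # bias column
y = np.array([-1.0] * n + [1.0] * n)
perm = rng.permutation(2 * n)
X, y = X[perm], y[perm]
stage1, stage2 = slice(0, n), slice(n, 2 * n)

def run(noise_stage):
    y_noisy = y.copy()
    idx = np.arange(2 * n)[noise_stage]
    flip = rng.choice(idx, size=len(idx) // 5, replace=False)  # 20% stage noise
    y_noisy[flip] *= -1
    w = train_perceptron(np.zeros(3), X[stage1], y_noisy[stage1])
    w = train_perceptron(w, X[stage2], y_noisy[stage2])
    return float(np.mean(np.sign(X @ w) == y))   # accuracy on clean labels

acc_noise_early = run(stage1)   # noise confined to the first stage
acc_noise_late = run(stage2)    # same amount of noise in the final stage
```

Comparing the two accuracies over many seeds and noise rates is the location-vs-percentage comparison the abstract describes.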
rain Training and Classification using a Restricted Boltzmann Machine on the D-Wave 2000Q. (arXiv:2005.03247v1 [cs.LG]) By arxiv.org Published On :: Restricted Boltzmann Machine (RBM) is an energy based, undirected graphical model. It is commonly used for unsupervised and supervised machine learning. Typically, RBM is trained using contrastive divergence (CD). However, training with CD is slow and does not estimate exact gradient of log-likelihood cost function. In this work, the model expectation of gradient learning for RBM has been calculated using a quantum annealer (D-Wave 2000Q), which is much faster than Markov chain Monte Carlo (MCMC) used in CD. Training and classification results are compared with CD. The classification accuracy results indicate similar performance of both methods. Image reconstruction as well as log-likelihood calculations are used to compare the performance of quantum and classical algorithms for RBM training. It is shown that the samples obtained from quantum annealer can be used to train a RBM on a 64-bit `bars and stripes' data set with classification performance similar to a RBM trained with CD. Though training based on CD showed improved learning performance, training using a quantum annealer eliminates computationally expensive MCMC steps of CD. Full Article
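The classical CD-1 baseline that the quantum-annealer training is compared against can be sketched in NumPy on a tiny bars-style data set (the actual study uses a 64-bit bars-and-stripes set and D-Wave samples in place of the Gibbs step):

```python
import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid, lr = 6, 8, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b, c = np.zeros(n_vis), np.zeros(n_hid)   # visible and hidden biases

# Tiny 'bars' data set: each pattern lights up one half of the visible units.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

for epoch in range(2000):
    v0 = data
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample hidden units
    pv1 = sigmoid(h0 @ W.T + b)                        # one Gibbs step = CD-1
    ph1 = sigmoid(pv1 @ W + c)
    # contrastive divergence gradient: data term minus reconstruction term
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(data)
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

recon = sigmoid(sigmoid(data @ W + c) @ W.T + b)   # mean-field reconstruction
err = float(np.abs(recon - data).mean())
```

Replacing the single Gibbs step with samples drawn from an annealer is the substitution the paper evaluates: the model-expectation term comes from hardware samples instead of MCMC.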
rain Tumor microenvironments in organs : from the brain to the skin. By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: OnlineISBN: 9783030362140 (electronic bk.) Full Article
rain Mental Conditioning to Perform Common Operations in General Surgery Training By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: OnlineISBN: 9783319911649 978-3-319-91164-9 Full Article
rain Quantile regression under memory constraint By projecteuclid.org Published On :: Wed, 30 Oct 2019 22:03 EDT Xi Chen, Weidong Liu, Yichen Zhang. Source: The Annals of Statistics, Volume 47, Number 6, 3244--3273.Abstract: This paper studies the inference problem in quantile regression (QR) for a large sample size $n$ but under a limited memory constraint, where the memory can only store a small batch of data of size $m$. A natural method is the naive divide-and-conquer approach, which splits data into batches of size $m$, computes the local QR estimator for each batch and then aggregates the estimators via averaging. However, this method only works when $n=o(m^{2})$ and is computationally expensive. This paper proposes a computationally efficient method, which only requires an initial QR estimator on a small batch of data and then successively refines the estimator via multiple rounds of aggregations. Theoretically, as long as $n$ grows polynomially in $m$, we establish the asymptotic normality for the obtained estimator and show that our estimator with only a few rounds of aggregations achieves the same efficiency as the QR estimator computed on all the data. Moreover, our result allows the case that the dimensionality $p$ goes to infinity. The proposed method can also be applied to address the QR problem under distributed computing environment (e.g., in a large-scale sensor network) or for real-time streaming data. Full Article
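The naive divide-and-conquer baseline the paper improves on is easy to sketch for a single quantile: compute the quantile within each memory-sized batch and average the batch estimates. The refinement rounds of the proposed method are not shown; sizes and the target quantile below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
tau = 0.5                      # quantile level (the median)
n, m = 100_000, 1_000          # full sample size, memory-batch size

data = rng.standard_normal(n) + 3.0          # true tau-quantile is 3.0

# Naive divide-and-conquer: quantile per batch, then average the estimates.
batches = data.reshape(n // m, m)
dc_est = float(np.quantile(batches, tau, axis=1).mean())

full_est = float(np.quantile(data, tau))     # oracle: needs all data in memory
```

The averaging step is why the naive approach needs n = o(m²): batch-level bias of order 1/m does not average away, which is the gap the paper's multi-round aggregation closes.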
rain Estimating causal effects in studies of human brain function: New models, methods and estimands By projecteuclid.org Published On :: Wed, 15 Apr 2020 22:05 EDT Michael E. Sobel, Martin A. Lindquist. Source: The Annals of Applied Statistics, Volume 14, Number 1, 452--472.Abstract: Neuroscientists often use functional magnetic resonance imaging (fMRI) to infer effects of treatments on neural activity in brain regions. In a typical fMRI experiment, each subject is observed at several hundred time points. At each point, the blood oxygenation level dependent (BOLD) response is measured at 100,000 or more locations (voxels). Typically, these responses are modeled treating each voxel separately, and no rationale for interpreting associations as effects is given. Building on Sobel and Lindquist ( J. Amer. Statist. Assoc. 109 (2014) 967–976), who used potential outcomes to define unit and average effects at each voxel and time point, we define and estimate both “point” and “cumulated” effects for brain regions. Second, we construct a multisubject, multivoxel, multirun whole brain causal model with explicit parameters for regions. We justify estimation using BOLD responses averaged over voxels within regions, making feasible estimation for all regions simultaneously, thereby also facilitating inferences about association between effects in different regions. We apply the model to a study of pain, finding effects in standard pain regions. We also observe more cerebellar activity than observed in previous studies using prevailing methods. Full Article
rain Network classification with applications to brain connectomics By projecteuclid.org Published On :: Wed, 16 Oct 2019 22:03 EDT Jesús D. Arroyo Relión, Daniel Kessler, Elizaveta Levina, Stephan F. Taylor. Source: The Annals of Applied Statistics, Volume 13, Number 3, 1648--1677.Abstract: While statistical analysis of a single network has received a lot of attention in recent years, with a focus on social networks, analysis of a sample of networks presents its own challenges which require a different set of analytic tools. Here we study the problem of classification of networks with labeled nodes, motivated by applications in neuroimaging. Brain networks are constructed from imaging data to represent functional connectivity between regions of the brain, and previous work has shown the potential of such networks to distinguish between various brain disorders, giving rise to a network classification problem. Existing approaches tend to either treat all edge weights as a long vector, ignoring the network structure, or focus on graph topology as represented by summary measures while ignoring the edge weights. Our goal is to design a classification method that uses both the individual edge information and the network structure of the data in a computationally efficient way, and that can produce a parsimonious and interpretable representation of differences in brain connectivity patterns between classes. We propose a graph classification method that uses edge weights as predictors but incorporates the network nature of the data via penalties that promote sparsity in the number of nodes, in addition to the usual sparsity penalties that encourage selection of edges. We implement the method via efficient convex optimization and provide a detailed analysis of data from two fMRI studies of schizophrenia. Full Article
rain Distributional regression forests for probabilistic precipitation forecasting in complex terrain By projecteuclid.org Published On :: Wed, 16 Oct 2019 22:03 EDT Lisa Schlosser, Torsten Hothorn, Reto Stauffer, Achim Zeileis. Source: The Annals of Applied Statistics, Volume 13, Number 3, 1564--1589.Abstract: To obtain a probabilistic model for a dependent variable based on some set of explanatory variables, a distributional approach is often adopted where the parameters of the distribution are linked to regressors. In many classical models this only captures the location of the distribution but over the last decade there has been increasing interest in distributional regression approaches modeling all parameters including location, scale and shape. Notably, so-called nonhomogeneous Gaussian regression (NGR) models both mean and variance of a Gaussian response and is particularly popular in weather forecasting. Moreover, generalized additive models for location, scale and shape (GAMLSS) provide a framework where each distribution parameter is modeled separately capturing smooth linear or nonlinear effects. However, when variable selection is required and/or there are nonsmooth dependencies or interactions (especially unknown or of high-order), it is challenging to establish a good GAMLSS. A natural alternative in these situations would be the application of regression trees or random forests but, so far, no general distributional framework is available for these. Therefore, a framework for distributional regression trees and forests is proposed that blends regression trees and random forests with classical distributions from the GAMLSS framework as well as their censored or truncated counterparts. To illustrate these novel approaches in practice, they are employed to obtain probabilistic precipitation forecasts at numerous sites in a mountainous region (Tyrol, Austria) based on a large number of numerical weather prediction quantities. It is shown that the novel distributional regression forests automatically select variables and interactions, performing on par or often even better than GAMLSS specified either through prior meteorological knowledge or a computationally more demanding boosting approach. Full Article
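The leaf idea behind distributional trees — storing a full distribution per leaf rather than just a mean — can be sketched with a one-split 'stump' that picks its threshold by Gaussian log-likelihood. This is an illustrative toy with invented data, not the paper's forest implementation.

```python
import numpy as np

rng = np.random.default_rng(8)

# Heteroscedastic data: both the mean and the spread of y depend on x.
n = 2000
x = rng.uniform(0, 1, n)
y = np.where(x < 0.5, rng.normal(0, 0.2, n), rng.normal(3, 1.0, n))

def fit_distributional_stump(x, y, n_splits=50):
    """One-split 'distributional tree': choose the threshold maximizing the
    Gaussian log-likelihood of the two leaves; store (mu, sigma) per leaf."""
    best = None
    for t in np.linspace(0.05, 0.95, n_splits):
        left, right = y[x < t], y[x >= t]
        if len(left) < 10 or len(right) < 10:
            continue
        # per-leaf Gaussian log-likelihood up to constants
        ll = sum(-0.5 * len(s) * np.log(s.var() + 1e-12) for s in (left, right))
        if best is None or ll > best[0]:
            best = (ll, t, (left.mean(), left.std()), (right.mean(), right.std()))
    return best[1:]

t, leaf_l, leaf_r = fit_distributional_stump(x, y)
```

A mean-only regression stump would find the same threshold here, but its leaves could not express that one regime is five times noisier than the other — the scale information a probabilistic forecast needs.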
rain Train kills 15 migrant workers walking home in India By news.yahoo.com Published On :: Fri, 08 May 2020 00:56:54 -0400 A train in India on Friday plowed through a group of migrant workers who fell asleep on the tracks after walking back home from a coronavirus lockdown, killing 15, the Railways Ministry said. Early this week the government started running trains to carry stranded workers to their home states. Full Article