train

Call for Racial Equity Training Leads to Threats to Superintendent, Resistance from Community

Controversy over an initiative aimed at reducing inequities in Lee's Summit, Mo., schools led the police department to provide security protection for the district's first African-American superintendent. Now the school board has reversed course.




train

Addict aftercare : recovery training and self-help / Fred Zackon, William E. McAuliffe, James M.N. Ch'ien.




train

A Low Complexity Algorithm with O(√T) Regret and O(1) Constraint Violations for Online Convex Optimization with Long Term Constraints

This paper considers online convex optimization over a complicated constraint set, which typically consists of multiple functional constraints and a set constraint. The conventional online projection algorithm (Zinkevich, 2003) can be difficult to implement due to the potentially high computational complexity of the projection operation. In this paper, we relax the functional constraints by allowing them to be violated at each round but still requiring them to be satisfied in the long term. This type of relaxed online convex optimization (with long term constraints) was first considered in Mahdavi et al. (2012). That prior work proposes an algorithm to achieve $O(\sqrt{T})$ regret and $O(T^{3/4})$ constraint violations for general problems and another algorithm to achieve an $O(T^{2/3})$ bound for both regret and constraint violations when the constraint set can be described by a finite number of linear constraints. A recent extension in Jenatton et al. (2016) can achieve $O(T^{\max\{\theta,1-\theta\}})$ regret and $O(T^{1-\theta/2})$ constraint violations where $\theta\in(0,1)$. The current paper proposes a new simple algorithm that yields improved performance in comparison to prior works. The new algorithm achieves an $O(\sqrt{T})$ regret bound with $O(1)$ constraint violations.
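The long-term-constraint relaxation can be illustrated with a toy virtual-queue scheme, a common device in this literature. The 1-D problem, step sizes, and the exact update below are illustrative assumptions, not the paper's algorithm:

```python
# Toy online problem: every round's loss is f_t(x) = (x - 1)^2, but the
# long-term constraint g(x) = x - 0.5 <= 0 must hold on average.  A virtual
# queue Q accumulates violations and acts as an adaptive penalty weight.
T, eta = 1000, 0.05
x, Q = 0.0, 0.0                # decision variable and virtual queue
total_violation = 0.0

for t in range(T):
    grad = 2.0 * (x - 1.0) + Q                 # d/dx [f_t(x) + Q * g(x)], g'(x) = 1
    x = min(max(x - eta * grad, -1.0), 1.0)    # gradient step plus box projection
    total_violation += max(0.0, x - 0.5)       # instantaneous violation g(x)_+
    Q = max(0.0, Q + (x - 0.5))                # queue grows on violation, drains on slack

print(round(x, 3), round(total_violation, 3))
```

In this deterministic toy setting, x settles at the constrained optimum 0.5 while the cumulative violation stays bounded as T grows, the kind of O(1) behavior the abstract refers to.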




train

A Unified Framework for Structured Graph Learning via Spectral Constraints

Graph learning from data is a canonical problem that has received substantial attention in the literature. Learning a structured graph is essential for interpretability and identification of the relationships among data. In general, learning a graph with a specific structure is an NP-hard combinatorial problem, and thus designing a general tractable algorithm is challenging. Some useful structured graphs include connected, sparse, multi-component, bipartite, and regular graphs. In this paper, we introduce a unified framework for structured graph learning that combines the Gaussian graphical model and spectral graph theory. We propose to convert combinatorial structural constraints into spectral constraints on graph matrices and develop an optimization framework based on block majorization-minimization to solve the structured graph learning problem. The proposed algorithms are provably convergent and practically amenable for a number of graph-based applications such as data clustering. Extensive numerical experiments with both synthetic and real data sets illustrate the effectiveness of the proposed algorithms. An open source R package containing the code for all the experiments is available at https://CRAN.R-project.org/package=spectralGraphTopology.
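The key translation the abstract describes can be seen on a classic example: the combinatorial constraint "the graph has k connected components" becomes the spectral constraint "the Laplacian has a zero eigenvalue of multiplicity k". A minimal pure-Python check of this principle (not the spectralGraphTopology package's API):

```python
# Two components: {0, 1, 2} (a path) and {3, 4} (an edge).
edges = [(0, 1), (1, 2), (3, 4)]
n = 5

# Laplacian L = D - A.
L = [[0.0] * n for _ in range(n)]
for i, j in edges:
    L[i][i] += 1.0
    L[j][j] += 1.0
    L[i][j] -= 1.0
    L[j][i] -= 1.0

def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(n)) for r in range(n)]

# The indicator vector of each component lies in the null space of L,
# so the zero eigenvalue has multiplicity (at least) 2.
for component in ({0, 1, 2}, {3, 4}):
    v = [1.0 if i in component else 0.0 for i in range(n)]
    assert all(abs(x) < 1e-12 for x in matvec(L, v))

print("each component indicator is a null vector of L")
```

Constraining eigenvalue multiplicities of the learned graph matrix is thus a tractable stand-in for the NP-hard combinatorial structure.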




train

Tensor Train Decomposition on TensorFlow (T3F)

Tensor Train decomposition is used across many branches of machine learning. We present T3F—a library for Tensor Train decomposition based on TensorFlow. T3F supports GPU execution, batch processing, automatic differentiation, and versatile functionality for the Riemannian optimization framework, which takes into account the underlying manifold structure to construct efficient optimization methods. The library makes it easier to implement machine learning papers that rely on the Tensor Train decomposition. T3F includes documentation, examples and 94% test coverage.
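The format T3F implements can be illustrated without the library itself. In the Tensor Train format, a d-way tensor T[i1,...,id] is stored as 3-way cores G_k of shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1, and each entry is a product of core slices. A pure-Python sketch of the format (illustrative sizes, not T3F's API):

```python
import random

random.seed(0)
n, r = 4, 2   # mode size and TT-rank (illustrative values)

def rand_core(r_left, n, r_right):
    return [[[random.gauss(0, 1) for _ in range(r_right)]
             for _ in range(n)] for _ in range(r_left)]

# Cores for a 3-way tensor of shape (n, n, n) with TT-ranks (1, r, r, 1).
cores = [rand_core(1, n, r), rand_core(r, n, r), rand_core(r, n, 1)]

def tt_entry(cores, idx):
    # Multiply the 1 x r_1, r_1 x r_2, ... slice matrices left to right.
    row = [1.0]
    for core, i in zip(cores, idx):
        slice_ = [core[a][i] for a in range(len(core))]   # r_left x r_right
        row = [sum(row[a] * slice_[a][b] for a in range(len(row)))
               for b in range(len(slice_[0]))]
    return row[0]

tt_params = sum(len(c) * len(c[0]) * len(c[0][0]) for c in cores)
full_params = n ** 3
print(tt_params, "TT parameters vs", full_params, "dense entries")
```

Even at this tiny size the cores hold fewer parameters than the dense tensor; for large n and d the gap is exponential, which is what makes the format attractive in machine learning.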




train

Self-paced Multi-view Co-training

Co-training is a well-known semi-supervised learning approach which trains classifiers on two or more different views and exchanges pseudo labels of unlabeled instances in an iterative way. During the co-training process, pseudo labels of unlabeled instances are very likely to be false, especially in the initial training rounds, yet the standard co-training algorithm adopts a 'draw without replacement' strategy and never removes these wrongly labeled instances from later training stages. Besides, most traditional co-training approaches are designed for two-view cases, and their extensions to multi-view scenarios are not intuitive. These issues not only degrade performance and narrow the range of applications but also weaken the underlying theory. Moreover, there is no optimization model that explains what objective a co-training process actually optimizes. To address these issues, in this study we design a unified self-paced multi-view co-training (SPamCo) framework which draws unlabeled instances with replacement. Two specified co-regularization terms are formulated to develop different strategies for selecting pseudo-labeled instances during training. Both forms share the same optimization strategy, which is consistent with the iterative process in co-training and can be naturally extended to multi-view scenarios. A distributed optimization strategy is also introduced to train the classifier of each view in parallel and further improve the efficiency of the algorithm. Furthermore, the SPamCo algorithm is proved to be PAC learnable, supporting its theoretical soundness. Experiments conducted on synthetic, text categorization, person re-identification, image recognition, and object detection data sets substantiate the superiority of the proposed method.




train

Unsupervised Pre-trained Models from Healthy ADLs Improve Parkinson's Disease Classification of Gait Patterns. (arXiv:2005.02589v2 [cs.LG] UPDATED)

The application of deep learning algorithms to different healthcare problems is gaining interest at a steady pace. However, using such algorithms can prove challenging, as they require large amounts of training data that capture all the possible variations. This makes them difficult to use in a clinical setting, where researchers often have to work with limited data. Limited data can cause a deep learning model to over-fit. In this paper, we ask how we can use data from a different environment and a different use case, with a widely differing data distribution. We exemplify this by using single-sensor accelerometer data from healthy subjects performing activities of daily living, or ADLs (the source dataset), to extract features relevant to multi-sensor accelerometer gait data (the target dataset) for Parkinson's disease classification. We pre-train a model on the source dataset and use it as a feature extractor. We show that the features extracted for the target dataset can be used to train an effective classification model. Our pre-trained source model consists of a convolutional autoencoder, and the target classification model is a simple multi-layer perceptron. We explore two different pre-trained source models, trained using different activity groups, and analyze the influence the choice of pre-trained model has on the task of Parkinson's disease classification.




train

How many modes can a constrained Gaussian mixture have?. (arXiv:2005.01580v2 [math.ST] UPDATED)

We show, by an explicit construction, that a mixture of univariate Gaussians with variance 1 and means in $[-A,A]$ can have $\Omega(A^2)$ modes. This disproves a recent conjecture of Dytso, Yagli, Poor and Shamai [IEEE Trans. Inform. Theory, Apr. 2020], who showed that such a mixture can have at most $O(A^2)$ modes and surmised that the upper bound could be improved to $O(A)$. Our result holds even if an additional variance constraint is imposed on the mixing distribution. Extending the result to higher dimensions, we exhibit a mixture of Gaussians in $\mathbb{R}^d$, with identity covariances and means inside $[-A,A]^d$, that has $\Omega(A^{2d})$ modes.
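The baseline intuition behind such constructions is easy to verify numerically: well-separated unit-variance components each contribute one local maximum. A grid check on a small illustrative example (the means and grid resolution are arbitrary choices, not the paper's construction):

```python
import math

# Three unit-variance Gaussians with means inside [-A, A] for A = 10.
means = [-10.0, 0.0, 10.0]

def density(x):
    # Unnormalized mixture density; normalization does not affect mode count.
    return sum(math.exp(-0.5 * (x - m) ** 2) for m in means)

xs = [-15 + 0.01 * i for i in range(3001)]   # grid on [-15, 15]
ys = [density(x) for x in xs]
# Count strict local maxima of the density along the grid.
modes = sum(1 for i in range(1, len(ys) - 1) if ys[i - 1] < ys[i] > ys[i + 1])
print("modes found:", modes)   # prints: modes found: 3
```

With means 10 standard deviations apart, each component dominates near its own mean, so the mixture has exactly one mode per component; the paper's point is that far more modes than components spread linearly would suggest can be packed into $[-A,A]$.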




train

Mnemonics Training: Multi-Class Incremental Learning without Forgetting. (arXiv:2002.10211v3 [cs.CV] UPDATED)

Multi-Class Incremental Learning (MCIL) aims to learn new concepts by incrementally updating a model trained on previous concepts. However, there is an inherent trade-off between effectively learning new concepts and avoiding catastrophic forgetting of previous ones. To alleviate this issue, it has been proposed to keep around a few examples of the previous concepts, but the effectiveness of this approach heavily depends on the representativeness of these examples. This paper proposes a novel and automatic framework we call mnemonics, where we parameterize exemplars and make them optimizable in an end-to-end manner. We train the framework through bilevel optimizations, i.e., model-level and exemplar-level. We conduct extensive experiments on three MCIL benchmarks, CIFAR-100, ImageNet-Subset and ImageNet, and show that using mnemonics exemplars can surpass the state-of-the-art by a large margin. Intriguingly, the mnemonics exemplars tend to lie on the boundaries between different classes.




train

Reducing Communication in Graph Neural Network Training. (arXiv:2005.03300v1 [cs.LG])

Graph Neural Networks (GNNs) are powerful and flexible neural networks that use the naturally sparse connectivity information of the data. GNNs represent this connectivity as sparse matrices, which have lower arithmetic intensity and thus higher communication costs compared to dense matrices, making GNNs harder to scale to high concurrencies than convolutional or fully-connected neural networks.

We present a family of parallel algorithms for training GNNs. These algorithms are based on their counterparts in dense and sparse linear algebra, but they had not been previously applied to GNN training. We show that they can asymptotically reduce communication compared to existing parallel GNN training methods. We implement a promising and practical version that is based on 2D sparse-dense matrix multiplication using torch.distributed. Our implementation parallelizes over GPU-equipped clusters. We train GNNs on up to a hundred GPUs on datasets that include a protein network with over a billion edges.
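The 2D partitioning the implementation relies on can be sketched serially: the (sparse) adjacency matrix A and the dense feature matrix H are cut into a grid of blocks, and block (i, j) of the product accumulates A-block (i, k) times H-block (k, j) over k. In the distributed setting each block product lives on one processor; the sizes below are illustrative:

```python
import random

random.seed(0)
n, b = 4, 2   # matrix size and block size (illustrative)

A = [[random.choice([0.0, 0.0, 1.0]) for _ in range(n)] for _ in range(n)]
H = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def block(M, bi, bj):
    return [row[bj * b:(bj + 1) * b] for row in M[bi * b:(bi + 1) * b]]

# C[i][j] = sum_k A_block[i][k] @ H_block[k][j], accumulated blockwise.
g = n // b
C = [[0.0] * n for _ in range(n)]
for i in range(g):
    for j in range(g):
        for k in range(g):
            P = matmul(block(A, i, k), block(H, k, j))
            for r in range(b):
                for c in range(b):
                    C[i * b + r][j * b + c] += P[r][c]

full = matmul(A, H)
assert all(abs(C[i][j] - full[i][j]) < 1e-12 for i in range(n) for j in range(n))
print("blockwise result matches dense matmul")
```

The communication advantage comes from each processor exchanging only the block rows and columns it needs rather than replicating whole matrices, which 1D partitionings effectively do.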




train

An Empirical Study of Incremental Learning in Neural Network with Noisy Training Set. (arXiv:2005.03266v1 [cs.LG])

The notion of incremental learning is to train an ANN in stages, as and when new training data arrives. Incremental learning is becoming widespread with the advent of deep learning. Noise in the training data reduces the accuracy of the algorithm. In this paper, we make an empirical study of the effect of noise in the training phase. We numerically show that the accuracy of the algorithm depends more on the location of the error than on the percentage of error. Using a Perceptron, a Feed-Forward Neural Network, and a Radial Basis Function Neural Network, we show that for the same percentage of error, the accuracy of the algorithm varies significantly with the location of the error. Furthermore, our results show that this dependence on the location of the error is independent of the algorithm. However, the slope of the degradation curve decreases with more sophisticated algorithms.




train

Training and Classification using a Restricted Boltzmann Machine on the D-Wave 2000Q. (arXiv:2005.03247v1 [cs.LG])

A Restricted Boltzmann Machine (RBM) is an energy-based, undirected graphical model. It is commonly used for unsupervised and supervised machine learning. Typically, an RBM is trained using contrastive divergence (CD). However, training with CD is slow and does not estimate the exact gradient of the log-likelihood cost function. In this work, the model expectation in the RBM gradient has been calculated using a quantum annealer (D-Wave 2000Q), which is much faster than the Markov chain Monte Carlo (MCMC) sampling used in CD. Training and classification results are compared with CD. The classification accuracy results indicate similar performance of both methods. Image reconstruction as well as log-likelihood calculations are used to compare the performance of the quantum and classical algorithms for RBM training. It is shown that the samples obtained from the quantum annealer can be used to train an RBM on a 64-bit `bars and stripes' data set with classification performance similar to an RBM trained with CD. Though training based on CD showed improved learning performance, training using a quantum annealer eliminates the computationally expensive MCMC steps of CD.
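For reference, the classical CD-1 baseline looks roughly like the sketch below (a tiny binary RBM with illustrative sizes and patterns; the quantum-annealer approach replaces the one-step Gibbs reconstruction used here with annealer samples when estimating the model expectation):

```python
import math, random

random.seed(0)
nv, nh, lr = 4, 2, 0.1
W = [[random.gauss(0, 0.1) for _ in range(nh)] for _ in range(nv)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_hidden(v):
    p = [sigmoid(sum(v[i] * W[i][j] for i in range(nv))) for j in range(nh)]
    return [1.0 if random.random() < pj else 0.0 for pj in p], p

def sample_visible(h):
    p = [sigmoid(sum(W[i][j] * h[j] for j in range(nh))) for i in range(nv)]
    return [1.0 if random.random() < pi else 0.0 for pi in p]

data = [[1, 1, 0, 0], [0, 0, 1, 1]]   # toy "bars"-style patterns
for _ in range(200):
    v0 = random.choice(data)
    h0, p0 = sample_hidden(v0)
    v1 = sample_visible(h0)           # one Gibbs step: the "CD-1" chain
    _, p1 = sample_hidden(v1)
    for i in range(nv):
        for j in range(nh):
            # Positive phase (data) minus negative phase (reconstruction).
            W[i][j] += lr * (v0[i] * p0[j] - v1[i] * p1[j])
```

The negative-phase term is exactly the model expectation that is expensive to estimate well with MCMC, which is why sampling it from hardware is attractive.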




train

Mental Conditioning to Perform Common Operations in General Surgery Training

ISBN 978-3-319-91164-9




train

Quantile regression under memory constraint

Xi Chen, Weidong Liu, Yichen Zhang.

Source: The Annals of Statistics, Volume 47, Number 6, 3244--3273.

Abstract:
This paper studies the inference problem in quantile regression (QR) for a large sample size $n$ but under a limited memory constraint, where the memory can only store a small batch of data of size $m$. A natural method is the naive divide-and-conquer approach, which splits the data into batches of size $m$, computes the local QR estimator for each batch, and then aggregates the estimators via averaging. However, this method only works when $n=o(m^{2})$ and is computationally expensive. This paper proposes a computationally efficient method, which only requires an initial QR estimator on a small batch of data and then successively refines the estimator via multiple rounds of aggregations. Theoretically, as long as $n$ grows polynomially in $m$, we establish the asymptotic normality of the obtained estimator and show that our estimator with only a few rounds of aggregations achieves the same efficiency as the QR estimator computed on all the data. Moreover, our result allows the dimensionality $p$ to go to infinity. The proposed method can also be applied to the QR problem in a distributed computing environment (e.g., a large-scale sensor network) or for real-time streaming data.
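The naive divide-and-conquer baseline from the abstract can be sketched directly, here for an intercept-only model (the sample quantile). The sizes and distribution are illustrative assumptions, and the paper's successive refinement rounds are omitted:

```python
import random

random.seed(0)
n, m, tau = 10000, 500, 0.5          # sample size, batch size, quantile level
data = [random.gauss(0, 1) for _ in range(n)]

def sample_quantile(xs, tau):
    # Order statistic at rank floor(tau * len(xs)), clipped to the range.
    xs = sorted(xs)
    return xs[min(int(tau * len(xs)), len(xs) - 1)]

# Split into batches of size m, estimate per batch, average the estimates.
batches = [data[i:i + m] for i in range(0, n, m)]
dc_estimate = sum(sample_quantile(b, tau) for b in batches) / len(batches)
full_estimate = sample_quantile(data, tau)
print(round(dc_estimate, 3), round(full_estimate, 3))
```

Each batch fits in memory of size m, and averaging recovers most of the full-sample efficiency; the paper's contribution is making this work when n grows much faster than $m^2$.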




train

Train kills 15 migrant workers walking home in India

A train in India on Friday plowed through a group of migrant workers who fell asleep on the tracks after walking back home from a coronavirus lockdown, killing 15, the Railways Ministry said. Early this week the government started running trains to carry stranded workers to their home states.





train

Constrained Bayesian Optimization with Noisy Experiments

Benjamin Letham, Brian Karrer, Guilherme Ottoni, Eytan Bakshy.

Source: Bayesian Analysis, Volume 14, Number 2, 495--519.

Abstract:
Randomized experiments are the gold standard for evaluating the effects of changes to real-world systems. Data in these tests may be difficult to collect and outcomes may have high variance, resulting in potentially large measurement error. Bayesian optimization is a promising technique for efficiently optimizing multiple continuous parameters, but existing approaches degrade in performance when the noise level is high, limiting their applicability to many randomized experiments. We derive an expression for expected improvement under greedy batch optimization with noisy observations and noisy constraints, and develop a quasi-Monte Carlo approximation that allows it to be efficiently optimized. Simulations with synthetic functions show that our method outperforms existing approaches on noisy, constrained problems. We further demonstrate the effectiveness of the method with two real-world experiments conducted at Facebook: optimizing a ranking system, and optimizing server compiler flags.
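The core difficulty is that with noisy observations the incumbent "best value" is itself uncertain, so expected improvement must integrate over both posteriors. A plain Monte Carlo sketch of that idea (Gaussian posteriors and all numbers below are illustrative assumptions; the paper develops a quasi-Monte Carlo approximation):

```python
import random

random.seed(0)
mu_best, sd_best = 0.0, 0.2   # posterior over the incumbent's true value
mu_cand, sd_cand = -0.1, 0.5  # posterior at the candidate point (minimizing)

N = 200000
ei = 0.0
for _ in range(N):
    f_best = random.gauss(mu_best, sd_best)
    f_cand = random.gauss(mu_cand, sd_cand)
    ei += max(0.0, f_best - f_cand)   # improvement when the candidate is lower
ei /= N
print("MC expected improvement ~", round(ei, 3))
```

Swapping the pseudo-random draws for a low-discrepancy sequence is what turns this into the quasi-Monte Carlo estimator, which converges faster and is smooth enough to optimize over candidate locations.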




train

Nasal Respiration Entrains Human Limbic Oscillations and Modulates Cognitive Function

Christina Zelano
Dec 7, 2016; 36:12448-12467
Systems/Circuits




train

Coding of Navigational Distance and Functional Constraint of Boundaries in the Human Scene-Selective Cortex

For visually guided navigation, the use of environmental cues is essential. In particular, detecting local boundaries that impose limits on locomotion and estimating their location is crucial. In a series of three fMRI experiments, we investigated whether there is a neural coding of navigational distance in the human visual cortex (both female and male participants). We used virtual reality software to systematically manipulate the distance from the viewer's perspective to different types of boundary. Using multivoxel pattern classification with a linear support vector machine, we found that the occipital place area (OPA) is sensitive to navigational distance even when it is restricted by a transparent glass wall. Further, the OPA was sensitive only to non-crossable boundaries, suggesting the importance of the functional constraint a boundary imposes. Together, we propose the OPA as a perceptual source of the external environmental features relevant for navigation.

SIGNIFICANCE STATEMENT One of the major goals in cognitive neuroscience has been to understand the nature of visual scene representation in the human ventral visual cortex. An aspect of scene perception that has been overlooked despite its ecological importance is the analysis of space for navigation. One of the critical computations necessary for navigation is the coding of distance to environmental boundaries that impose limits on a navigator's movements. This paper reports the first empirical evidence for coding of navigational distance in the human visual cortex and its striking sensitivity to the functional constraint of environmental boundaries. This finding links the paper to previous neurological and behavioral work that emphasized the distance to boundaries as a crucial geometric property for the reorientation behavior of children and other animal species.




train

This U.S. Sub Launched an Attack on a Japanese Train

The USS Barb had an unusual target in its sights in 1945 - one that wasn't even in the water. It was a Japanese supply train on the island of Karafuto




train

Dogs Are Being Trained to Sniff Out COVID-19

Researchers are attempting to teach eight dogs to detect COVID-19, which could help quickly screen large numbers of people in public places




train

Forgotten Tunnel Found Beneath Danish Train Station

Wood used to build the secret passageway came from a tree felled in 1874, according to a new analysis




train

Kenora OPP identify 18-year-old struck, killed by train Wednesday

Ontario Provincial Police (OPP) have identified the person struck and killed by a train in Kenora on Wednesday as 18-year-old Tyrease Payash, of Kenora.




train

Cyber Defense Monitoring and Forensics Training

The Computer Emergency Response Team of Mauritius (CERT-MU), in collaboration with the Command and Control Centre of Kenya, organised a 3-day training programme on Cyber Defense Monitoring and Forensics at Voilà Hotel, Bagatelle, from 27th February to 1st March 2018. The training course provided an introduction to Network Security Monitoring (NSM), Security Information and Event Management (SIEM), Malware Analysis, and Digital Forensics. A major part of the course consisted of hands-on case studies and analysis exercises using real-world data. The main focus of the training programme was intensive hands-on sessions addressing key challenges faced by local organisations across all sectors and industries. A wide range of commercial and open-source tools were used to equip cyber defenders with the necessary skills to anticipate, detect, respond to and contain adversaries. The training programme was attended by 23 participants from the public and private sectors.




train

Comment on Squeekville, model train amusement park, on display at Children’s Museum Gala – Oak Ridger by modelsteamtrain

Squeekville, model train amusement park, on display at Children's ... http://bit.ly/9x4oFS




train

Train kills 14 labourers laid off in coronavirus lockdown in India

A train killed 14 migrant workers who had fallen asleep on the track in India on Friday while they were heading back to their home village after losing their jobs amid the coronavirus lockdown, police said.




train

Fin24.com | UIF will be under 'very serious' strain, warns labour minister

Minister of Employment and Labour Thulas Nxesi said on Thursday afternoon that the Unemployment Insurance Fund was going to be under "very serious strain" and that he foresaw a period where there would be heavy dependence on the state.




train

Training Sowers

Lima, Peru- In June four members from OM Peru led a day-long training seminar in Lima. The seminars were attended by over 50 believers from five different churches.




train

Trained and equipped in Ireland

Through training at OM, Rebecca became more confident sharing Jesus in her home country.




train

Extreme Leadership Training Creates Unity

Extreme Leadership Training camps create unity in Ukraine.




train

OM Panama re-starts training school

OM's International Intensive School of Missions in Panama is getting ready to start in January 2012 to equip Latinos for missions.




train

Lockdown guide. Pub crawl across Scotland through Still Game, Local Hero, Trainspotting

LOCKDOWN may start easing soon, but it seems likely to be a long time yet before any of us find ourselves in an actual physical pub. It’s not of course the booze we’re missing – we can get plenty of that – but the company, the conviviality, the atmosphere, the feeling that, in the late hours, almost anything might kick off. So, for those who cry inside every time they walk past their closed-down local, or wake up having dreamed of standing with a pint at the bar, here are a few ways you




train

For Educators Vying for State Office, Teachers' Union Offers 'Soup to Nuts' Campaign Training

In the aftermath of this spring's teacher protests, more educators are running for state office—and the National Education Association is seizing on the political moment.




train

Trump's Budget Eliminates Funding for Teacher Training, Class-Size Reductions

The proposed budget from the Trump administration eliminates the Title II grant program, which pays for professional development and class-size reduction efforts.




train

Facebook, National Urban League to Partner on Digital-Skills Training

The social media giant, which is facing withering scrutiny over its data-collection practices, has announced a partnership with the National Urban League.




train

UEFA Training Ground relaunched

The UEFA Training Ground has been relaunched with improved usability, sections for coaches and women's football and TactX and You're the Boss available in nine languages.




train

Does Fellowship Pay: What Is the Long-term Financial Impact of Subspecialty Training in Pediatrics?

No studies have focused on the financial impact of fellowship training in pediatrics.

The results from this study can be helpful to current pediatric residents as they contemplate their career options. In addition, the study may be valuable to policy makers who evaluate health care reform and pediatric workforce-allocation issues. (Read the full article)




train

Pediatric Training and Career Intentions, 2003-2009

In the previous decade, graduating pediatric residents generally experienced success in finding desired jobs, but they also experienced increased debt and flat starting salaries.

This study highlights trends over the past several years (2003–2009) including high levels of satisfaction among graduating pediatric residents, increasing ease in obtaining postresidency positions, and a modest decline in interest in primary care practice. (Read the full article)




train

Pediatric Residency Training Director Tobacco Survey II

A 2001 survey of pediatric residency training directors indicated that few programs prepared residents to intervene on tobacco. A decade later, it is not known whether programs are doing more to prepare residents to intervene effectively with patients and parents.

Despite the need for pediatricians to play a leadership role in tobacco prevention and control, most pediatric residency training programs focus more on health effects of tobacco use and smoke exposure than on how to intervene with patients and parents. (Read the full article)




train

Newborn Mortality and Fresh Stillbirth Rates in Tanzania After Helping Babies Breathe Training

Birth asphyxia, or failure to initiate or sustain spontaneous breathing at birth, contributes to ~27% to 30% of neonatal deaths in resource-limited countries, including Tanzania. Without change, these countries will fail to meet Millennium Development Goal 4 targets by 2015.

The Helping Babies Breathe program was implemented in 8 hospitals in Tanzania in 2009. It has been associated with a sustained 47% reduction in early neonatal mortality within 24 hours and a 24% reduction in fresh stillbirths after 2 years. (Read the full article)




train

Level of Trainee and Tracheal Intubation Outcomes

Provider training level is associated with lower rates of successful tracheal intubation in selected neonatal settings. However, little is known about the association of training level with tracheal intubation success and adverse events in the PICU.

Our results demonstrate the association of training level on the first attempt and overall success rate as well as the incidence of adverse tracheal intubation–associated events in a large-scale, prospective assessment across 15 academic PICUs. (Read the full article)




train

Working Memory Training Improves Cognitive Function in VLBW Preschoolers

Preterm born children have cognitive problems that include deficits in working memory. Computer-based working memory training has been reported to improve cognitive function in children.

A computer-based working memory training program designed for preschoolers seems effective in very low birth weight children, not only on working memory tasks, but also by having a generalizing effect regarding memory and learning. (Read the full article)




train

Interns' Success With Clinical Procedures in Infants After Simulation Training

Pediatric training programs use simulation for procedural skills training. Research demonstrates student satisfaction with simulation training, improved confidence, and improved skills when retested on a simulator. Few studies, however, have investigated the clinical impact of simulation education.

This is the first multicenter, randomized trial to evaluate the impact of simulation-based mastery learning on clinical procedural performance in pediatrics. A single simulation-based training session was not sufficient to improve interns’ clinical procedural performance. (Read the full article)




train

Strength Training and Physical Activity in Boys: a Randomized Trial

Levels of daily physical activity in children are decreasing worldwide. This implies risk factors for cardiovascular and metabolic diseases.

Strength training makes children not only stronger but significantly increases their daily spontaneous physical activity outside the training intervention. (Read the full article)




train

Disparities in Age-Appropriate Child Passenger Restraint Use Among Children Aged 1 to 12 Years

Age-appropriate child safety seat use in the United States is suboptimal, particularly among children older than 1 year. Minority children have higher rates of inappropriate child safety seat use based on observational studies. Explanations for observed differences include socioeconomic factors.

White parents reported greater use of age-appropriate child safety seats for 1- to 7-year-old children than nonwhite parents. Race remained a significant predictor of age-appropriate restraint use after adjusting for parental education, family income, and information sources. (Read the full article)




train

In-School Neurofeedback Training for ADHD: Sustained Improvements From a Randomized Control Trial

An estimated 9.5% of children are diagnosed with attention-deficit/hyperactivity disorder (ADHD), which affects academic and social outcomes. We previously found significant improvements in ADHD symptoms immediately after neurofeedback training at school.

This randomized controlled trial included a large sample of elementary school students with ADHD who received in-school computer attention training with neurofeedback or cognitive training. Students who received neurofeedback were reported to have fewer ADHD symptoms 6 months after the intervention. (Read the full article)




train

Five-Year Follow-up of Community Pediatrics Training Initiative

Compared with their peers, pediatric residents who report exposure to community settings anticipate greater future community involvement at the end of training. The impact of community pediatrics training on actual future community involvement is not known.

Pediatricians exposed to enhanced community pediatrics training during residency report greater participation in community activities and greater related skills than their peers nationally. (Read the full article)




train

An Innovative Nonanimal Simulation Trainer for Chest Tube Insertion in Neonates

Practitioners caring for critically ill infants need to acquire competence in insertion of chest tubes for pneumothorax. Ethical and logistic concerns inhibit the use of animals, and there are no realistic simulation models available for neonatal chest tube insertion training.

An inexpensive, nonanimal chest tube insertion model can be easily constructed and used effectively to train interns and residents to improve their knowledge, clinical skills, and comfort levels to perform the chest tube insertion procedure in infants. (Read the full article)




train

In Situ Simulation Training for Neonatal Resuscitation: An RCT

High-fidelity simulation improves individual skills in neonatal resuscitation. Usually, training is performed in a simulation center. Little is known about the impact of in situ training on overall team performance.

In situ high-fidelity simulation training of 80% of a maternity unit’s staff significantly improved overall team performance in neonatal resuscitation (technical skills and teamwork). Fewer hazardous events occurred, and the delay in improving the heart rate was shorter. (Read the full article)




train

Diversity and Inclusion Training in Pediatric Departments

The diversifying US population has led to the examination of workforce diversity and training. National data on diversity, inclusion, and cultural competency training have been previously collected but have been assessed only at the macro level of medicine.

This study assesses workforce diversity, inclusion, and cultural competency training in departments of pediatrics across the country and provides the first assessment of departmental efforts to improve diversity and inclusion and provide cultural competency training to trainees and faculty. (Read the full article)




train

Predicting Neonatal Intubation Competency in Trainees

Pediatric residents may not be achieving competency in neonatal intubation. Opportunities for intubation during residency are decreasing. A precise definition of competency during training is lacking.

Bayesian statistics may be used to describe neonatal intubation competency in residents. At least 4 successful intubations are needed to achieve competency. The first 2 intubation opportunities appear to predict how many intubation opportunities are ultimately needed to achieve competency. (Read the full article)