learn

Where They Are: The Nation's Small But Growing Population of Black English-Learners

In five northern U.S. states, Black students make up more than a fifth of ELL enrollment.




learn

Schools Lean on Staff Who Speak Students' Language to Keep English-Learners Connected

The rocky shift to remote learning has exacerbated inequities for the nation's 5 million English-learners. An army of multilingual liaisons works around the clock to plug widening gaps.




learn

Educators Who Ran for Office Share Their Lessons Learned (Video)

Watch a discussion between three educators who ran for their state legislatures about their experiences on the campaign trail.




learn

How to Teach Math to Students With Disabilities, English Language Learners

Experts recommend emphasizing language skills, avoiding assumptions about ability based on broad student labels, and focusing on students’ strengths rather than their weaknesses.




learn

Curbing the Spread of COVID-19, Anxiety, and Learning Loss for Youth Behind Bars

Coronavirus is spreading rapidly in pre- and post-trial correctional facilities across the United States, and the challenges of social distancing for students in regular districts are all massively compounded for students behind bars.




learn

Rapid Deployment of Remote Learning: Lessons From 4 Districts

Chief technology officers are facing an unprecedented test of digital preparedness due to the coronavirus pandemic, struggling with shortfalls of available learning devices and huge Wi-Fi access challenges.




learn

What Teachers Tell Us About the Connections Between Standards, Curriculum, and Professional Learning

A statewide survey of educators in Tennessee provides critical insights into the connections between standards, curriculum, professional development, and, ultimately, student success.




learn

Dual-Language Learning: How Schools Can Empower Students and Parents

In this fifth installment on the growth in dual-language learning, the executive director of the BUENO Center for Multicultural Education at the University of Colorado, Boulder, says districts should focus on what students and their families need, not what educators want.




learn

What This Superintendent Learns From Teaching a High School Course

The leader of a Montana school district spends up to two hours each day grading assignments from students in an online English credit recovery program.




learn

The Year in Personalized Learning: 2017 in Review

The Chan Zuckerberg Initiative, states like Vermont and Rhode Island, and companies such as AltSchool all generated headlines about personalized learning in 2017.




learn

Rhode Island Announces Statewide K-12 Personalized Learning Push

The Chan-Zuckerberg Initiative and other funders are supporting Rhode Island's efforts to define and research personalized learning in traditional public schools.




learn

Rhode Island to Promote Blended Learning Through Nonprofit Partnership

The Rhode Island Department of Education and the nonprofit Learning Accelerator are teaming up to develop a strategic plan and a communications strategy aimed at expanding blended learning.




learn

States Must Change, Too, for Blended Learning

Lisa Duty of The Learning Accelerator, a Rhode Island Department of Education (RIDE) and Highlander Institute funding partner, outlines Rhode Island's commitment to a blended learning future. She describes how the state is developing its new five-year strategic plan that's engaging RIDE's Ambas




learn

Dual-Language Learning: How Schools Can Invest in Cultural and Linguistic Diversity

In this fourth installment on the growth in dual-language learning, the director of dual-language education in Portland, Ore., says schools must have a clear reason for offering dual-language instruction.




learn

Reimagining Professional Learning in Delaware

Stephanie Hirsh recently visited several schools in Delaware to see first-hand the impact of the state's redesigned professional learning system.




learn

Dual-Language Learning: Making Teacher and Principal Training a Priority

In this seventh installment on the growth in dual-language learning, two experts from Delaware explore how state education leaders can build capacity to support both students and educators.




learn

Knowledge sharing for the development of learning resources : theory, method, process and application for schools, communities and the workplace : a UNESCO-PNIEVE resource / by John E. Harrington, Professor Emeritus.

The Knowledge Sharing for the Development of Learning Resources tutorial offers a professional step forward: a learning experience that affirms well-founded leadership while ensuring that participants in the development of learning resources recognize they are contributing to an exceptional achievement.




learn

Building confidence in enrolling learners with disability for providers of education and training / ACPET, NDCO.




learn

What Remote Learning Looks Like During the Coronavirus Crisis

We asked parents, students, and educators to share what their home learning environments look like as nearly all schools are shut down for extended periods because of the coronavirus pandemic.




learn

Designing the John B Fairfax Learning Centre

The John B Fairfax Learning Centre is officially launched and we look forward to welcoming visitors to this fabulous new




learn

Learning together in term two 

In the most extraordinary circumstances teachers have once again demonstrated their professionalism, skill, flexibility




learn

There's Pushback to Social-Emotional Learning. Here's What Happened in One State

When Idaho education leaders pitched social-emotional learning training for teachers, some state lawmakers compared the plan to dystopian behavior control, and some walked out of the meeting.




learn

What Teachers Can Learn from Iowa's Efforts to Engage Teen Caucusgoers

A new generation of Iowans is preparing to caucus for the first time. Here's how their teachers are getting them ready, and what that says about civics education in 2020.




learn

Largest Iowa school district could extend distance learning




learn

How Weather Forced a Minn. District to Establish E-Learning Options On the Fly

The director of teaching and learning for a Minnesota district talks about putting e-learning days into action under difficult circumstances.




learn

Social and Emotional Learning in Vermont

In the Green Mountain State, education leaders discuss their focus on the whole child.




learn

New Breed of After-School Programs Embrace English-Learners

A handful of districts and other groups are reshaping the after-school space to provide a wide range of social and linguistic supports for newcomer students.




learn

Cupid learning to read the letters of the alphabet. Engraving after A. Allegri, il Correggio.

[London] (at the Historic Gallery, 87 Pall Mall) : Pub.d by Mr Stone.




learn

What Principals Learn From Roughing It in the Woods

In three days of rock climbing, orienteering, and other challenging outdoor experiences, principals get to examine their own—and others’—strengths and weaknesses as leaders.




learn

Learning factors in substance abuse / editor, Barbara A. Ray.

Rockville, Maryland : National Institute on Drug Abuse, 1988.




learn

Gaussian field on the symmetric group: Prediction and learning

François Bachoc, Baptiste Broto, Fabrice Gamboa, Jean-Michel Loubes.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 503--546.

Abstract:
In the framework of the supervised learning of a real function defined on an abstract space $\mathcal{X}$, Gaussian processes are widely used. The Euclidean case for $\mathcal{X}$ is well known and has been widely studied. In this paper, we explore the less classical case where $\mathcal{X}$ is the non-commutative finite group of permutations (namely the so-called symmetric group $S_{N}$). We provide an application to Gaussian process based optimization of Latin Hypercube Designs. We also extend our results to the case of partial rankings.
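The paper's covariance constructions on $S_{N}$ are its own; as a hedged illustration of the basic ingredient, here is the Mallows (Kendall-tau) kernel, one standard positive-definite kernel on permutations. The bandwidth `lam` and the specific kernel choice are assumptions for illustration, not the paper's method:

```python
from itertools import combinations
from math import exp

def kendall_tau_distance(sigma, tau):
    """Number of item pairs ordered differently by the two permutations."""
    n = len(sigma)
    pos_s = {v: i for i, v in enumerate(sigma)}  # item -> position under sigma
    pos_t = {v: i for i, v in enumerate(tau)}    # item -> position under tau
    return sum(
        1
        for a, b in combinations(range(n), 2)
        if (pos_s[a] - pos_s[b]) * (pos_t[a] - pos_t[b]) < 0
    )

def mallows_kernel(sigma, tau, lam=0.5):
    """Mallows kernel on the symmetric group: exp(-lam * Kendall distance)."""
    return exp(-lam * kendall_tau_distance(sigma, tau))
```

With such a kernel in hand, standard Gaussian-process regression machinery applies unchanged, since only the Gram matrix of the training permutations is needed.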




learn

A Statistical Learning Approach to Modal Regression

This paper studies the nonparametric modal regression problem systematically from a statistical learning viewpoint. Originally motivated by pursuing a theoretical understanding of maximum correntropy criterion-based regression (MCCR), our study reveals that MCCR with a tending-to-zero scale parameter is essentially modal regression. We show that the nonparametric modal regression problem can be approached via classical empirical risk minimization. Some efforts are then made to develop a framework for analyzing and implementing modal regression. For instance, the modal regression function is described, the modal regression risk is defined explicitly and its Bayes rule is characterized; for the sake of computational tractability, the surrogate modal regression risk, which is termed the generalization risk in our study, is introduced. On the theoretical side, the excess modal regression risk, the excess generalization risk, the function estimation error, and the relations among these three quantities are studied rigorously. It turns out that under mild conditions, function estimation consistency and convergence may be pursued in modal regression as in vanilla regression protocols such as mean regression, median regression, and quantile regression. On the practical side, the implementation issues of modal regression, including the computational algorithm and the selection of the tuning parameters, are discussed. Numerical validations on modal regression are also conducted to verify our findings.
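Modal regression predicts the conditional mode rather than the conditional mean. A minimal sketch of the underlying idea, locating the mode of a skewed sample via a Gaussian kernel density estimate and a grid search (the bandwidth, grid, and data are illustrative assumptions, not the paper's estimator):

```python
from math import exp

def kde(y_samples, y, bandwidth=0.3):
    """Gaussian kernel density estimate at point y."""
    return sum(exp(-0.5 * ((y - yi) / bandwidth) ** 2) for yi in y_samples) / (
        len(y_samples) * bandwidth
    )

def mode_estimate(y_samples, grid):
    """Empirical mode: the grid point maximizing the KDE."""
    return max(grid, key=lambda y: kde(y_samples, y))

# Skewed sample: most mass near 2.0, a couple of outliers near -2.0.
sample = [1.9, 2.0, 2.1, 2.0, 1.95, 2.05, -2.0, -1.9]
grid = [i / 10 for i in range(-30, 31)]
mode = mode_estimate(sample, grid)
```

On this sample the mode sits near 2.0 while the mean is pulled down to about 1.0 by the outliers, which is exactly the robustness property that motivates modal over mean regression.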




learn

Perturbation Bounds for Procrustes, Classical Scaling, and Trilateration, with Applications to Manifold Learning

One of the common tasks in unsupervised learning is dimensionality reduction, where the goal is to find meaningful low-dimensional structures hidden in high-dimensional data. Sometimes referred to as manifold learning, this problem is closely related to the problem of localization, which aims at embedding a weighted graph into a low-dimensional Euclidean space. Several methods have been proposed for localization, and also for manifold learning. Nonetheless, the robustness properties of most of them are little understood. In this paper, we obtain perturbation bounds for classical scaling and trilateration, which are then applied to derive performance bounds for Isomap, Landmark Isomap, and Maximum Variance Unfolding. A new perturbation bound for Procrustes analysis plays a key role.
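Classical scaling (classical MDS), one of the procedures the bounds cover, can be sketched in a few lines of NumPy: double-center the squared-distance matrix and take the top eigenpairs. For noiseless Euclidean distances the embedding reproduces the distances exactly; this is a sketch of the baseline procedure, not the paper's perturbation analysis:

```python
import numpy as np

def classical_scaling(D, dim=2):
    """Classical MDS: embed n points given their pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                   # eigenvalues in ascending order
    w, V = w[::-1][:dim], V[:, ::-1][:, :dim]  # keep the top `dim` eigenpairs
    return V * np.sqrt(np.maximum(w, 0.0))     # clip tiny negatives from roundoff

# Four collinear points; their distances are recovered exactly.
X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0], [6.0, 0.0]])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_scaling(D, dim=2)
D_rec = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
```

The perturbation question the paper studies is then: how far can `Y` drift when `D` is replaced by a noisy distance matrix?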




learn

A Unified Framework for Structured Graph Learning via Spectral Constraints

Graph learning from data is a canonical problem that has received substantial attention in the literature. Learning a structured graph is essential for interpretability and identification of the relationships among data. In general, learning a graph with a specific structure is an NP-hard combinatorial problem, and thus designing a general tractable algorithm is challenging. Some useful structured graphs include connected, sparse, multi-component, bipartite, and regular graphs. In this paper, we introduce a unified framework for structured graph learning that combines the Gaussian graphical model and spectral graph theory. We propose to convert combinatorial structural constraints into spectral constraints on graph matrices and develop an optimization framework based on block majorization-minimization to solve the structured graph learning problem. The proposed algorithms are provably convergent and practically amenable for a number of graph-based applications such as data clustering. Extensive numerical experiments with both synthetic and real data sets illustrate the effectiveness of the proposed algorithms. An open source R package containing the code for all the experiments is available at https://CRAN.R-project.org/package=spectralGraphTopology.
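The spectral reformulation rests on a classical fact: the multiplicity of the zero eigenvalue of a graph Laplacian equals the number of connected components, so a "k-component" structural constraint becomes a constraint on the Laplacian's spectrum. A small NumPy check of that fact (the example graph is an illustrative assumption):

```python
import numpy as np

# Adjacency matrix of a graph with two connected components:
# a triangle {0, 1, 2} and an edge {3, 4}.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian
eigvals = np.linalg.eigvalsh(L)
k = int(np.sum(eigvals < 1e-9))         # multiplicity of eigenvalue 0
```

Here `k` comes out as 2, one zero eigenvalue per component; the paper's framework enforces such spectral patterns while fitting the graph weights to data.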




learn

GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing

We present GluonCV and GluonNLP, the deep learning toolkits for computer vision and natural language processing based on Apache MXNet (incubating). These toolkits provide state-of-the-art pre-trained models, training scripts, and training logs, to facilitate rapid prototyping and promote reproducible research. We also provide modular APIs with flexible building blocks to enable efficient customization. Leveraging the MXNet ecosystem, the deep learning models in GluonCV and GluonNLP can be deployed onto a variety of platforms with different programming languages. The Apache 2.0 license has been adopted by GluonCV and GluonNLP to allow for software distribution, modification, and usage.




learn

On the consistency of graph-based Bayesian semi-supervised learning and the scalability of sampling algorithms

This paper considers a Bayesian approach to graph-based semi-supervised learning. We show that if the graph parameters are suitably scaled, the graph-posteriors converge to a continuum limit as the size of the unlabeled data set grows. This consistency result has profound algorithmic implications: we prove that when consistency holds, carefully designed Markov chain Monte Carlo algorithms have a uniform spectral gap, independent of the number of unlabeled inputs. Numerical experiments illustrate and complement the theory.
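The graph-based SSL setup being analyzed can be illustrated with the simplest harmonic (label-propagation) scheme: clamp the labeled nodes and repeatedly average over neighbors until the unlabeled values converge. A toy sketch, where the graph, labels, and iteration count are assumptions; the paper studies Bayesian posteriors over such functions, not this point estimate:

```python
# Edges of a small path graph; nodes 0 and 5 are labeled, the rest are not.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
n = 6
labels = {0: 1.0, 5: -1.0}

neighbors = {i: [] for i in range(n)}
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)

f = [labels.get(i, 0.0) for i in range(n)]
for _ in range(200):                       # iterate toward the harmonic solution
    for i in range(n):
        if i not in labels:                # labeled nodes stay clamped
            f[i] = sum(f[j] for j in neighbors[i]) / len(neighbors[i])

pred = [1 if fi > 0 else -1 for fi in f]
```

On the path graph the harmonic solution interpolates linearly between the two labels, so nodes nearer node 0 are classified +1 and nodes nearer node 5 are classified -1.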




learn

Learning with Fenchel-Young losses

Over the past decades, numerous loss functions have been proposed for a variety of supervised learning tasks, including regression, classification, ranking, and, more generally, structured prediction. Understanding the core principles and theoretical properties underpinning these losses is key to choosing the right loss for the right problem, as well as to creating new losses that combine their strengths. In this paper, we introduce Fenchel-Young losses, a generic way to construct a convex loss function for a regularized prediction function. We provide an in-depth study of their properties in a very broad setting, covering all the aforementioned supervised learning tasks, and revealing new connections between sparsity, generalized entropies, and separation margins. We show that Fenchel-Young losses unify many well-known loss functions and make it easy to create useful new ones. Finally, we derive efficient predictive and training algorithms, making Fenchel-Young losses appealing both in theory and practice.
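One concrete instance of the unification: the Fenchel-Young loss generated by the Shannon negentropy recovers the multiclass logistic loss. With Omega* = logsumexp and Omega(e_y) = 0, the loss L(theta; e_y) = logsumexp(theta) - theta_y. A minimal numeric check of that identity (pure Python, an illustration rather than the paper's general construction):

```python
from math import exp, log

def logsumexp(scores):
    """Numerically stable log(sum(exp(s)))."""
    m = max(scores)
    return m + log(sum(exp(s - m) for s in scores))

def fy_loss_shannon(scores, y):
    """Fenchel-Young loss generated by Shannon negentropy:
    L(theta; e_y) = Omega*(theta) + Omega(e_y) - <theta, e_y>
                  = logsumexp(theta) - theta_y  (since Omega(e_y) = 0)."""
    return logsumexp(scores) - scores[y]

def cross_entropy(scores, y):
    """Standard multiclass logistic (softmax cross-entropy) loss."""
    z = sum(exp(s) for s in scores)
    return -log(exp(scores[y]) / z)

scores, y = [2.0, 0.5, -1.0], 0
gap = abs(fy_loss_shannon(scores, y) - cross_entropy(scores, y))
```

Swapping in a different regularizer Omega (e.g. squared 2-norm, giving sparsemax) yields a different loss from the same template, which is the appeal of the framework.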




learn

Learning Linear Non-Gaussian Causal Models in the Presence of Latent Variables

We consider the problem of learning causal models from observational data generated by linear non-Gaussian acyclic causal models with latent variables. Without considering the effect of latent variables, the inferred causal relationships among the observed variables are often wrong. Under the faithfulness assumption, we propose a method to check whether there exists a causal path between any two observed variables. From this information, we can obtain the causal order among the observed variables. The next question is whether the causal effects can be uniquely identified as well. We show that causal effects among observed variables cannot be identified uniquely under mere assumptions of faithfulness and non-Gaussianity of exogenous noises. However, we are able to propose an efficient method that identifies the set of all possible causal effects that are compatible with the observational data. We present additional structural conditions on the causal graph under which causal effects among observed variables can be determined uniquely. Furthermore, we provide necessary and sufficient graphical conditions for unique identification of the number of variables in the system. Experiments on synthetic data and real-world data show the effectiveness of our proposed algorithm for learning causal models.




learn

Ensemble Learning for Relational Data

We present a theoretical analysis framework for relational ensemble models. We show that ensembles of collective classifiers can improve predictions for graph data by reducing errors due to variance in both learning and inference. In addition, we propose a relational ensemble framework that combines a relational ensemble learning approach with a relational ensemble inference approach for collective classification. The proposed ensemble techniques are applicable for both single and multiple graph settings. Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed framework. Finally, our experimental results support the theoretical analysis and confirm that ensemble algorithms that explicitly focus on both learning and inference processes, and aim at reducing errors associated with both, are the best performers.




learn

Learning Causal Networks via Additive Faithfulness

In this paper we introduce a statistical model, called additively faithful directed acyclic graph (AFDAG), for causal learning from observational data. Our approach is based on additive conditional independence (ACI), a recently proposed three-way statistical relation that shares many similarities with conditional independence but without resorting to multi-dimensional kernels. This distinct feature strikes a balance between a parametric model and a fully nonparametric model, which makes the proposed model attractive for handling large networks. We develop an estimator for AFDAG based on a linear operator that characterizes ACI, and establish the consistency and convergence rates of this estimator, as well as the uniform consistency of the estimated DAG. Moreover, we introduce a modified PC-algorithm to implement the estimating procedure efficiently, so that its complexity is determined by the level of sparseness rather than the dimension of the network. Through simulation studies we show that our method outperforms existing methods when commonly assumed conditions such as Gaussian or Gaussian copula distributions do not hold. Finally, the usefulness of AFDAG formulation is demonstrated through an application to a proteomics data set.




learn

Expected Policy Gradients for Reinforcement Learning

We propose expected policy gradients (EPG), which unify stochastic policy gradients (SPG) and deterministic policy gradients (DPG) for reinforcement learning. Inspired by expected sarsa, EPG integrates (or sums) across actions when estimating the gradient, instead of relying only on the action in the sampled trajectory. For continuous action spaces, we first derive a practical result for Gaussian policies and quadratic critics and then extend it to a universal analytical method, covering a broad class of actors and critics, including Gaussian, exponential families, and policies with bounded support. For Gaussian policies, we introduce an exploration method that uses covariance proportional to the matrix exponential of the scaled Hessian of the critic with respect to the actions. For discrete action spaces, we derive a variant of EPG based on softmax policies. We also establish a new general policy gradient theorem, of which the stochastic and deterministic policy gradient theorems are special cases. Furthermore, we prove that EPG reduces the variance of the gradient estimates without requiring deterministic policies and with little computational overhead. Finally, we provide an extensive experimental evaluation of EPG and show that it outperforms existing approaches on multiple challenging control domains.
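For the discrete/softmax case, the EPG idea is easy to sketch: sum Q-weighted policy gradients over all actions instead of relying on one sampled action, which for softmax logits gives the closed form grad_b = pi(b) * (Q(b) - V) with zero sampling variance. A toy illustration (the logits and Q-values are assumptions; the paper's continuous-action results are far more general):

```python
from math import exp

def softmax(theta):
    z = sum(exp(t) for t in theta)
    return [exp(t) / z for t in theta]

def epg_gradient(theta, Q):
    """Expected policy gradient for a softmax policy over discrete actions.

    Integrates (here: sums) over ALL actions rather than one sample:
        grad_b = sum_a Q(a) * d pi(a)/d theta_b = pi(b) * (Q(b) - V),
    where V = sum_a pi(a) * Q(a) is the policy's expected value.
    """
    pi = softmax(theta)
    V = sum(p * q for p, q in zip(pi, Q))
    return [p * (q - V) for p, q in zip(pi, Q)]

theta = [0.0, 1.0, -0.5]
Q = [1.0, 2.0, 0.0]
g = epg_gradient(theta, Q)   # deterministic: no sampling variance at all
```

A single-sample stochastic policy gradient estimate has the same expectation but nonzero variance, which is the gap EPG is designed to close.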




learn

Unique Sharp Local Minimum in L1-minimization Complete Dictionary Learning

We study the problem of globally recovering a dictionary from a set of signals via $\ell_1$-minimization. We assume that the signals are generated as i.i.d. random linear combinations of the $K$ atoms from a complete reference dictionary $D^* \in \mathbb{R}^{K \times K}$, where the linear combination coefficients are from either a Bernoulli-type model or an exact sparse model. First, we obtain a necessary and sufficient norm condition for the reference dictionary $D^*$ to be a sharp local minimum of the expected $\ell_1$ objective function. Our result substantially extends that of Wu and Yu (2015) and allows the combination coefficient to be non-negative. Secondly, we obtain an explicit bound on the region within which the objective value of the reference dictionary is minimal. Thirdly, we show that the reference dictionary is the unique sharp local minimum, thus establishing the first known global property of $\ell_1$-minimization dictionary learning. Motivated by the theoretical results, we introduce a perturbation-based test to determine whether a dictionary is a sharp local minimum of the objective function. In addition, we also propose a new dictionary learning algorithm based on Block Coordinate Descent, called DL-BCD, which is guaranteed to decrease the objective function monotonically. Simulation studies show that DL-BCD has competitive performance in terms of recovery rate compared to other state-of-the-art dictionary learning algorithms when the reference dictionary is generated from random Gaussian matrices.




learn

Representation Learning for Dynamic Graphs: A Survey

Graphs arise naturally in many real-world applications including social networks, recommender systems, ontologies, biology, and computational finance. Traditionally, machine learning models for graphs have been mostly designed for static graphs. However, many applications involve evolving graphs. This introduces important challenges for learning and inference since nodes, attributes, and edges change over time. In this survey, we review the recent advances in representation learning for dynamic graphs, including dynamic knowledge graphs. We describe existing models from an encoder-decoder perspective, categorize these encoders and decoders based on the techniques they employ, and analyze the approaches in each category. We also review several prominent applications and widely used datasets and highlight directions for future research.




learn

GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning

When data is distributed across multiple servers, lowering the communication cost between the servers (or workers) while solving the distributed learning problem is an important challenge and is the focus of this paper. In particular, we propose a fast and communication-efficient decentralized framework to solve the distributed machine learning (DML) problem. The proposed algorithm, Group Alternating Direction Method of Multipliers (GADMM), is based on the Alternating Direction Method of Multipliers (ADMM) framework. The key novelty in GADMM is that it solves the problem in a decentralized topology where at most half of the workers are competing for the limited communication resources at any given time. Moreover, each worker exchanges the locally trained model only with two neighboring workers, thereby training a global model with a lower amount of communication overhead in each exchange. We prove that GADMM converges to the optimal solution for convex loss functions, and we show numerically that it converges faster and is more communication-efficient than state-of-the-art communication-efficient algorithms such as the Lazily Aggregated Gradient (LAG) and dual averaging, in linear and logistic regression tasks on synthetic and real datasets. Furthermore, we propose Dynamic GADMM (D-GADMM), a variant of GADMM, and prove its convergence under the time-varying network topology of the workers.