lear Dual-Language Learning: How Schools Can Empower Students and Parents By feedproxy.google.com Published On :: Thu, 20 Sep 2018 00:00:00 +0000 In this fifth installment on the growth in dual-language learning, the executive director of the BUENO Center for Multicultural Education at the University of Colorado, Boulder, says districts should focus on what students and their families need, not what educators want. Full Article Colorado
lear How to Teach Math to Students With Disabilities, English Language Learners By feedproxy.google.com Published On :: 2020-05-05T16:26:10-04:00 Experts recommend emphasizing language skills, avoiding assumptions about ability based on broad student labels, and focusing on students’ strengths rather than their weaknesses. Full Article Education
lear What This Superintendent Learns From Teaching a High School Course By feedproxy.google.com Published On :: Tue, 25 Feb 2020 00:00:00 +0000 The leader of a Montana school district spends up to two hours each day grading assignments from students in an online English credit recovery program. Full Article Montana
lear The Year in Personalized Learning: 2017 in Review By feedproxy.google.com Published On :: Thu, 21 Dec 2017 00:00:00 +0000 The Chan Zuckerberg Initiative, states like Vermont and Rhode Island, and companies such as AltSchool all generated headlines about personalized learning in 2017. Full Article Rhode_Island
lear Rhode Island Announces Statewide K-12 Personalized Learning Push By feedproxy.google.com Published On :: Wed, 22 Feb 2017 00:00:00 +0000 The Chan Zuckerberg Initiative and other funders are supporting Rhode Island's efforts to define and research personalized learning in traditional public schools. Full Article Rhode_Island
lear Rhode Island to Promote Blended Learning Through Nonprofit Partnership By feedproxy.google.com Published On :: Fri, 08 Aug 2014 00:00:00 +0000 The Rhode Island Department of Education and the nonprofit Learning Accelerator are teaming to develop a strategic plan and a communications strategy aimed at expanding blended learning. Full Article Rhode_Island
lear States Must Change, Too, for Blended Learning By feedproxy.google.com Published On :: Tue, 10 Mar 2015 00:00:00 +0000 Lisa Duty of The Learning Accelerator, a funding partner of the Rhode Island Department of Education (RIDE) and the Highlander Institute, outlines Rhode Island's commitment to a blended learning future. She describes how the state is developing its new five-year strategic plan that's engaging RIDE's Ambas… Full Article Rhode_Island
lear Dual-Language Learning: How Schools Can Invest in Cultural and Linguistic Diversity By feedproxy.google.com Published On :: Wed, 19 Sep 2018 00:00:00 +0000 In this fourth installment on the growth in dual-language learning, the director of dual-language education in Portland, Ore., says schools must have a clear reason for why they are offering dual-language instruction. Full Article Oregon
lear Rapid Deployment of Remote Learning: Lessons From 4 Districts By feedproxy.google.com Published On :: Thu, 19 Mar 2020 00:00:00 +0000 Chief technology officers are facing an unprecedented test of digital preparedness due to the coronavirus pandemic, struggling with shortfalls of available learning devices and huge Wi-Fi access challenges. Full Article Oregon
lear Reimagining Professional Learning in Delaware By feedproxy.google.com Published On :: Thu, 25 May 2017 00:00:00 +0000 Stephanie Hirsh recently visited several schools in Delaware to see first-hand the impact of the state's redesigned professional learning system. Full Article Delaware
lear Dual-Language Learning: Making Teacher and Principal Training a Priority By feedproxy.google.com Published On :: Mon, 24 Sep 2018 00:00:00 +0000 In this seventh installment on the growth in dual-language learning, two experts from Delaware explore how state education leaders can build capacity to support both students and educators. Full Article Delaware
lear Knowledge sharing for the development of learning resources : theory, method, process and application for schools, communities and the workplace : a UNESCO-PNIEVE resource / by John E. Harrington, Professor Emeritus. By www.catalog.slsa.sa.gov.au Published On :: The Knowledge Sharing for the Development of Learning Resources tutorial offers a professional step forward: a learning experience that affirms sound leadership and ensures that participants in the development of learning resources recognize they are contributing to an exceptional achievement. Full Article
lear Building confidence in enrolling learners with disability for providers of education and training / ACPET, NDCO. By www.catalog.slsa.sa.gov.au Published On :: Full Article
lear "That's what she said" : how to draft a clear and effective affidavit in family law / paper presented by Marita Pangallo, Howard Zelling Chambers and Daniel Praolini, Campbell Chambers. By www.catalog.slsa.sa.gov.au Published On :: Full Article
lear Clearing the air : the beginning and the end of air pollution / Tim Smedley. By www.catalog.slsa.sa.gov.au Published On :: Air -- Pollution. Full Article
lear What Remote Learning Looks Like During the Coronavirus Crisis By feedproxy.google.com Published On :: Mon, 23 Mar 2020 17:20:04 +0000 We asked parents, students, and educators to share what their home learning environments look like as nearly all schools are shut down for extended periods because of the coronavirus pandemic. Full Article Photo Gallery Point of View coronavirus COVID-19 remote learning school closure schools Students
lear Designing the John B Fairfax Learning Centre By www.sl.nsw.gov.au Published On :: Thu, 20 Sep 2018 06:35:44 +0000 The John B Fairfax Learning Centre is officially launched and we look forward to welcoming visitors to this fabulous new Full Article
lear Learning together in term two By www.sl.nsw.gov.au Published On :: Thu, 07 May 2020 00:38:53 +0000 In the most extraordinary circumstances teachers have once again demonstrated their professionalism, skill, flexibility Full Article
lear Rapid Deployment of Remote Learning: Lessons From 4 Districts By feedproxy.google.com Published On :: Thu, 19 Mar 2020 00:00:00 +0000 Chief technology officers are facing an unprecedented test of digital preparedness due to the coronavirus pandemic, struggling with shortfalls of available learning devices and huge Wi-Fi access challenges. Full Article Indiana
lear There's Pushback to Social-Emotional Learning. Here's What Happened in One State By feedproxy.google.com Published On :: Thu, 13 Feb 2020 00:00:00 +0000 When Idaho education leaders pitched social-emotional learning training for teachers, some state lawmakers compared the plan to dystopian behavior control. Some of them walked out of the meeting. Full Article Idaho
lear What Teachers Can Learn from Iowa's Efforts to Engage Teen Caucusgoers By feedproxy.google.com Published On :: Wed, 29 Jan 2020 00:00:00 +0000 A new generation of Iowans is preparing to caucus for the first time. Here's how their teachers are preparing them, and what it says about civics education in 2020. Full Article Iowa
lear Largest Iowa school district could extend distance learning By feedproxy.google.com Published On :: Mon, 13 Apr 2020 00:00:00 +0000 Full Article Iowa
lear How Weather Forced a Minn. District to Establish E-Learning Options On the Fly By feedproxy.google.com Published On :: Fri, 17 Jan 2020 00:00:00 +0000 The director of teaching and learning for a Minnesota district talks about putting e-learning days into action under difficult circumstances. Full Article Minnesota
lear Social and Emotional Learning in Vermont By feedproxy.google.com Published On :: Wed, 08 Aug 2018 00:00:00 +0000 In the Green Mountain State, education leaders discuss their focus on the whole child. Full Article Vermont
lear Where They Are: The Nation's Small But Growing Population of Black English-Learners By feedproxy.google.com Published On :: Fri, 17 Apr 2020 00:00:00 +0000 In five northern U.S. states, black students comprise more than a fifth of ELL enrollment. Full Article Vermont
lear New Breed of After-School Programs Embrace English-Learners By feedproxy.google.com Published On :: Tue, 10 Mar 2020 00:00:00 +0000 A handful of districts and other groups are reshaping the after-school space to provide a wide range of social and linguistic supports for newcomer students. Full Article Vermont
lear Schools Lean on Staff Who Speak Students' Language to Keep English-Learners Connected By feedproxy.google.com Published On :: Mon, 27 Apr 2020 00:00:00 +0000 The rocky shift to remote learning has exacerbated inequities for the nation's 5 million English-learners. An army of multilingual liaisons work round the clock to plug widening gaps. Full Article Vermont
lear Cupid learning to read the letters of the alphabet. Engraving after A. Allegri, il Correggio. By feedproxy.google.com Published On :: [London] (at the Historic Gallery, 87 Pall Mall) : Pub.d by Mr Stone. Full Article
lear New Breed of After-School Programs Embrace English-Learners By feedproxy.google.com Published On :: Tue, 10 Mar 2020 00:00:00 +0000 A handful of districts and other groups are reshaping the after-school space to provide a wide range of social and linguistic supports for newcomer students. Full Article Illinois
lear What Principals Learn From Roughing It in the Woods By feedproxy.google.com Published On :: Tue, 29 Oct 2019 00:00:00 +0000 In three days of rock climbing, orienteering, and other challenging outdoor experiences, principals get to examine their own—and others’—strengths and weaknesses as leaders. Full Article Missouri
lear Learning factors in substance abuse / editor, Barbara A. Ray. By search.wellcomelibrary.org Published On :: Rockville, Maryland : National Institute on Drug Abuse, 1988. Full Article
lear Gaussian field on the symmetric group: Prediction and learning By projecteuclid.org Published On :: Tue, 05 May 2020 22:00 EDT François Bachoc, Baptiste Broto, Fabrice Gamboa, Jean-Michel Loubes. Source: Electronic Journal of Statistics, Volume 14, Number 1, 503--546. Abstract: In the framework of the supervised learning of a real function defined on an abstract space $\mathcal{X}$, Gaussian processes are widely used. The Euclidean case for $\mathcal{X}$ is well known and has been widely studied. In this paper, we explore the less classical case where $\mathcal{X}$ is the non-commutative finite group of permutations (namely the so-called symmetric group $S_{N}$). We provide an application to Gaussian process based optimization of Latin Hypercube Designs. We also extend our results to the case of partial rankings. Full Article
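The group-structured setting above can be sketched in a few lines. Below is a minimal, illustrative GP prediction on $S_3$, assuming a Mallows-type kernel $k(p, q) = \exp(-\lambda\, d_K(p, q))$ built from the Kendall-tau distance; the kernel choice, $\lambda$, and the toy target are assumptions for illustration, not the paper's exact construction:

```python
import itertools
import numpy as np

def kendall_tau(p, q):
    """Number of discordant pairs between permutations p and q."""
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if (p[i] - p[j]) * (q[i] - q[j]) < 0)

def mallows_kernel(perms, lam=0.5):
    """Gram matrix of the Mallows-type kernel k(p,q) = exp(-lam * d_K(p,q))."""
    return np.array([[np.exp(-lam * kendall_tau(p, q)) for q in perms]
                     for p in perms])

perms = list(itertools.permutations(range(3)))          # all 6 elements of S_3
f = np.array([kendall_tau(p, perms[0]) for p in perms],  # toy target function
             dtype=float)

train, test = [0, 2, 4], [1, 3, 5]                      # observed / held-out permutations
K = mallows_kernel(perms)
K_tr = K[np.ix_(train, train)] + 1e-6 * np.eye(len(train))  # jitter for stability
post_mean = K[np.ix_(test, train)] @ np.linalg.solve(K_tr, f[train])  # GP posterior mean
```

The posterior mean at the held-out permutations follows the usual Gaussian-process formula; nothing here depends on $\mathcal{X}$ being Euclidean, only on having a positive-definite kernel over permutations.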
lear A Statistical Learning Approach to Modal Regression By Published On :: 2020 This paper studies the nonparametric modal regression problem systematically from a statistical learning viewpoint. Originally motivated by pursuing a theoretical understanding of the maximum correntropy criterion based regression (MCCR), our study reveals that MCCR with a tending-to-zero scale parameter is essentially modal regression. We show that the nonparametric modal regression problem can be approached via the classical empirical risk minimization. Some efforts are then made to develop a framework for analyzing and implementing modal regression. For instance, the modal regression function is described, the modal regression risk is defined explicitly and its Bayes rule is characterized; for the sake of computational tractability, the surrogate modal regression risk, which is termed the generalization risk in our study, is introduced. On the theoretical side, the excess modal regression risk, the excess generalization risk, the function estimation error, and the relations among the above three quantities are studied rigorously. It turns out that under mild conditions, function estimation consistency and convergence may be pursued in modal regression as in vanilla regression protocols such as mean regression, median regression, and quantile regression. On the practical side, the implementation issues of modal regression including the computational algorithm and the selection of the tuning parameters are discussed. Numerical validations on modal regression are also conducted to verify our findings. Full Article
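The idea of regressing toward the conditional mode can be illustrated with a small, self-contained sketch (this is not the paper's MCCR estimator): estimate the mode of $y \mid x$ by a weighted mean-shift ascent on a kernel density in $y$. All bandwidths and the toy data below are chosen only for illustration:

```python
import numpy as np

def conditional_mode(x0, X, Y, hx=0.3, hy=0.3, iters=50):
    """Estimate the mode of y | x = x0 via weighted mean-shift.

    Sample weights come from a Gaussian kernel in x; the y-mode is the
    fixed point of the mean-shift update on the weighted KDE in y.
    """
    w = np.exp(-0.5 * ((X - x0) / hx) ** 2)   # locality weights in x
    y = np.average(Y, weights=w)              # start at the weighted mean
    for _ in range(iters):
        k = w * np.exp(-0.5 * ((Y - y) / hy) ** 2)
        y = np.sum(k * Y) / np.sum(k)         # mean-shift update
    return y

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 2000)
noise = 0.1 * rng.normal(size=2000)
# y | x is bimodal: 80% of the mass follows y = x, 20% sits in an outlier cluster at 2
Y = np.where(rng.random(2000) < 0.8, X + noise, 2.0 + noise)
m = conditional_mode(0.5, X, Y)   # tracks the majority mode near 0.5
```

Unlike mean regression, which here would be pulled to roughly 0.8 by the outlier cluster, the modal estimate stays near the dominant conditional mode.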
lear Perturbation Bounds for Procrustes, Classical Scaling, and Trilateration, with Applications to Manifold Learning By Published On :: 2020 One of the common tasks in unsupervised learning is dimensionality reduction, where the goal is to find meaningful low-dimensional structures hidden in high-dimensional data. Sometimes referred to as manifold learning, this problem is closely related to the problem of localization, which aims at embedding a weighted graph into a low-dimensional Euclidean space. Several methods have been proposed for localization, and also manifold learning. Nonetheless, the robustness property of most of them is little understood. In this paper, we obtain perturbation bounds for classical scaling and trilateration, which are then applied to derive performance bounds for Isomap, Landmark Isomap, and Maximum Variance Unfolding. A new perturbation bound for Procrustes analysis plays a key role. Full Article
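For reference, classical scaling itself is short: double-center the squared-distance matrix, then eigendecompose. A minimal sketch, with toy points as assumed input:

```python
import numpy as np

def classical_scaling(D, dim):
    """Classical MDS: embed points from a pairwise-distance matrix D.

    B = -0.5 * J D^2 J (double centering); the top eigenpairs of B give
    coordinates whose pairwise distances approximate D.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]                    # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_scaling(D, 2)       # recovers pts up to rotation/translation
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

Because the toy points are exactly Euclidean in two dimensions, the embedding reproduces the distance matrix up to a rigid motion; the paper's perturbation bounds address what happens when $D$ is only approximately Euclidean.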
lear A Unified Framework for Structured Graph Learning via Spectral Constraints By Published On :: 2020 Graph learning from data is a canonical problem that has received substantial attention in the literature. Learning a structured graph is essential for interpretability and identification of the relationships among data. In general, learning a graph with a specific structure is an NP-hard combinatorial problem and thus designing a general tractable algorithm is challenging. Some useful structured graphs include connected, sparse, multi-component, bipartite, and regular graphs. In this paper, we introduce a unified framework for structured graph learning that combines Gaussian graphical model and spectral graph theory. We propose to convert combinatorial structural constraints into spectral constraints on graph matrices and develop an optimization framework based on block majorization-minimization to solve structured graph learning problem. The proposed algorithms are provably convergent and practically amenable for a number of graph based applications such as data clustering. Extensive numerical experiments with both synthetic and real data sets illustrate the effectiveness of the proposed algorithms. An open source R package containing the code for all the experiments is available at https://CRAN.R-project.org/package=spectralGraphTopology. Full Article
lear GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing By Published On :: 2020 We present GluonCV and GluonNLP, the deep learning toolkits for computer vision and natural language processing based on Apache MXNet (incubating). These toolkits provide state-of-the-art pre-trained models, training scripts, and training logs, to facilitate rapid prototyping and promote reproducible research. We also provide modular APIs with flexible building blocks to enable efficient customization. Leveraging the MXNet ecosystem, the deep learning models in GluonCV and GluonNLP can be deployed onto a variety of platforms with different programming languages. The Apache 2.0 license has been adopted by GluonCV and GluonNLP to allow for software distribution, modification, and usage. Full Article
lear On the consistency of graph-based Bayesian semi-supervised learning and the scalability of sampling algorithms By Published On :: 2020 This paper considers a Bayesian approach to graph-based semi-supervised learning. We show that if the graph parameters are suitably scaled, the graph-posteriors converge to a continuum limit as the size of the unlabeled data set grows. This consistency result has profound algorithmic implications: we prove that when consistency holds, carefully designed Markov chain Monte Carlo algorithms have a uniform spectral gap, independent of the number of unlabeled inputs. Numerical experiments illustrate and complement the theory. Full Article
lear Learning with Fenchel-Young losses By Published On :: 2020 Over the past decades, numerous loss functions have been proposed for a variety of supervised learning tasks, including regression, classification, ranking, and more generally structured prediction. Understanding the core principles and theoretical properties underpinning these losses is key to choosing the right loss for the right problem, as well as to creating new losses which combine their strengths. In this paper, we introduce Fenchel-Young losses, a generic way to construct a convex loss function for a regularized prediction function. We provide an in-depth study of their properties in a very broad setting, covering all the aforementioned supervised learning tasks, and revealing new connections between sparsity, generalized entropies, and separation margins. We show that Fenchel-Young losses unify many well-known loss functions and allow useful new ones to be created easily. Finally, we derive efficient predictive and training algorithms, making Fenchel-Young losses appealing both in theory and practice. Full Article
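A concrete instance of the construction: with $\Omega$ the negative Shannon entropy on the simplex, the conjugate $\Omega^*$ is log-sum-exp, and the Fenchel-Young loss $L_\Omega(\theta; y) = \Omega^*(\theta) + \Omega(y) - \langle\theta, y\rangle$ reduces to the softmax cross-entropy for one-hot $y$. This is a standard example; the numbers below are illustrative:

```python
import numpy as np

def omega(p):
    """Negative Shannon entropy, restricted to the probability simplex."""
    p = p[p > 0]
    return np.sum(p * np.log(p))

def omega_star(theta):
    """Convex conjugate of omega over the simplex: log-sum-exp."""
    m = theta.max()
    return m + np.log(np.sum(np.exp(theta - m)))

def fenchel_young_loss(theta, y):
    """L(theta; y) = omega*(theta) + omega(y) - <theta, y>."""
    return omega_star(theta) + omega(y) - theta @ y

theta = np.array([2.0, -1.0, 0.5])        # scores
y = np.array([1.0, 0.0, 0.0])             # one-hot target, so omega(y) = 0
fy = fenchel_young_loss(theta, y)
ce = -np.log(np.exp(theta)[0] / np.sum(np.exp(theta)))  # softmax cross-entropy
```

The gradient $\nabla_\theta L = \operatorname{softmax}(\theta) - y$ follows from the conjugacy, which is what makes these losses convenient to train with.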
lear Learning Linear Non-Gaussian Causal Models in the Presence of Latent Variables By Published On :: 2020 We consider the problem of learning causal models from observational data generated by linear non-Gaussian acyclic causal models with latent variables. Without considering the effect of latent variables, the inferred causal relationships among the observed variables are often wrong. Under the faithfulness assumption, we propose a method to check whether there exists a causal path between any two observed variables. From this information, we can obtain the causal order among the observed variables. The next question is whether the causal effects can be uniquely identified as well. We show that causal effects among observed variables cannot be identified uniquely under mere assumptions of faithfulness and non-Gaussianity of exogenous noises. However, we are able to propose an efficient method that identifies the set of all possible causal effects that are compatible with the observational data. We present additional structural conditions on the causal graph under which causal effects among observed variables can be determined uniquely. Furthermore, we provide necessary and sufficient graphical conditions for unique identification of the number of variables in the system. Experiments on synthetic data and real-world data show the effectiveness of our proposed algorithm for learning causal models. Full Article
lear Ensemble Learning for Relational Data By Published On :: 2020 We present a theoretical analysis framework for relational ensemble models. We show that ensembles of collective classifiers can improve predictions for graph data by reducing errors due to variance in both learning and inference. In addition, we propose a relational ensemble framework that combines a relational ensemble learning approach with a relational ensemble inference approach for collective classification. The proposed ensemble techniques are applicable for both single and multiple graph settings. Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed framework. Finally, our experimental results support the theoretical analysis and confirm that ensemble algorithms that explicitly focus on both learning and inference processes and aim at reducing errors associated with both, are the best performers. Full Article
lear Learning Causal Networks via Additive Faithfulness By Published On :: 2020 In this paper we introduce a statistical model, called additively faithful directed acyclic graph (AFDAG), for causal learning from observational data. Our approach is based on additive conditional independence (ACI), a recently proposed three-way statistical relation that shares many similarities with conditional independence but without resorting to multi-dimensional kernels. This distinct feature strikes a balance between a parametric model and a fully nonparametric model, which makes the proposed model attractive for handling large networks. We develop an estimator for AFDAG based on a linear operator that characterizes ACI, and establish the consistency and convergence rates of this estimator, as well as the uniform consistency of the estimated DAG. Moreover, we introduce a modified PC-algorithm to implement the estimating procedure efficiently, so that its complexity is determined by the level of sparseness rather than the dimension of the network. Through simulation studies we show that our method outperforms existing methods when commonly assumed conditions such as Gaussian or Gaussian copula distributions do not hold. Finally, the usefulness of AFDAG formulation is demonstrated through an application to a proteomics data set. Full Article
lear Expected Policy Gradients for Reinforcement Learning By Published On :: 2020 We propose expected policy gradients (EPG), which unify stochastic policy gradients (SPG) and deterministic policy gradients (DPG) for reinforcement learning. Inspired by expected sarsa, EPG integrates (or sums) across actions when estimating the gradient, instead of relying only on the action in the sampled trajectory. For continuous action spaces, we first derive a practical result for Gaussian policies and quadratic critics and then extend it to a universal analytical method, covering a broad class of actors and critics, including Gaussian, exponential families, and policies with bounded support. For Gaussian policies, we introduce an exploration method that uses covariance proportional to the matrix exponential of the scaled Hessian of the critic with respect to the actions. For discrete action spaces, we derive a variant of EPG based on softmax policies. We also establish a new general policy gradient theorem, of which the stochastic and deterministic policy gradient theorems are special cases. Furthermore, we prove that EPG reduces the variance of the gradient estimates without requiring deterministic policies and with little computational overhead. Finally, we provide an extensive experimental evaluation of EPG and show that it outperforms existing approaches on multiple challenging control domains. Full Article
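For the discrete-action softmax variant, the expected gradient has a closed form: with $J(\theta) = \sum_a \pi_\theta(a)\, Q(a)$, one gets $\nabla_{\theta_k} J = \pi(k)\,\bigl(Q(k) - \mathbb{E}_\pi[Q]\bigr)$. A minimal single-state sketch with assumed toy critic values (not the paper's full actor-critic setup), checked against finite differences:

```python
import numpy as np

def softmax(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

def expected_policy_gradient(theta, Q):
    """Exact gradient of J(theta) = sum_a pi(a) Q(a) for a softmax policy.

    Integrating over all actions, rather than using one sampled action,
    gives grad_k = pi(k) * (Q(k) - E_pi[Q]) with no sampling variance.
    """
    pi = softmax(theta)
    return pi * (Q - pi @ Q)

theta = np.array([0.2, -0.5, 1.0])
Q = np.array([1.0, 3.0, -2.0])     # toy critic values, assumed given
g = expected_policy_gradient(theta, Q)

# finite-difference check of the analytic gradient
eps = 1e-6
g_fd = np.array([
    (softmax(theta + eps * np.eye(3)[k]) @ Q
     - softmax(theta - eps * np.eye(3)[k]) @ Q) / (2 * eps)
    for k in range(3)
])
```

Because the sum runs over all actions instead of one sampled action, the estimate carries no action-sampling variance, which is the point of EPG.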
lear Unique Sharp Local Minimum in L1-minimization Complete Dictionary Learning By Published On :: 2020 We study the problem of globally recovering a dictionary from a set of signals via $\ell_1$-minimization. We assume that the signals are generated as i.i.d. random linear combinations of the $K$ atoms from a complete reference dictionary $D^* \in \mathbb{R}^{K \times K}$, where the linear combination coefficients are from either a Bernoulli-type model or an exact sparse model. First, we obtain a necessary and sufficient norm condition for the reference dictionary $D^*$ to be a sharp local minimum of the expected $\ell_1$ objective function. Our result substantially extends that of Wu and Yu (2015) and allows the combination coefficients to be non-negative. Secondly, we obtain an explicit bound on the region within which the objective value of the reference dictionary is minimal. Thirdly, we show that the reference dictionary is the unique sharp local minimum, thus establishing the first known global property of $\ell_1$-minimization dictionary learning. Motivated by the theoretical results, we introduce a perturbation-based test to determine whether a dictionary is a sharp local minimum of the objective function. In addition, we also propose a new dictionary learning algorithm based on Block Coordinate Descent, called DL-BCD, which is guaranteed to decrease the objective function monotonically. Simulation studies show that DL-BCD has competitive performance in terms of recovery rate compared to other state-of-the-art dictionary learning algorithms when the reference dictionary is generated from random Gaussian matrices. Full Article
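The perturbation-based test mentioned at the end can be mimicked empirically. A toy sketch (not the authors' DL-BCD code) with $D^* = I$, Bernoulli-Gaussian coefficients, and the row-normalized $\ell_1$ objective $f(A) = \sum_i \lVert A x_i \rVert_1$ evaluated at the reference inverse dictionary and at random perturbations of it:

```python
import numpy as np

rng = np.random.default_rng(1)
K, n = 5, 4000
# sparse Bernoulli-Gaussian coefficients; reference dictionary D* = identity
Z = rng.normal(size=(K, n)) * (rng.random((K, n)) < 0.3)
X = Z                                            # X = D* Z with D* = I

def objective(A, X):
    """l1 objective sum_i ||A x_i||_1 with rows of A normalized to unit norm."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    return np.abs(A @ X).sum()

f_ref = objective(np.eye(K), X)                  # value at the reference inverse
f_pert = min(objective(np.eye(K) + 0.1 * rng.normal(size=(K, K)), X)
             for _ in range(20))                 # best of 20 random perturbations
```

With enough samples and sufficiently sparse coefficients, every small random perturbation should raise the objective, consistent with the reference dictionary being a sharp local minimum; the sparsity level and perturbation size here are assumptions, not the paper's settings.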
lear Representation Learning for Dynamic Graphs: A Survey By Published On :: 2020 Graphs arise naturally in many real-world applications including social networks, recommender systems, ontologies, biology, and computational finance. Traditionally, machine learning models for graphs have been mostly designed for static graphs. However, many applications involve evolving graphs. This introduces important challenges for learning and inference since nodes, attributes, and edges change over time. In this survey, we review the recent advances in representation learning for dynamic graphs, including dynamic knowledge graphs. We describe existing models from an encoder-decoder perspective, categorize these encoders and decoders based on the techniques they employ, and analyze the approaches in each category. We also review several prominent applications and widely used datasets and highlight directions for future research. Full Article
lear GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning By Published On :: 2020 When the data is distributed across multiple servers, lowering the communication cost between the servers (or workers) while solving the distributed learning problem is an important problem and is the focus of this paper. In particular, we propose a fast, and communication-efficient decentralized framework to solve the distributed machine learning (DML) problem. The proposed algorithm, Group Alternating Direction Method of Multipliers (GADMM) is based on the Alternating Direction Method of Multipliers (ADMM) framework. The key novelty in GADMM is that it solves the problem in a decentralized topology where at most half of the workers are competing for the limited communication resources at any given time. Moreover, each worker exchanges the locally trained model only with two neighboring workers, thereby training a global model with a lower amount of communication overhead in each exchange. We prove that GADMM converges to the optimal solution for convex loss functions, and numerically show that it converges faster and more communication-efficient than the state-of-the-art communication-efficient algorithms such as the Lazily Aggregated Gradient (LAG) and dual averaging, in linear and logistic regression tasks on synthetic and real datasets. Furthermore, we propose Dynamic GADMM (D-GADMM), a variant of GADMM, and prove its convergence under the time-varying network topology of the workers. Full Article
lear Primal and dual model representations in kernel-based learning By projecteuclid.org Published On :: Wed, 25 Aug 2010 10:28 EDT Johan A.K. Suykens, Carlos Alzate, Kristiaan Pelckmans. Source: Statist. Surv., Volume 4, 148--183. Abstract: This paper discusses the role of primal and (Lagrange) dual model representations in problems of supervised and unsupervised learning. The specification of the estimation problem is conceived at the primal level as a constrained optimization problem. The constraints relate to the model which is expressed in terms of the feature map. From the conditions for optimality one jointly finds the optimal model representation and the model estimate. At the dual level the model is expressed in terms of a positive definite kernel function, which is characteristic for a support vector machine methodology. It is discussed how least squares support vector machines are playing a central role as core models across problems of regression, classification, principal component analysis, spectral clustering, canonical correlation analysis, dimensionality reduction and data visualization. Full Article
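The primal/dual machinery is concrete in the least-squares SVM case: at the dual level, training reduces to one linear system in the bias $b$ and dual variables $\alpha$, with the model expressed entirely through a kernel. A minimal regression sketch (RBF kernel; the bandwidth and regularization values are assumptions chosen for the toy data):

```python
import numpy as np

def rbf(A, B, s=0.2):
    """Gaussian RBF Gram matrix between row-wise point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

def lssvm_fit(X, y, gamma=1000.0):
    """Dual LS-SVM regression: solve one linear system for (b, alpha)."""
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = rbf(X, X) + np.eye(n) / gamma    # kernel matrix + ridge term
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                       # bias b, dual variables alpha

def lssvm_predict(X_train, b, alpha, X_new):
    return rbf(X_new, X_train) @ alpha + b

X = np.linspace(0, 1, 30)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
b, alpha = lssvm_fit(X, y)
y_hat = lssvm_predict(X, b, alpha, X)
```

The system solved is the standard LS-SVM KKT system $\begin{pmatrix}0 & \mathbf{1}^\top\\ \mathbf{1} & K + I/\gamma\end{pmatrix}\begin{pmatrix}b\\ \alpha\end{pmatrix} = \begin{pmatrix}0\\ y\end{pmatrix}$; the primal weight vector in feature space never needs to be formed.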
lear Generating Thermal Image Data Samples using 3D Facial Modelling Techniques and Deep Learning Methodologies. (arXiv:2005.01923v2 [cs.CV] UPDATED) By arxiv.org Published On :: Methods for generating synthetic data have become of increasing importance to build large datasets required for Convolution Neural Networks (CNN) based deep learning techniques for a wide range of computer vision applications. In this work, we extend existing methodologies to show how 2D thermal facial data can be mapped to provide 3D facial models. For the proposed research work we have used Tufts datasets for generating 3D varying face poses by using a single frontal face pose. The system works by refining the existing image quality by performing fusion based image preprocessing operations. The refined outputs have better contrast adjustments, decreased noise level and higher exposedness of the dark regions. It makes the facial landmarks and temperature patterns on the human face more discernible and visible when compared to original raw data. Different image quality metrics are used to compare the refined version of images with original images. In the next phase of the proposed study, the refined version of images is used to create 3D facial geometry structures by using Convolution Neural Networks (CNN). The generated outputs are then imported in Blender software to finally extract the 3D thermal facial outputs of both males and females. The same technique is also used on our thermal face data acquired using a prototype thermal camera (developed under the Heliaus EU project) in an indoor lab environment, which is then used for generating synthetic 3D face data along with varying yaw face angles, and lastly a facial depth map is generated. Full Article
lear Deep transfer learning for improving single-EEG arousal detection. (arXiv:2004.05111v2 [cs.CV] UPDATED) By arxiv.org Published On :: Datasets in sleep science present challenges for machine learning algorithms due to differences in recording setups across clinics. We investigate two deep transfer learning strategies for overcoming the channel mismatch problem for cases where two datasets do not contain exactly the same setup leading to degraded performance in single-EEG models. Specifically, we train a baseline model on multivariate polysomnography data and subsequently replace the first two layers to prepare the architecture for single-channel electroencephalography data. Using a fine-tuning strategy, our model yields similar performance to the baseline model (F1=0.682 and F1=0.694, respectively), and was significantly better than a comparable single-channel model. Our results are promising for researchers working with small databases who wish to use deep learning models pre-trained on larger databases. Full Article
lear Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach. (arXiv:2003.02157v2 [physics.soc-ph] UPDATED) By arxiv.org Published On :: In recent years, multi-access edge computing (MEC) is a key enabler for handling the massive expansion of Internet of Things (IoT) applications and services. However, energy consumption of a MEC network depends on volatile tasks that induces risk for energy demand estimations. As an energy supplier, a microgrid can facilitate seamless energy supply. However, the risk associated with energy supply is also increased due to unpredictable energy generation from renewable and non-renewable sources. Especially, the risk of energy shortfall is involved with uncertainties in both energy consumption and generation. In this paper, we study a risk-aware energy scheduling problem for a microgrid-powered MEC network. First, we formulate an optimization problem considering the conditional value-at-risk (CVaR) measurement for both energy consumption and generation, where the objective is to minimize the loss of energy shortfall of the MEC networks and we show this problem is an NP-hard problem. Second, we analyze our formulated problem using a multi-agent stochastic game that ensures the joint policy Nash equilibrium, and show the convergence of the proposed model. Third, we derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks. This method mitigates the curse of dimensionality of the state space and chooses the best policy among the agents for the proposed problem. Finally, the experimental results establish a significant performance gain by considering CVaR for high accuracy energy scheduling of the proposed model than both the single and random agent models. Full Article
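The CVaR measurement at the heart of the formulation is easy to state: for confidence level $\alpha$, it is the expected loss in the worst $(1-\alpha)$ tail. A minimal empirical sketch (the shortfall distribution is an assumed toy stand-in, not the paper's MEC/microgrid model):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional value-at-risk: mean loss in the worst (1 - alpha) tail."""
    var = np.quantile(losses, alpha)    # value-at-risk threshold
    return losses[losses >= var].mean()

rng = np.random.default_rng(0)
# toy energy-shortfall samples (e.g. from Monte Carlo over demand/generation)
shortfall = rng.normal(loc=10.0, scale=2.0, size=100_000)
risk = cvar(shortfall, alpha=0.95)
```

For a Normal(10, 2) loss, the analytic CVaR at $\alpha = 0.95$ is $\mu + \sigma\,\varphi(z_\alpha)/(1-\alpha) \approx 14.13$, and the empirical estimate matches closely; minimizing this quantity, rather than the mean, is what makes the scheduling risk-aware.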
lear Mnemonics Training: Multi-Class Incremental Learning without Forgetting. (arXiv:2002.10211v3 [cs.CV] UPDATED) By arxiv.org Published On :: Multi-Class Incremental Learning (MCIL) aims to learn new concepts by incrementally updating a model trained on previous concepts. However, there is an inherent trade-off to effectively learning new concepts without catastrophic forgetting of previous ones. To alleviate this issue, it has been proposed to keep around a few examples of the previous concepts but the effectiveness of this approach heavily depends on the representativeness of these examples. This paper proposes a novel and automatic framework we call mnemonics, where we parameterize exemplars and make them optimizable in an end-to-end manner. We train the framework through bilevel optimizations, i.e., model-level and exemplar-level. We conduct extensive experiments on three MCIL benchmarks, CIFAR-100, ImageNet-Subset and ImageNet, and show that using mnemonics exemplars can surpass the state-of-the-art by a large margin. Interestingly and quite intriguingly, the mnemonics exemplars tend to be on the boundaries between different classes. Full Article