line The German-speaking community of Victoria between 1850 and 1930 : origins, progress and decline / Volkhard Wehner. By www.catalog.slsa.sa.gov.au Published On :: Germans -- Victoria -- History. Full Article
line Fascists among us : online hate and the Christchurch massacre / Jeff Sparrow. By www.catalog.slsa.sa.gov.au Published On :: Fascism -- History -- 21st century. Full Article
line The Australian disease : on the decline of love and the rise of non-freedom / Richard Flanagan. By www.catalog.slsa.sa.gov.au Published On :: Social psychology -- Australia. Full Article
line Invisible women : exposing data bias in a world designed for men / Caroline Criado Perez. By www.catalog.slsa.sa.gov.au Published On :: Sex discrimination against women. Full Article
line Struggling to Stay Connected on Maryland’s Eastern Shore as Teaching Moves Online By feedproxy.google.com Published On :: Thu, 30 Apr 2020 14:46:52 +0000 As teachers across the country grapple with the challenges that come with remote learning due to the coronavirus pandemic, an elementary school teacher on Maryland’s Eastern Shore faces the added challenge of a lack of internet access at home. Full Article Another Look Photo Gallery Point of View coronavirus COVID-19 eastern shore maryland remote learning teaching virtual learning
line European sources online By search.wellcomelibrary.org Published On :: European Sources Online (ESO) is an online database and information service which provides access to information on the institutions and activities of the European Union, the countries, regions and other international organisations of Europe, and on issues of importance to European researchers, citizens and stakeholders. Full Article
line Early modern letters online By search.wellcomelibrary.org Published On :: Early Modern Letters Online is a combined finding aid and editorial interface for basic descriptions of early modern correspondence: a collaboratively populated union catalogue of sixteenth, seventeenth, and eighteenth-century letters. Full Article
line Brill Online Books and Journals By search.wellcomelibrary.org Published On :: Brill Online Books and Journals is the delivery platform that provides online access to the full text of individual Brill journals. The Wellcome Library offers access to selected titles from this publisher. Full Article
line Wiley Online Library Backfiles By search.wellcomelibrary.org Published On :: Wiley Online Library hosts a multidisciplinary collection of online resources covering life, health and physical sciences, social science, and the humanities. The Wellcome Library offers access to selected titles in this collection. Full Article
line Wiley Online Library Open Access By search.wellcomelibrary.org Published On :: Wiley Online Library hosts a multidisciplinary collection of online resources covering life, health and physical sciences, social science, and the humanities. The Wellcome Library offers access to selected titles in this collection. Full Article
line Periodicals archive online : JISC collections selection 2 By search.wellcomelibrary.org Published On :: This is a subset of full text journal backfiles in the arts, humanities, and social sciences from the Periodicals Archive Online collection, dating from 1891-. Full Article
line Indiana Educators Race to Renew Teaching Licenses Before Deadline By feedproxy.google.com Published On :: Tue, 18 Jun 2019 00:00:00 +0000 Thousands of Indiana teachers are scrambling to begin renewing their professional teaching licenses before new rules that state lawmakers approved this spring take effect July 1. Full Article Indiana
line Des methodes generales d'operation de la cataracte et en particulier de l'extraction lineaire composee / par Paul Hyades. By feedproxy.google.com Published On :: Paris : J.-B. Baillière, 1870. Full Article
line Die Rosaniline und Pararosaniline. Eine bakteriologische Farbenstudie / von P. G. Unna. By feedproxy.google.com Published On :: Hamburg : Voss, 1887. Full Article
line Elements of pathology and therapeutics being the outlines of a work, intended to ascertain the nature, causes, and most efficacious modes of prevention and cure, of the greater number of the diseases incidental to the human frame : illustrated by numerous By feedproxy.google.com Published On :: Bath : And sold by Underwood, London, 1825. Full Article
line Finding Common Ground Through Data to Improve Idaho's Teacher Pipeline By feedproxy.google.com Published On :: Thu, 22 Feb 2018 00:00:00 +0000 A superintendent outlines the importance of various groups coming together to address teacher recruitment and retention challenges in Idaho. Full Article Idaho
line Allegorical tomb of William Cadogan, Earl Cadogan. Line engraving by N. Dorigny, 1736, after C. Vanloo. By feedproxy.google.com Published On :: [Paris?] : Mac. S. [le tout dirigé & mis au jour, par les soins de Eugene Mac-Swiny], [1741?] Full Article
line Poligny (Jura), France: an ancient mosaic floor at the villa of Estavage (Chambrettes), including figures of gryphons and centaurs. Line engraving. By feedproxy.google.com Published On :: Full Article
line Fruit on a dish and a tureen, with elaborate vessels, rugs, and a bas-relief of grape-pickers. Colour line block by Leighton Brothers after G. Lance. By feedproxy.google.com Published On :: [London?] : [Illustrated London News?], [between 1850 and 1870?] Full Article
line Episodes in human life described in Schiller's "The song of the bell". Line engraving by A. Schleich after C. Nilson. By feedproxy.google.com Published On :: [Dresden? Leipzig?] : Verlag und Eigentum von A.E. Payne, [1848?] (Printed by H. Boulton) Full Article
line The skeleton of a horse: right side view. Line engraving with etching by A. Bell, ca. 1790. By feedproxy.google.com Published On :: [Edinburgh], [between 1788 and 1797] Full Article
line Teeth of a horse. Line engraving by A. Bell, ca. 1790. By feedproxy.google.com Published On :: [Edinburgh], [between 1788 and 1797] Full Article
line An ivory statue of the standing Buddha from Candy, Sri Lanka. Line engraving by H. Mutlow, 1804. By feedproxy.google.com Published On :: [London] : Published by C. & R. Baldwin, [1804] Full Article
line Chicago Strike: Why Teachers Are on the Picket Lines Once Again By feedproxy.google.com Published On :: Fri, 18 Oct 2019 00:00:00 +0000 Teachers in the nation's third-largest school system are fighting for salary increases, class-size caps, and a written commitment for more nurses, social workers, and librarians—as well as investments some say are outside the scope of collective bargaining. Full Article Illinois
line High Court Declines Missouri District's Appeal Over At-Large Board Voting By feedproxy.google.com Published On :: Mon, 07 Jan 2019 00:00:00 +0000 The justices declined to hear the appeal of the Ferguson-Florissant district over its at-large board elections, which lower courts invalidated as violating the Voting Rights Act. Full Article Missouri
line Night Sounds, a mini chapbook about listening to nature in the city, with black line drawings. Tiny illustrated zine about nature. By search.wellcomelibrary.org Published On :: 2017 Full Article
line Contemporary research in pain and analgesia, 1983 / editors, Roger M. Brown, Theodore M. Pinkert, Jacqueline P. Ludford. By search.wellcomelibrary.org Published On :: Rockville, Maryland : National Institute on Drug Abuse, 1983. Full Article
line Opioids in the hippocampus / editors, Jacqueline F. McGinty, David P. Friedman. By search.wellcomelibrary.org Published On :: Rockville, Maryland : National Institute on Drug Abuse, 1988. Full Article
line Drug abuse treatment evaluation : strategies, progress, and prospects / editors Frank M. Tims, Jacqueline P. Ludford. By search.wellcomelibrary.org Published On :: Springfield, Virginia. : National Technical Information Service, 1984. Full Article
line Policy and guidelines for the provision of needle and syringe exchange services to young people / Tom Aldridge and Andrew Preston. By search.wellcomelibrary.org Published On :: [Dorchester] : Dorset Community NHS Trust, 1997. Full Article
line Kobe, Duncan, Garnett headline Basketball Hall of Fame class By sports.yahoo.com Published On :: Sat, 04 Apr 2020 16:12:32 GMT Kobe Bryant was already immortal. Bryant and fellow NBA greats Tim Duncan and Kevin Garnett headlined a nine-person group announced Saturday as this year’s class of enshrinees into the Naismith Memorial Basketball Hall of Fame. Two-time NBA champion coach Rudy Tomjanovich finally got his call, as did longtime Baylor women’s coach Kim Mulkey, 1,000-game winner Barbara Stevens of Bentley and three-time Final Four coach Eddie Sutton. Full Article article Sports
line Estimation of linear projections of non-sparse coefficients in high-dimensional regression By projecteuclid.org Published On :: Mon, 27 Apr 2020 22:02 EDT David Azriel, Armin Schwartzman. Source: Electronic Journal of Statistics, Volume 14, Number 1, 174--206. Abstract: In this work we study estimation of signals when the number of parameters is much larger than the number of observations. A large body of literature assumes for these kinds of problems a sparse structure where most of the parameters are zero or close to zero. When this assumption does not hold, one can focus on low-dimensional functions of the parameter vector. In this work we study one-dimensional linear projections. Specifically, in the context of high-dimensional linear regression, the parameter of interest is $\boldsymbol{\beta}$ and we study estimation of $\mathbf{a}^{T}\boldsymbol{\beta}$. We show that $\mathbf{a}^{T}\hat{\boldsymbol{\beta}}$, where $\hat{\boldsymbol{\beta}}$ is the least squares estimator (using the pseudo-inverse when $p>n$), is minimax and admissible. Thus, for linear projections no regularization or shrinkage is needed. This estimator is easy to analyze and confidence intervals can be constructed. We study a high-dimensional dataset from brain imaging where it is shown that the signal is weak, non-sparse and significantly different from zero. Full Article
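A minimal numerical sketch of the estimator discussed above: a least-squares fit via the Moore–Penrose pseudo-inverse when $p>n$, followed by the projection $\mathbf{a}^{T}\hat{\boldsymbol{\beta}}$. The simulated design, dimensions, and noise level are illustrative and not taken from the paper.

```python
import numpy as np

# Sketch: estimate a^T beta by a^T beta_hat, where beta_hat is the
# least-squares estimator computed via the Moore-Penrose pseudo-inverse
# (needed when p > n). Simulation setup and dimensions are illustrative.
rng = np.random.default_rng(0)
n, p = 100, 500                      # more parameters than observations
X = rng.standard_normal((n, p))      # design matrix
beta = rng.standard_normal(p)        # non-sparse true coefficients
y = X @ beta + rng.standard_normal(n)
a = rng.standard_normal(p)           # direction defining the linear projection

beta_hat = np.linalg.pinv(X) @ y     # pseudo-inverse least squares
print("estimate:", a @ beta_hat, "target:", a @ beta)
```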
line Exact recovery in block spin Ising models at the critical line By projecteuclid.org Published On :: Thu, 23 Apr 2020 22:01 EDT Matthias Löwe, Kristina Schubert. Source: Electronic Journal of Statistics, Volume 14, Number 1, 1796--1815. Abstract: We show how to exactly reconstruct the block structure at the critical line in the so-called Ising block model. This model was recently re-introduced by Berthet, Rigollet and Srivastava in [2]. There the authors show how to exactly reconstruct blocks away from the critical line and they give an upper and a lower bound on the number of observations one needs; thereby they establish a minimax optimal rate (up to constants). Our technique relies on a combination of their methods with fluctuation results obtained in [20]. The latter are extended to the full critical regime. We find that the number of necessary observations depends on whether the interaction parameter between two blocks is positive or negative: In the first case, there are about $N\log N$ observations required to exactly recover the block structure, while in the latter case $\sqrt{N}\log N$ observations suffice. Full Article
line A fast and consistent variable selection method for high-dimensional multivariate linear regression with a large number of explanatory variables By projecteuclid.org Published On :: Fri, 27 Mar 2020 22:00 EDT Ryoya Oda, Hirokazu Yanagihara. Source: Electronic Journal of Statistics, Volume 14, Number 1, 1386--1412. Abstract: We put forward a variable selection method for selecting explanatory variables in a normality-assumed multivariate linear regression. It is cumbersome to calculate variable selection criteria for all subsets of explanatory variables when the number of explanatory variables is large. Therefore, we propose a fast and consistent variable selection method based on a generalized $C_{p}$ criterion. The consistency of the method is established under a high-dimensional asymptotic framework in which the sample size tends to infinity and the sum of the dimensions of the response and explanatory vectors, divided by the sample size, tends to a positive constant less than one. Through numerical simulations, it is shown that the proposed method has a high probability of selecting the true subset of explanatory variables and is fast under a moderate sample size even when the number of dimensions is large. Full Article
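For orientation, the classical univariate Mallows' $C_p$ that generalized $C_p$ criteria extend; the multivariate criterion used in the paper differs in its details and is not reproduced here. $\mathrm{RSS}_j$ and $p_j$ denote the residual sum of squares and parameter count of candidate model $j$, and $\hat\sigma^2$ is estimated from the full model.

```latex
% Classical (univariate) Mallows' C_p for a candidate model j with p_j
% parameters; sigma^2-hat is estimated from the full model. Background only,
% not the paper's generalized multivariate criterion.
\[
  C_{p}(j) = \frac{\mathrm{RSS}_{j}}{\hat{\sigma}^{2}} + 2p_{j} - n .
\]
```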
line A Low Complexity Algorithm with O(√T) Regret and O(1) Constraint Violations for Online Convex Optimization with Long Term Constraints By Published On :: 2020 This paper considers online convex optimization over a complicated constraint set, which typically consists of multiple functional constraints and a set constraint. The conventional online projection algorithm (Zinkevich, 2003) can be difficult to implement due to the potentially high computation complexity of the projection operation. In this paper, we relax the functional constraints by allowing them to be violated at each round but still requiring them to be satisfied in the long term. This type of relaxed online convex optimization (with long term constraints) was first considered in Mahdavi et al. (2012). That prior work proposes an algorithm to achieve $O(\sqrt{T})$ regret and $O(T^{3/4})$ constraint violations for general problems and another algorithm to achieve an $O(T^{2/3})$ bound for both regret and constraint violations when the constraint set can be described by a finite number of linear constraints. A recent extension in Jenatton et al. (2016) can achieve $O(T^{\max\{\theta,1-\theta\}})$ regret and $O(T^{1-\theta/2})$ constraint violations where $\theta\in(0,1)$. The current paper proposes a new simple algorithm that yields improved performance in comparison to prior works. The new algorithm achieves an $O(\sqrt{T})$ regret bound with $O(1)$ constraint violations. Full Article
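The abstract above describes the problem setting rather than the algorithm itself; below is a generic primal-dual (virtual-queue) sketch of online convex optimization with a long-term constraint, in the spirit of the earlier approaches it cites. It is not the paper's algorithm, and the quadratic per-round loss, the $\ell_1$ constraint, and the step size are illustrative choices.

```python
import numpy as np

# Generic primal-dual sketch for online convex optimization with a long-term
# constraint g(x) <= 0: the constraint may be violated per round, but a dual
# variable (virtual queue) pushes the cumulative violation toward zero.
# Illustrates the setting only; this is not the paper's algorithm.
rng = np.random.default_rng(1)
d, T, eta = 5, 1000, 0.05
x = np.zeros(d)
Q = 0.0                                   # dual variable / virtual queue

def g(x):                                 # example constraint: ||x||_1 - 1 <= 0
    return np.sum(np.abs(x)) - 1.0

total_loss, total_violation = 0.0, 0.0
for t in range(T):
    theta_t = rng.standard_normal(d)      # per-round target chosen by "nature"
    grad_f = 2 * (x - theta_t)            # gradient of f_t(x) = ||x - theta_t||^2
    grad_g = np.sign(x)                   # subgradient of the l1 constraint
    x = x - eta * (grad_f + Q * grad_g)   # primal descent on the Lagrangian
    Q = max(0.0, Q + g(x))                # dual ascent on the violation
    total_loss += np.sum((x - theta_t) ** 2)
    total_violation += max(0.0, g(x))

print(f"avg loss {total_loss / T:.3f}, avg violation {total_violation / T:.4f}")
```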
line Online Sufficient Dimension Reduction Through Sliced Inverse Regression By Published On :: 2020 Sliced inverse regression is an effective paradigm that achieves the goal of dimension reduction through replacing high dimensional covariates with a small number of linear combinations. It does not impose parametric assumptions on the dependence structure. More importantly, such a reduction of dimension is sufficient in that it does not cause loss of information. In this paper, we adapt the stationary sliced inverse regression to cope with the rapidly changing environments. We propose to implement sliced inverse regression in an online fashion. This online learner consists of two steps. In the first step we construct an online estimate for the kernel matrix; in the second step we propose two online algorithms, one is motivated by the perturbation method and the other is originated from the gradient descent optimization, to perform online singular value decomposition. The theoretical properties of this online learner are established. We demonstrate the numerical performance of this online learner through simulations and real world applications. All numerical studies confirm that this online learner performs as well as the batch learner. Full Article
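For context, a minimal batch (offline) sliced inverse regression sketch: slice the response, average the standardized covariates within slices, and take the leading eigenvector of the between-slice covariance. The paper's contribution is the online updating of these quantities, which is not shown here; the single-index data-generating model and slice count are toy choices.

```python
import numpy as np

# Batch (offline) sliced inverse regression: standardize X, slice by the
# response, average standardized covariates within each slice, and take the
# leading eigenvector of the resulting kernel matrix. Toy data; the online
# variant updates these quantities incrementally.
rng = np.random.default_rng(2)
n, p, H = 2000, 10, 10                      # samples, dimension, slices
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[0] = 1.0           # true single-index direction
y = np.exp(X @ beta) + 0.1 * rng.standard_normal(n)

mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
w, V = np.linalg.eigh(cov)
cov_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
Z = (X - mu) @ cov_inv_sqrt                 # standardized covariates

order = np.argsort(y)
M = np.zeros((p, p))                        # SIR kernel matrix
for idx in np.array_split(order, H):
    m = Z[idx].mean(axis=0)
    M += (len(idx) / n) * np.outer(m, m)

vals, vecs = np.linalg.eigh(M)
direction = cov_inv_sqrt @ vecs[:, -1]      # map back to the original scale
direction /= np.linalg.norm(direction)
print("estimated direction (abs values):", np.round(np.abs(direction), 2))
```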
line Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems By Published On :: 2020 We study derivative-free methods for policy optimization over the class of linear policies. We focus on characterizing the convergence rate of these methods when applied to linear-quadratic systems, and study various settings of driving noise and reward feedback. Our main theoretical result provides an explicit bound on the sample or evaluation complexity: we show that these methods are guaranteed to converge to within any pre-specified tolerance of the optimal policy with a number of zero-order evaluations that is an explicit polynomial of the error tolerance, dimension, and curvature properties of the problem. Our analysis reveals some interesting differences between the settings of additive driving noise and random initialization, as well as the settings of one-point and two-point reward feedback. Our theory is corroborated by simulations of derivative-free methods in application to these systems. Along the way, we derive convergence rates for stochastic zero-order optimization algorithms when applied to a certain class of non-convex problems. Full Article
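The derivative-free methods analyzed above query only cost values; a minimal two-point zero-order gradient sketch is below. The quadratic cost is a stand-in for an actual LQR rollout cost, and all dimensions, the smoothing radius, and the step size are made-up illustrative values.

```python
import numpy as np

# Two-point zero-order (derivative-free) gradient estimate for policy search:
# perturb the policy parameters along a random unit direction, query the cost
# twice, and form a finite-difference gradient estimate. The quadratic cost
# below is a toy stand-in for a rollout cost J(K); values are illustrative.
rng = np.random.default_rng(3)
d, iters, r, eta = 20, 2000, 0.05, 0.01    # dim, iterations, smoothing radius, step

A = rng.standard_normal((d, d)) / np.sqrt(d)
def cost(K):                               # placeholder for a rollout cost J(K)
    return np.sum((A @ K - np.ones(d)) ** 2)

K = np.zeros(d)
for t in range(iters):
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                 # random direction on the unit sphere
    g = d * (cost(K + r * u) - cost(K - r * u)) / (2 * r) * u  # two-point estimate
    K = K - eta * g                        # zero-order descent step

print("final cost:", cost(K))
```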
line Learning Linear Non-Gaussian Causal Models in the Presence of Latent Variables By Published On :: 2020 We consider the problem of learning causal models from observational data generated by linear non-Gaussian acyclic causal models with latent variables. Without considering the effect of latent variables, the inferred causal relationships among the observed variables are often wrong. Under faithfulness assumption, we propose a method to check whether there exists a causal path between any two observed variables. From this information, we can obtain the causal order among the observed variables. The next question is whether the causal effects can be uniquely identified as well. We show that causal effects among observed variables cannot be identified uniquely under mere assumptions of faithfulness and non-Gaussianity of exogenous noises. However, we are able to propose an efficient method that identifies the set of all possible causal effects that are compatible with the observational data. We present additional structural conditions on the causal graph under which causal effects among observed variables can be determined uniquely. Furthermore, we provide necessary and sufficient graphical conditions for unique identification of the number of variables in the system. Experiments on synthetic data and real-world data show the effectiveness of our proposed algorithm for learning causal models. Full Article
line Branch and Bound for Piecewise Linear Neural Network Verification By Published On :: 2020 The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models. In this context, verification involves proving or disproving that an NN model satisfies certain input-output properties. Despite the reputation of learned NN models as black boxes, and the theoretical hardness of proving useful properties about them, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure and taking insights from formal methods such as Satisfiability Modulo Theories. However, these methods are still far from scaling to realistic neural networks. To facilitate progress on this crucial area, we exploit the Mixed Integer Linear Programming (MIP) formulation of verification to propose a family of algorithms based on Branch-and-Bound (BaB). We show that our family contains previous verification methods as special cases. With the help of the BaB framework, we make three key contributions. Firstly, we identify new methods that combine the strengths of multiple existing approaches, accomplishing significant performance improvements over the previous state of the art. Secondly, we introduce an effective branching strategy on ReLU non-linearities. This branching strategy allows us to efficiently and successfully deal with high input dimensional problems with convolutional network architectures, on which previous methods fail frequently. Finally, we propose comprehensive test data sets and benchmarks which include a collection of previously released test cases. We use the data sets to conduct a thorough experimental comparison of existing and new algorithms and to provide an inclusive analysis of the factors impacting the hardness of verification problems. Full Article
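Branch-and-bound verification alternates branching with cheap bounding of the network's output range; the sketch below shows one standard bounding primitive, interval bound propagation through affine-plus-ReLU layers. It illustrates incomplete bounding only, not the paper's branching strategy, and the tiny two-layer weights are arbitrary.

```python
import numpy as np

# Interval bound propagation through an affine + ReLU network: propagate
# elementwise lower/upper bounds on the input box to bounds on the output.
# Such cheap bounds are the building block that branch-and-bound verification
# tightens by splitting on inputs or ReLU activations. Toy weights below.

def affine_bounds(lb, ub, W, b):
    """Bounds of W x + b when x lies in the box [lb, ub] (elementwise)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lb + W_neg @ ub + b, W_pos @ ub + W_neg @ lb + b

lb, ub = np.array([-1.0, -1.0]), np.array([1.0, 1.0])    # input box
W1, b1 = np.array([[1.0, -2.0], [0.5, 1.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

l1, u1 = affine_bounds(lb, ub, W1, b1)
l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)        # ReLU is monotone
l2, u2 = affine_bounds(l1, u1, W2, b2)
print("output lies in", [float(l2[0]), float(u2[0])])
```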
line Town launches new Community Support Hotline By www.eastgwillimbury.ca Published On :: Tue, 28 Apr 2020 23:15:02 GMT Full Article
line Stein characterizations for linear combinations of gamma random variables By projecteuclid.org Published On :: Mon, 04 May 2020 04:00 EDT Benjamin Arras, Ehsan Azmoodeh, Guillaume Poly, Yvik Swan. Source: Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 394--413. Abstract: In this paper we propose a new, simple and explicit mechanism that allows one to derive Stein operators for random variables whose characteristic function satisfies a simple ODE. We apply this to study random variables which can be represented as linear combinations of (not necessarily independent) gamma distributed random variables. The connection with Malliavin calculus for random variables in the second Wiener chaos is detailed. An application to McKay Type I random variables is also outlined. Full Article
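For background only, the classical Stein identity for a single gamma variable with shape $\alpha$ and rate $\lambda$; the operators derived in the paper for linear combinations are more involved and are not reproduced here.

```latex
% Classical gamma Stein identity (background, single-variable case):
% for X ~ Gamma(alpha, lambda) and suitably smooth f,
\[
  \mathbb{E}\bigl[\, X f'(X) + (\alpha - \lambda X)\, f(X) \,\bigr] = 0 .
\]
```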
line On estimating the location parameter of the selected exponential population under the LINEX loss function By projecteuclid.org Published On :: Mon, 03 Feb 2020 04:00 EST Mohd Arshad, Omer Abdalghani. Source: Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 167--182. Abstract: Let $\pi_{1},\pi_{2},\ldots,\pi_{k}$ be $k(\geq 2)$ independent exponential populations having unknown location parameters $\mu_{1},\mu_{2},\ldots,\mu_{k}$ and known scale parameters $\sigma_{1},\ldots,\sigma_{k}$. Let $\mu_{[k]}=\max\{\mu_{1},\ldots,\mu_{k}\}$. For selecting the population associated with $\mu_{[k]}$, a class of selection rules (proposed by Arshad and Misra [Statistical Papers 57 (2016) 605–621]) is considered. We consider the problem of estimating the location parameter $\mu_{S}$ of the selected population under the criterion of the LINEX loss function. We consider three natural estimators $\delta_{N,1},\delta_{N,2}$ and $\delta_{N,3}$ of $\mu_{S}$, based on the maximum likelihood estimators, uniformly minimum variance unbiased estimator (UMVUE) and minimum risk equivariant estimator (MREE) of the $\mu_{i}$'s, respectively. The uniformly minimum risk unbiased estimator (UMRUE) and the generalized Bayes estimator of $\mu_{S}$ are derived. Under the LINEX loss function, a general result for improving a location-equivariant estimator of $\mu_{S}$ is derived. Using this result, an estimator better than the natural estimator $\delta_{N,1}$ is obtained. We also show that the estimator $\delta_{N,1}$ is dominated by the natural estimator $\delta_{N,3}$. Finally, we perform a simulation study to evaluate and compare risk functions among various competing estimators of $\mu_{S}$. Full Article
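For reference, the standard LINEX (linear-exponential) loss used as the estimation criterion above, written for an estimation error $\Delta=\delta-\mu_{S}$ and shape parameter $a\neq 0$; the multiplicative scaling constant is omitted.

```latex
% LINEX (linear-exponential) loss with shape parameter a != 0:
% asymmetric, penalizing over- and under-estimation differently.
\[
  L(\Delta) = e^{a\Delta} - a\Delta - 1 ,
  \qquad \Delta = \delta - \mu_{S} .
\]
```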
line Robust Bayesian model selection for heavy-tailed linear regression using finite mixtures By projecteuclid.org Published On :: Mon, 03 Feb 2020 04:00 EST Flávio B. Gonçalves, Marcos O. Prates, Victor Hugo Lachos. Source: Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 51--70.Abstract: In this paper, we present a novel methodology to perform Bayesian model selection in linear models with heavy-tailed distributions. We consider a finite mixture of distributions to model a latent variable where each component of the mixture corresponds to one possible model within the symmetrical class of normal independent distributions. Naturally, the Gaussian model is one of the possibilities. This allows for a simultaneous analysis based on the posterior probability of each model. Inference is performed via Markov chain Monte Carlo—a Gibbs sampler with Metropolis–Hastings steps for a class of parameters. Simulated examples highlight the advantages of this approach compared to a segregated analysis based on arbitrarily chosen model selection criteria. Examples with real data are presented and an extension to censored linear regression is introduced and discussed. Full Article
line A new log-linear bimodal Birnbaum–Saunders regression model with application to survival data By projecteuclid.org Published On :: Mon, 04 Mar 2019 04:00 EST Francisco Cribari-Neto, Rodney V. Fonseca. Source: Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 329--355.Abstract: The log-linear Birnbaum–Saunders model has been widely used in empirical applications. We introduce an extension of this model based on a recently proposed version of the Birnbaum–Saunders distribution which is more flexible than the standard Birnbaum–Saunders law since its density may assume both unimodal and bimodal shapes. We show how to perform point estimation, interval estimation and hypothesis testing inferences on the parameters that index the regression model we propose. We also present a number of diagnostic tools, such as residual analysis, local influence, generalized leverage, generalized Cook’s distance and model misspecification tests. We investigate the usefulness of model selection criteria and the accuracy of prediction intervals for the proposed model. Results of Monte Carlo simulations are presented. Finally, we also present and discuss an empirical application. Full Article
line Bayesian robustness to outliers in linear regression and ratio estimation By projecteuclid.org Published On :: Mon, 04 Mar 2019 04:00 EST Alain Desgagné, Philippe Gagnon. Source: Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 205--221.Abstract: Whole robustness is a nice property to have for statistical models. It implies that the impact of outliers gradually vanishes as they approach plus or minus infinity. So far, the Bayesian literature provides results that ensure whole robustness for the location-scale model. In this paper, we make two contributions. First, we generalise the results to attain whole robustness in simple linear regression through the origin, which is a necessary step towards results for general linear regression models. We allow the variance of the error term to depend on the explanatory variable. This flexibility leads to the second contribution: we provide a simple Bayesian approach to robustly estimate finite population means and ratios. The strategy to attain whole robustness is simple since it lies in replacing the traditional normal assumption on the error term by a super heavy-tailed distribution assumption. As a result, users can estimate the parameters as usual, using the posterior distribution. Full Article
line On a phase transition in general order spline regression. (arXiv:2004.10922v2 [math.ST] UPDATED) By arxiv.org Published On :: In the Gaussian sequence model $Y=\theta_0 + \varepsilon$ in $\mathbb{R}^n$, we study the fundamental limit of approximating the signal $\theta_0$ by a class $\Theta(d,d_0,k)$ of (generalized) splines with free knots. Here $d$ is the degree of the spline, $d_0$ is the order of differentiability at each inner knot, and $k$ is the maximal number of pieces. We show that, given any integer $d\geq 0$ and $d_0\in\{-1,0,\ldots,d-1\}$, the minimax rate of estimation over $\Theta(d,d_0,k)$ exhibits the following phase transition: \begin{equation*}\begin{aligned}\inf_{\widetilde{\theta}}\sup_{\theta\in\Theta(d,d_0,k)}\mathbb{E}_{\theta}\|\widetilde{\theta}-\theta\|^2 \asymp_d \begin{cases} k\log\log(16n/k), & 2\leq k\leq k_0,\\ k\log(en/k), & k\geq k_0+1. \end{cases}\end{aligned}\end{equation*} The transition boundary $k_0$, which takes the form $\lfloor (d+1)/(d-d_0)\rfloor + 1$, demonstrates the critical role of the regularity parameter $d_0$ in the separation between a faster $\log\log(16n)$ and a slower $\log(en)$ rate. We further show that, once an additional '$d$-monotonicity' shape constraint is imposed (including monotonicity for $d=0$ and convexity for $d=1$), the above phase transition is eliminated and the faster $k\log\log(16n/k)$ rate can be achieved for all $k$. These results provide theoretical support for developing $\ell_0$-penalized (shape-constrained) spline regression procedures as useful alternatives to $\ell_1$- and $\ell_2$-penalized ones. Full Article
line lmSubsets: Exact Variable-Subset Selection in Linear Regression for R By www.jstatsoft.org Published On :: Tue, 28 Apr 2020 00:00:00 +0000 An R package for computing the all-subsets regression problem is presented. The proposed algorithms are based on computational strategies recently developed. A novel algorithm for the best-subset regression problem selects subset models based on a predetermined criterion. The package user can choose from exact and from approximation algorithms. The core of the package is written in C++ and provides an efficient implementation of all the underlying numerical computations. A case study and benchmark results illustrate the usage and the computational efficiency of the package. Full Article
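To illustrate the all-subsets problem that lmSubsets addresses, here is a brute-force best-subset sketch in Python using BIC. It is only a conceptual illustration: the actual package is written in R with a C++ core and uses far more efficient exact and approximation algorithms, and none of its API is reproduced here.

```python
import numpy as np
from itertools import combinations

# Brute-force best-subset linear regression by BIC: enumerate every subset of
# predictors, fit OLS on each, and keep the subset with the smallest BIC.
# Exponential cost; shown only to illustrate the all-subsets problem that
# lmSubsets solves efficiently. This is not the package's algorithm or API.
def best_subset_bic(X, y):
    n, p = X.shape
    best = (np.inf, ())
    for k in range(p + 1):
        for subset in combinations(range(p), k):
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = np.sum((y - Z @ beta) ** 2)
            bic = n * np.log(rss / n) + (k + 1) * np.log(n)
            best = min(best, (bic, subset))
    return best

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 8))
y = 2 * X[:, 0] - 3 * X[:, 3] + rng.standard_normal(200)
bic, subset = best_subset_bic(X, y)
print("selected columns:", subset, "BIC:", round(bic, 1))
```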
line Optimal prediction in the linearly transformed spiked model By projecteuclid.org Published On :: Mon, 17 Feb 2020 04:02 EST Edgar Dobriban, William Leeb, Amit Singer. Source: The Annals of Statistics, Volume 48, Number 1, 491--513. Abstract: We consider the linearly transformed spiked model, where the observations $Y_{i}$ are noisy linear transforms of unobserved signals of interest $X_{i}$: \begin{equation*}Y_{i}=A_{i}X_{i}+\varepsilon_{i},\end{equation*} for $i=1,\ldots,n$. The transform matrices $A_{i}$ are also observed. We model the unobserved signals (or regression coefficients) $X_{i}$ as vectors lying on an unknown low-dimensional space. Given only $Y_{i}$ and $A_{i}$, how should we predict or recover their values? The naive approach of performing regression for each observation separately is inaccurate due to the large noise level. Instead, we develop optimal methods for predicting $X_{i}$ by “borrowing strength” across the different samples. Our linear empirical Bayes methods scale to large datasets and rely on weak moment assumptions. We show that this model has wide-ranging applications in signal processing, deconvolution, cryo-electron microscopy, and missing data with noise. For missing data, we show in simulations that our methods are more robust to noise and to unequal sampling than well-known matrix completion methods. Full Article
line Efficient estimation of linear functionals of principal components By projecteuclid.org Published On :: Mon, 17 Feb 2020 04:02 EST Vladimir Koltchinskii, Matthias Löffler, Richard Nickl. Source: The Annals of Statistics, Volume 48, Number 1, 464--490. Abstract: We study principal component analysis (PCA) for mean zero i.i.d. Gaussian observations $X_{1},\dots,X_{n}$ in a separable Hilbert space $\mathbb{H}$ with unknown covariance operator $\Sigma$. The complexity of the problem is characterized by its effective rank $\mathbf{r}(\Sigma):=\frac{\operatorname{tr}(\Sigma)}{\|\Sigma\|}$, where $\mathrm{tr}(\Sigma)$ denotes the trace of $\Sigma$ and $\|\Sigma\|$ denotes its operator norm. We develop a method of bias reduction in the problem of estimation of linear functionals of eigenvectors of $\Sigma$. Under the assumption that $\mathbf{r}(\Sigma)=o(n)$, we establish the asymptotic normality and asymptotic properties of the risk of the resulting estimators and prove matching minimax lower bounds, showing their semiparametric optimality. Full Article
line Hypothesis testing on linear structures of high-dimensional covariance matrix By projecteuclid.org Published On :: Wed, 30 Oct 2019 22:03 EDT Shurong Zheng, Zhao Chen, Hengjian Cui, Runze Li. Source: The Annals of Statistics, Volume 47, Number 6, 3300--3334. Abstract: This paper is concerned with tests of significance on high-dimensional covariance structures, and aims to develop a unified framework for testing commonly used linear covariance structures. We first construct a consistent estimator for parameters involved in the linear covariance structure, and then develop two tests for the linear covariance structures based on entropy loss and quadratic loss used for covariance matrix estimation. To study the asymptotic properties of the proposed tests, we study related high-dimensional random matrix theory, and establish several highly useful asymptotic results. With the aid of these asymptotic results, we derive the limiting distributions of these two tests under the null and alternative hypotheses. We further show that the quadratic loss based test is asymptotically unbiased. We conduct a Monte Carlo simulation study to examine the finite sample performance of the two tests. Our simulation results show that the limiting null distributions approximate the finite-sample null distributions quite well, and the corresponding asymptotic critical values control the Type I error rate very well. Our numerical comparison implies that the proposed tests outperform existing ones in terms of controlling Type I error rate and power. Our simulation indicates that the test based on quadratic loss seems to have better power than the test based on entropy loss. Full Article