Functional weak limit theorem for a local empirical process of non-stationary time series and its application. By projecteuclid.org. Published On :: Mon, 27 Apr 2020 04:02 EDT. Ulrike Mayer, Henryk Zähle, Zhou Zhou. Source: Bernoulli, Volume 26, Number 3, 1891--1911. Abstract: We derive a functional weak limit theorem for a local empirical process of a wide class of piecewise locally stationary (PLS) time series. The latter result is applied to derive the asymptotics of weighted empirical quantiles and weighted V-statistics of non-stationary time series. The class of admissible underlying time series is illustrated by means of PLS linear processes and PLS ARCH processes.
Kernel and wavelet density estimators on manifolds and more general metric spaces. By projecteuclid.org. Published On :: Mon, 27 Apr 2020 04:02 EDT. Galatia Cleanthous, Athanasios G. Georgiadis, Gerard Kerkyacharian, Pencho Petrushev, Dominique Picard. Source: Bernoulli, Volume 26, Number 3, 1832--1862. Abstract: We consider the problem of estimating the density of observations taking values in classical or nonclassical spaces such as manifolds and more general metric spaces. Our setting is quite general but also sufficiently rich to allow the development of a smooth functional calculus with well-localized spectral kernels, Besov regularity spaces, and wavelet-type systems. Kernel and both linear and nonlinear wavelet density estimators are introduced and studied. Convergence rates for these estimators are established and discussed.
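For orientation, the classical Euclidean special case of the kernel estimators discussed above can be sketched in a few lines (a hypothetical one-dimensional Gaussian-kernel sketch only; the paper's construction on manifolds and metric spaces replaces this kernel with spectrally localized heat-type kernels):

```python
import numpy as np

def kde(x_eval, samples, h):
    """One-dimensional Gaussian kernel density estimator:
    f_hat(x) = (1/(n*h)) * sum_i K((x - X_i)/h), with K the standard
    normal density. This is only the classical Euclidean special case."""
    diffs = (x_eval[:, None] - samples[None, :]) / h   # (n_eval, n_samples)
    kernel = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return kernel.sum(axis=1) / (len(samples) * h)
```

The estimate is nonnegative and integrates to one, which can be checked numerically on a wide grid.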
Optimal functional supervised classification with separation condition. By projecteuclid.org. Published On :: Mon, 27 Apr 2020 04:02 EDT. Sébastien Gadat, Sébastien Gerchinovitz, Clément Marteau. Source: Bernoulli, Volume 26, Number 3, 1797--1831. Abstract: We consider the binary supervised classification problem with the Gaussian functional model introduced in (Math. Methods Statist. 22 (2013) 213–225). Taking advantage of the Gaussian structure, we design a natural plug-in classifier and derive a family of upper bounds on its worst-case excess risk over Sobolev spaces. These bounds are parametrized by a separation distance quantifying the difficulty of the problem, and are proved to be optimal (up to logarithmic factors) through matching minimax lower bounds. Using the recent works (In Advances in Neural Information Processing Systems (2014) 3437–3445, Curran Associates) and (Ann. Statist. 44 (2016) 982–1009), we also derive a logarithmic lower bound showing that the popular $k$-nearest neighbors classifier is far from optimality in this specific functional setting.
A fast algorithm with minimax optimal guarantees for topic models with an unknown number of topics. By projecteuclid.org. Published On :: Mon, 27 Apr 2020 04:02 EDT. Xin Bing, Florentina Bunea, Marten Wegkamp. Source: Bernoulli, Volume 26, Number 3, 1765--1796. Abstract: Topic models have become popular for the analysis of data that consist of a collection of $n$ independent multinomial observations, with parameters $N_{i}\in\mathbb{N}$ and $\Pi_{i}\in[0,1]^{p}$ for $i=1,\ldots,n$. The model links all cell probabilities, collected in a $p\times n$ matrix $\Pi$, via the assumption that $\Pi$ can be factorized as the product of two nonnegative matrices $A\in[0,1]^{p\times K}$ and $W\in[0,1]^{K\times n}$. Topic models were originally developed in text mining, where one browses through $n$ documents, based on a dictionary of $p$ words, covering $K$ topics. In this terminology, the matrix $A$ is called the word-topic matrix and is the main target of estimation. It can be viewed as a matrix of conditional probabilities, and it is uniquely defined under appropriate separability assumptions, discussed in detail in this work. Notably, the unique $A$ is required to satisfy what is commonly known as the anchor word assumption, under which $A$ has an unknown number of rows respectively proportional to the canonical basis vectors in $\mathbb{R}^{K}$. The indices of such rows are referred to as anchor words. Recent computationally feasible algorithms, with theoretical guarantees, utilize this assumption constructively by linking the estimation of the set of anchor words with that of estimating the $K$ vertices of a simplex. This crucial step in the estimation of $A$ requires $K$ to be known, and cannot be easily extended to the more realistic setup where $K$ is unknown. This work takes a different view on anchor word estimation, and on the estimation of $A$. We propose a new method of estimation in topic models that is not a variation on the existing simplex-finding algorithms and that estimates $K$ from the observed data. We derive new finite-sample minimax lower bounds for the estimation of $A$, as well as new upper bounds for our proposed estimator. We describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any $n$, $N_{i}$, $p$ and $K$, and both $p$ and $K$ are allowed to increase with $n$, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though we start with the computational and theoretical disadvantage of not knowing the correct number of topics $K$, while providing the competing methods with the correct value in our simulations.
Local differential privacy: Elbow effect in optimal density estimation and adaptation over Besov ellipsoids. By projecteuclid.org. Published On :: Mon, 27 Apr 2020 04:02 EDT. Cristina Butucea, Amandine Dubois, Martin Kroll, Adrien Saumard. Source: Bernoulli, Volume 26, Number 3, 1727--1764. Abstract: We address the problem of non-parametric density estimation under the additional constraint that only privatised data are allowed to be published and available for inference. For this purpose, we adopt a recent generalisation of classical minimax theory to the framework of local $\alpha$-differential privacy and provide a lower bound on the rate of convergence over Besov spaces $\mathcal{B}^{s}_{pq}$ under mean integrated $\mathbb{L}^{r}$-risk. This lower bound is deteriorated compared to the standard setup without privacy, and reveals a twofold elbow effect. In order to fulfill the privacy requirement, we suggest adding suitably scaled Laplace noise to empirical wavelet coefficients. Upper bounds within (at most) a logarithmic factor are derived under the assumption that $\alpha$ stays bounded as $n$ increases: a linear but non-adaptive wavelet estimator is shown to attain the lower bound whenever $p\geq r$, but provides a slower rate of convergence otherwise. An adaptive non-linear wavelet estimator with appropriately chosen smoothing parameters and thresholding is shown to attain the lower bound within a logarithmic factor for all cases.
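The privatisation step described above (adding suitably scaled Laplace noise to empirical coefficients) can be sketched generically; the noise scale used here is the textbook Laplace-mechanism choice and is an assumption, not the paper's exact calibration for wavelet coefficients:

```python
import numpy as np

def privatise(coeffs, alpha, sensitivity, rng):
    """Release coefficients under alpha-differential privacy by adding
    centred Laplace noise with scale sensitivity/alpha (generic Laplace
    mechanism; the paper calibrates the scale to wavelet coefficients)."""
    scale = sensitivity / alpha
    return coeffs + rng.laplace(0.0, scale, size=len(coeffs))
```

The released values are unbiased for the true coefficients, at the cost of variance $2(\text{sensitivity}/\alpha)^2$ per coordinate.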
Estimating the number of connected components in a graph via subgraph sampling. By projecteuclid.org. Published On :: Mon, 27 Apr 2020 04:02 EDT. Jason M. Klusowski, Yihong Wu. Source: Bernoulli, Volume 26, Number 3, 1635--1664. Abstract: Learning properties of large graphs from samples has been an important problem in statistical network analysis since the early work of Goodman (Ann. Math. Stat. 20 (1949) 572–579) and Frank (Scand. J. Stat. 5 (1978) 177–188). We revisit a problem formulated by Frank (Scand. J. Stat. 5 (1978) 177–188) of estimating the number of connected components in a large graph based on the subgraph sampling model, in which we randomly sample a subset of the vertices and observe the induced subgraph. The key question is whether accurate estimation is achievable in the sublinear regime where only a vanishing fraction of the vertices are sampled. We show that it is impossible if the parent graph is allowed to contain high-degree vertices or long induced cycles. For the class of chordal graphs, where induced cycles of length four or above are forbidden, we characterize the optimal sample complexity within constant factors and construct linear-time estimators that provably achieve these bounds. This significantly expands the scope of previous results, which have focused on unbiased estimators and special classes of graphs such as forests or cliques. Both the construction and the analysis of the proposed methodology rely on combinatorial properties of chordal graphs and identities of induced subgraph counts. They, in turn, also play a key role in proving minimax lower bounds based on the construction of random instances of graphs with matching structures of small subgraphs.
Sojourn time dimensions of fractional Brownian motion. By projecteuclid.org. Published On :: Mon, 27 Apr 2020 04:02 EDT. Ivan Nourdin, Giovanni Peccati, Stéphane Seuret. Source: Bernoulli, Volume 26, Number 3, 1619--1634. Abstract: We describe the size of the sets of sojourn times $E_{\gamma}=\{t\geq 0:|B_{t}|\leq t^{\gamma}\}$ associated with a fractional Brownian motion $B$ in terms of various large-scale dimensions.
Efficient estimation in single index models through smoothing splines. By projecteuclid.org. Published On :: Fri, 31 Jan 2020 04:06 EST. Arun K. Kuchibhotla, Rohit K. Patra. Source: Bernoulli, Volume 26, Number 2, 1587--1618. Abstract: We consider estimation and inference in a single index regression model with an unknown but smooth link function. In contrast to the standard approach of using kernels or regression splines, we use smoothing splines to estimate the smooth link function. We develop a method to compute the penalized least squares estimators (PLSEs) of the parametric and the nonparametric components given independent and identically distributed (i.i.d.) data. We prove the consistency and find the rates of convergence of the estimators. We establish asymptotic normality under mild assumptions and prove asymptotic efficiency of the parametric component under homoscedastic errors. A finite sample simulation corroborates our asymptotic theory. We also analyze a car mileage data set and an ozone concentration data set. The identifiability and existence of the PLSEs are also investigated.
On the probability distribution of the local times of diagonally operator-self-similar Gaussian fields with stationary increments. By projecteuclid.org. Published On :: Fri, 31 Jan 2020 04:06 EST. Kamran Kalbasi, Thomas Mountford. Source: Bernoulli, Volume 26, Number 2, 1504--1534. Abstract: In this paper, we study the local times of vector-valued Gaussian fields that are ‘diagonally operator-self-similar’ and whose increments are stationary. Denoting the local time of such a Gaussian field around the spatial origin and over the temporal unit hypercube by $Z$, we show that there exists $\lambda\in(0,1)$ such that under some quite weak conditions, $\lim_{n\rightarrow+\infty}\frac{\sqrt[n]{\mathbb{E}(Z^{n})}}{n^{\lambda}}$ and $\lim_{x\rightarrow+\infty}\frac{-\log\mathbb{P}(Z>x)}{x^{\frac{1}{\lambda}}}$ both exist and are strictly positive (possibly $+\infty$). Moreover, we show that if the underlying Gaussian field is ‘strongly locally nondeterministic’, the above limits will be finite as well. These results are then applied to establish similar statements for the intersection local times of diagonally operator-self-similar Gaussian fields with stationary increments.
Consistent structure estimation of exponential-family random graph models with block structure. By projecteuclid.org. Published On :: Fri, 31 Jan 2020 04:06 EST. Michael Schweinberger. Source: Bernoulli, Volume 26, Number 2, 1205--1233. Abstract: We consider the challenging problem of statistical inference for exponential-family random graph models based on a single observation of a random graph with complex dependence. To facilitate statistical inference, we consider random graphs with additional structure in the form of block structure. We have shown elsewhere that when the block structure is known, it facilitates consistency results for $M$-estimators of canonical and curved exponential-family random graph models with complex dependence, such as transitivity. In practice, the block structure is known in some applications (e.g., multilevel networks), but is unknown in others. When the block structure is unknown, the first and foremost question is whether it can be recovered with high probability based on a single observation of a random graph with complex dependence. The main consistency results of the paper show that it is possible to do so under weak dependence and smoothness conditions. These results confirm that exponential-family random graph models with block structure constitute a promising direction of statistical network analysis.
A Bayesian nonparametric approach to log-concave density estimation. By projecteuclid.org. Published On :: Fri, 31 Jan 2020 04:06 EST. Ester Mariucci, Kolyan Ray, Botond Szabó. Source: Bernoulli, Volume 26, Number 2, 1070--1097. Abstract: The estimation of a log-concave density on $\mathbb{R}$ is a canonical problem in the area of shape-constrained nonparametric inference. We present a Bayesian nonparametric approach to this problem based on an exponentiated Dirichlet process mixture prior and show that the posterior distribution converges to the log-concave truth at the (near-) minimax rate in Hellinger distance. Our proof proceeds by establishing a general contraction result based on the log-concave maximum likelihood estimator that prevents the need for further metric entropy calculations. We further present computationally more feasible approximations and both an empirical and hierarchical Bayes approach. All priors are illustrated numerically via simulations.
Robust estimation of mixing measures in finite mixture models. By projecteuclid.org. Published On :: Fri, 31 Jan 2020 04:06 EST. Nhat Ho, XuanLong Nguyen, Ya’acov Ritov. Source: Bernoulli, Volume 26, Number 2, 828--857. Abstract: In finite mixture models, apart from the underlying mixing measure, the true kernel density function of each subpopulation in the data is, in many scenarios, unknown. Perhaps the most popular approach is to choose kernel functions that we empirically believe our data are generated from and to use these kernels to fit our models. Nevertheless, as long as the chosen kernel and the true kernel differ, statistical inference of the mixing measure under this setting will be highly unstable. To overcome this challenge, we propose flexible and efficient robust estimators of the mixing measure in these models, inspired by the idea of the minimum Hellinger distance estimator, model selection criteria, and the superefficiency phenomenon. We demonstrate that our estimators consistently recover the true number of components and achieve the optimal convergence rates of parameter estimation under both well- and misspecified kernel settings for any fixed bandwidth. These desirable asymptotic properties are illustrated via careful simulation studies with both synthetic and real data.
Robust modifications of U-statistics and applications to covariance estimation problems. By projecteuclid.org. Published On :: Tue, 26 Nov 2019 04:00 EST. Stanislav Minsker, Xiaohan Wei. Source: Bernoulli, Volume 26, Number 1, 694--727. Abstract: Let $Y$ be a $d$-dimensional random vector with unknown mean $\mu$ and covariance matrix $\Sigma$. This paper is motivated by the problem of designing an estimator of $\Sigma$ that admits exponential deviation bounds in the operator norm under minimal assumptions on the underlying distribution, such as existence of only 4th moments of the coordinates of $Y$. To address this problem, we propose robust modifications of the operator-valued U-statistics, obtain non-asymptotic guarantees for their performance, and demonstrate the implications of these results to the covariance estimation problem under various structural assumptions.
Consistent semiparametric estimators for recurrent event times models with application to virtual age models. By projecteuclid.org. Published On :: Tue, 26 Nov 2019 04:00 EST. Eric Beutner, Laurent Bordes, Laurent Doyen. Source: Bernoulli, Volume 26, Number 1, 557--586. Abstract: Virtual age models are very useful to analyse recurrent events. Among the strengths of these models is their ability to account for treatment (or intervention) effects after an event occurrence. Despite their flexibility for modeling recurrent events, the number of applications is limited. This seems to be a result of the fact that in the semiparametric setting all the existing results assume the virtual age function that describes the treatment (or intervention) effects to be known. This shortcoming can be overcome by considering semiparametric virtual age models with parametrically specified virtual age functions. Yet, fitting such a model is a difficult task. Indeed, it has recently been shown that for these models the standard profile likelihood method fails to lead to consistent estimators. Here we show that consistent estimators can be constructed by smoothing the profile log-likelihood function appropriately. We show that our general result can be applied to most of the relevant virtual age models of the literature. Our approach shows that empirical process techniques may be a worthwhile alternative to martingale methods for studying asymptotic properties of these inference methods. A simulation study is provided to illustrate our consistency results together with an application to real data.
Prediction and estimation consistency of sparse multi-class penalized optimal scoring. By projecteuclid.org. Published On :: Tue, 26 Nov 2019 04:00 EST. Irina Gaynanova. Source: Bernoulli, Volume 26, Number 1, 286--322. Abstract: Sparse linear discriminant analysis via penalized optimal scoring is a successful tool for classification in high-dimensional settings. While the variable selection consistency of sparse optimal scoring has been established, the corresponding prediction and estimation consistency results have been lacking. We bridge this gap by providing probabilistic bounds on the out-of-sample prediction error and estimation error of multi-class penalized optimal scoring, allowing for a diverging number of classes.
Estimation of the linear fractional stable motion. By projecteuclid.org. Published On :: Tue, 26 Nov 2019 04:00 EST. Stepan Mazur, Dmitry Otryakhin, Mark Podolskij. Source: Bernoulli, Volume 26, Number 1, 226--252. Abstract: In this paper, we investigate the parametric inference for the linear fractional stable motion in the high and low frequency setting. The symmetric linear fractional stable motion is a three-parameter family, which constitutes a natural non-Gaussian analogue of the scaled fractional Brownian motion. It is fully characterised by the scaling parameter $\sigma>0$, the self-similarity parameter $H\in(0,1)$ and the stability index $\alpha\in(0,2)$ of the driving stable motion. The parametric estimation of the model is inspired by the limit theory for stationary increments Lévy moving average processes that has been recently studied in (Ann. Probab. 45 (2017) 4477–4528). More specifically, we combine (negative) power variation statistics and empirical characteristic functions to obtain consistent estimates of $(\sigma,\alpha,H)$. We present the law of large numbers and some fully feasible weak limit theorems.
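For intuition, the role of power variation statistics can be illustrated in the Gaussian special case $\alpha=2$ (scaled fractional Brownian motion), where the ratio of quadratic variations at two lags identifies $H$. This is a simplified hypothetical sketch, not the paper's joint $(\sigma,\alpha,H)$ procedure:

```python
import numpy as np

def estimate_hurst(x):
    """Estimate the self-similarity parameter H from the ratio of lag-2 to
    lag-1 quadratic variations: for fractional Brownian motion,
    E[(X_{t+2d} - X_t)^2] / E[(X_{t+d} - X_t)^2] = 2^{2H}."""
    v1 = np.mean(np.diff(x) ** 2)          # lag-1 quadratic variation
    v2 = np.mean((x[2:] - x[:-2]) ** 2)    # lag-2 quadratic variation
    return 0.5 * np.log2(v2 / v1)
```

On a simulated Brownian motion path (where $H=1/2$) the estimator recovers the true value up to sampling error.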
A new method for obtaining sharp compound Poisson approximation error estimates for sums of locally dependent random variables. By projecteuclid.org. Published On :: Thu, 05 Aug 2010 15:41 EDT. Michael V. Boutsikas, Eutichia Vaggelatou. Source: Bernoulli, Volume 16, Number 2, 301--330. Abstract: Let $X_{1},X_{2},\ldots,X_{n}$ be a sequence of independent or locally dependent random variables taking values in $\mathbb{Z}_{+}$. In this paper, we derive sharp bounds, via a new probabilistic method, for the total variation distance between the distribution of the sum $\sum_{i=1}^{n}X_{i}$ and an appropriate Poisson or compound Poisson distribution. These bounds include a factor which depends on the smoothness of the approximating Poisson or compound Poisson distribution. This “smoothness factor” is of order $O(\sigma^{-2})$, according to a heuristic argument, where $\sigma^{2}$ denotes the variance of the approximating distribution. In this way, we offer sharp error estimates for a large range of values of the parameters. Finally, specific examples concerning appearances of rare runs in sequences of Bernoulli trials are presented by way of illustration.
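The simplest instance of the quantity being bounded, the total variation distance between a sum of independent Bernoulli variables and its matching Poisson law, can be computed exactly and checked against the classical Le Cam bound (a hypothetical illustration of the target quantity, not the paper's new method):

```python
import numpy as np
from math import exp, factorial

def tv_bernoulli_sum_vs_poisson(ps):
    """Exact total variation distance between the sum of independent
    Bernoulli(p_i) variables and the Poisson law with the same mean."""
    pmf = np.array([1.0])
    for p in ps:                        # pmf of the sum by convolution
        pmf = np.convolve(pmf, [1.0 - p, p])
    lam = float(sum(ps))
    pois = np.array([exp(-lam) * lam**k / factorial(k) for k in range(len(pmf))])
    # Poisson mass beyond the sum's support also contributes to the distance
    tail = 1.0 - pois.sum()
    return 0.5 * (np.abs(pmf - pois).sum() + tail)
```

Le Cam's inequality guarantees this distance is at most $\sum_i p_i^2$, which the computation respects.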
From the coalfields of Somerset to the Adelaide Hills and beyond : the story of the Hewish Family : three centuries of one family's journey through time / Maureen Brown. By www.catalog.slsa.sa.gov.au. Subject: Hewish Henry -- Family.
Austin-Area District Looks for Digital/Blended Learning Program; Baltimore Seeks High School Literacy Program. By marketbrief.edweek.org. Published On :: Tue, 05 May 2020 22:14:33 +0000. The Round Rock Independent School District in Texas is looking for a digital curriculum and blended learning program. Baltimore is looking for a comprehensive high school literacy program.
Function-Specific Mixing Times and Concentration Away from Equilibrium. By projecteuclid.org. Published On :: Thu, 19 Mar 2020 22:02 EDT. Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan, Martin J. Wainwright. Source: Bayesian Analysis, Volume 15, Number 2, 505--532. Abstract: Slow mixing is the central hurdle in applications of Markov chains, especially those used for Monte Carlo approximations (MCMC). In the setting of Bayesian inference, it is often only of interest to estimate the stationary expectations of a small set of functions, and so the usual definition of mixing based on total variation convergence may be too conservative. Accordingly, we introduce function-specific analogs of mixing times and spectral gaps, and use them to prove Hoeffding-like function-specific concentration inequalities. These results show that it is possible for empirical expectations of functions to concentrate long before the underlying chain has mixed in the classical sense, and we show that the concentration rates we achieve are optimal up to constants. We use our techniques to derive confidence intervals that are sharper than those implied by both classical Markov-chain Hoeffding bounds and Berry-Esseen-corrected central limit theorem (CLT) bounds. For applications that require testing, rather than point estimation, we show similar improvements over recent sequential testing results for MCMC. We conclude by applying our framework to real-data examples of MCMC, providing evidence that our theory is both accurate and relevant to practice.
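In practice, the function-specific perspective is often probed empirically via the integrated autocorrelation time of the particular function evaluated along the chain. The following is a standard MCMC diagnostic sketch, not the paper's spectral-gap machinery:

```python
import numpy as np

def integrated_autocorr_time(fs):
    """Estimate the integrated autocorrelation time 1 + 2*sum_k rho(k) of a
    scalar function evaluated along a chain, truncating the sum at the first
    negative empirical autocorrelation."""
    x = fs - fs.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:]
    acf = acf / (np.arange(n, 0, -1) * x.var())   # normalise so acf[0] == 1
    neg = np.nonzero(acf < 0)[0]
    cut = neg[0] if neg.size else n
    return 1.0 + 2.0 * acf[1:cut].sum()
```

For an i.i.d. sequence (a chain that mixes instantly for every function) the estimate is close to 1; slowly mixing functions give much larger values.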
Bayesian Estimation Under Informative Sampling with Unattenuated Dependence. By projecteuclid.org. Published On :: Mon, 13 Jan 2020 04:00 EST. Matthew R. Williams, Terrance D. Savitsky. Source: Bayesian Analysis, Volume 15, Number 1, 57--77. Abstract: An informative sampling design leads to unit inclusion probabilities that are correlated with the response variable of interest. However, multistage sampling designs may also induce higher order dependencies, which are ignored in the literature when establishing consistency of estimators for survey data under a condition requiring asymptotic independence among the unit inclusion probabilities. This paper constructs new theoretical conditions that guarantee that the pseudo-posterior, which uses sampling weights based on first order inclusion probabilities to exponentiate the likelihood, is consistent not only for survey designs which have asymptotic factorization, but also for survey designs that induce residual or unattenuated dependence among sampled units. The use of the survey-weighted pseudo-posterior, together with our relaxed requirements for the survey design, establishes a wide variety of analysis models that can be applied to a broad class of survey data sets. Using the complex sampling design of the National Survey on Drug Use and Health, we demonstrate our new theoretical result on multistage designs characterized by a cluster sampling step that expresses within-cluster dependence. We explore the impact of multistage designs and order-based sampling.
Estimating the Use of Public Lands: Integrated Modeling of Open Populations with Convolution Likelihood Ecological Abundance Regression. By projecteuclid.org. Published On :: Thu, 19 Dec 2019 22:10 EST. Lutz F. Gruber, Erica F. Stuber, Lyndsie S. Wszola, Joseph J. Fontaine. Source: Bayesian Analysis, Volume 14, Number 4, 1173--1199. Abstract: We present an integrated open population model where the population dynamics are defined by a differential equation, and the related statistical model utilizes a Poisson binomial convolution likelihood. Key advantages of the proposed approach over existing open population models include the flexibility to predict related, but unobserved, quantities such as total immigration or emigration over a specified time period, and more computationally efficient posterior simulation by elimination of the need to explicitly simulate latent immigration and emigration. The viability of the proposed method is shown in an in-depth analysis of outdoor recreation participation on public lands, where the surveyed populations changed rapidly and demographic population closure cannot be assumed even within a single day.
Post-Processing Posteriors Over Precision Matrices to Produce Sparse Graph Estimates. By projecteuclid.org. Published On :: Thu, 19 Dec 2019 22:10 EST. Amir Bashir, Carlos M. Carvalho, P. Richard Hahn, M. Beatrix Jones. Source: Bayesian Analysis, Volume 14, Number 4, 1075--1090. Abstract: A variety of computationally efficient Bayesian models for the covariance matrix of a multivariate Gaussian distribution are available. However, all produce a relatively dense estimate of the precision matrix, and are therefore unsatisfactory when one wishes to use the precision matrix to consider the conditional independence structure of the data. This paper considers the posterior predictive distribution of model fit for these covariance models. We then undertake post-processing of the Bayes point estimate for the precision matrix to produce a sparse model whose expected fit lies within the upper 95% of the posterior predictive distribution of fit. The impact of the method for selecting the zero elements of the precision matrix is evaluated. Good results were obtained using models that encouraged a sparse posterior (G-Wishart, Bayesian adaptive graphical lasso) and selection using credible intervals. We also find that this approach is easily extended to the problem of finding a sparse set of elements that differ across a set of precision matrices, a natural summary when a common set of variables is observed under multiple conditions. We illustrate our findings with moderate dimensional data examples from finance and metabolomics.
Beyond Whittle: Nonparametric Correction of a Parametric Likelihood with a Focus on Bayesian Time Series Analysis. By projecteuclid.org. Published On :: Thu, 19 Dec 2019 22:10 EST. Claudia Kirch, Matthew C. Edwards, Alexander Meier, Renate Meyer. Source: Bayesian Analysis, Volume 14, Number 4, 1037--1073. Abstract: Nonparametric Bayesian inference has seen rapid growth over the last decade, but only a few nonparametric Bayesian approaches to time series analysis have been developed. Most existing approaches use Whittle’s likelihood for Bayesian modelling of the spectral density as the main nonparametric characteristic of stationary time series. It is known that the loss of efficiency using Whittle’s likelihood can be substantial. On the other hand, parametric methods are more powerful than nonparametric methods if the observed time series is close to the considered model class, but fail if the model is misspecified. Therefore, we suggest a nonparametric correction of a parametric likelihood that takes advantage of the efficiency of parametric models while mitigating sensitivities through a nonparametric amendment. We use a nonparametric Bernstein polynomial prior on the spectral density with weights induced by a Dirichlet process and prove posterior consistency for Gaussian stationary time series. Bayesian posterior computations are implemented via an MH-within-Gibbs sampler, and the performance of the nonparametrically corrected likelihood for Gaussian time series is illustrated in a simulation study and in three astronomy applications, including estimating the spectral density of gravitational wave data from the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO).
Constrained Bayesian Optimization with Noisy Experiments. By projecteuclid.org. Published On :: Wed, 13 Mar 2019 22:00 EDT. Benjamin Letham, Brian Karrer, Guilherme Ottoni, Eytan Bakshy. Source: Bayesian Analysis, Volume 14, Number 2, 495--519. Abstract: Randomized experiments are the gold standard for evaluating the effects of changes to real-world systems. Data in these tests may be difficult to collect and outcomes may have high variance, resulting in potentially large measurement error. Bayesian optimization is a promising technique for efficiently optimizing multiple continuous parameters, but existing approaches degrade in performance when the noise level is high, limiting its applicability to many randomized experiments. We derive an expression for expected improvement under greedy batch optimization with noisy observations and noisy constraints, and develop a quasi-Monte Carlo approximation that allows it to be efficiently optimized. Simulations with synthetic functions show that optimization performance on noisy, constrained problems outperforms existing methods. We further demonstrate the effectiveness of the method with two real-world experiments conducted at Facebook: optimizing a ranking system, and optimizing server compiler flags.
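The central quantity above, expected improvement when the incumbent best value is itself observed with noise, can be approximated by plain Monte Carlo. This is a simplified hypothetical sketch; the paper develops a quasi-Monte Carlo approximation for greedy batches with noisy constraints:

```python
import numpy as np

def noisy_expected_improvement(mu, sigma, incumbent_samples, rng, n_mc=20000):
    """Monte Carlo estimate of E[max(f - f_best, 0)], where f ~ N(mu, sigma^2)
    is the surrogate prediction at a candidate point and f_best is drawn from
    posterior samples of the noisy incumbent's true value."""
    f = rng.normal(mu, sigma, size=n_mc)
    f_best = rng.choice(incumbent_samples, size=n_mc)
    return np.maximum(f - f_best, 0.0).mean()
```

A candidate predicted well above the incumbent has expected improvement close to the gap, while one predicted well below has expected improvement near zero.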
tim Gaussianization Machines for Non-Gaussian Function Estimation Models By projecteuclid.org Published On :: Wed, 08 Jan 2020 04:00 EST T. Tony Cai. Source: Statistical Science, Volume 34, Number 4, 635--656.Abstract: A wide range of nonparametric function estimation models have been studied individually in the literature. Among them the homoscedastic nonparametric Gaussian regression is arguably the best known and understood. Inspired by the asymptotic equivalence theory, Brown, Cai and Zhou ( Ann. Statist. 36 (2008) 2055–2084; Ann. Statist. 38 (2010) 2005–2046) and Brown et al. ( Probab. Theory Related Fields 146 (2010) 401–433) developed a unified approach that turns a collection of non-Gaussian function estimation models into a standard Gaussian regression, to which any good Gaussian nonparametric regression method can then be applied. These Gaussianization Machines have two key components, binning and transformation. When combined with BlockJS, a wavelet thresholding procedure for Gaussian regression, the procedures are computationally efficient with strong theoretical guarantees. Technical analysis given in Brown, Cai and Zhou ( Ann. Statist. 36 (2008) 2055–2084; Ann. Statist. 38 (2010) 2005–2046) and Brown et al. ( Probab. Theory Related Fields 146 (2010) 401–433) shows that the estimators attain the optimal rate of convergence adaptively over a large set of Besov spaces and across a collection of non-Gaussian function estimation models, including robust nonparametric regression, density estimation, and nonparametric regression in exponential families. The estimators are also spatially adaptive. The Gaussianization Machines significantly extend the flexibility and scope of the theories and methodologies originally developed for the conventional nonparametric Gaussian regression. This article aims to provide a concise account of the Gaussianization Machines developed in Brown, Cai and Zhou ( Ann. Statist. 36 (2008) 2055–2084; Ann. Statist. 38 (2010) 2005–2046) and Brown et al. ( Probab. Theory Related Fields 146 (2010) 401–433). Full Article
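The binning-plus-transformation idea described above can be sketched for density estimation: bin the data, then apply a variance-stabilizing root transform to the counts, after which standard Gaussian regression machinery applies. This is a simplified sketch of the general idea, not the authors' code; the function name, bin count, and the specific constant 1/4 in the root transform are assumptions of this illustration.

```python
import numpy as np

def gaussianize_counts(x, n_bins=64):
    """Binning + root transform for density estimation.

    Histogram the sample, then map each bin count c to sqrt(c + 1/4).
    For moderately large counts the transformed values are approximately
    Gaussian with nearly constant variance, so a Gaussian nonparametric
    regression method can be applied to (centers, y)."""
    counts, edges = np.histogram(x, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])   # bin midpoints
    y = np.sqrt(counts + 0.25)                 # variance-stabilizing root
    return centers, y
```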
tim User-Friendly Covariance Estimation for Heavy-Tailed Distributions By projecteuclid.org Published On :: Fri, 11 Oct 2019 04:03 EDT Yuan Ke, Stanislav Minsker, Zhao Ren, Qiang Sun, Wen-Xin Zhou. Source: Statistical Science, Volume 34, Number 3, 454--471.Abstract: We provide a survey of recent results on covariance estimation for heavy-tailed distributions. By unifying ideas scattered in the literature, we propose user-friendly methods that facilitate practical implementation. Specifically, we introduce elementwise and spectrumwise truncation operators, as well as their $M$-estimator counterparts, to robustify the sample covariance matrix. Unlike the classical notion of robustness characterized by the breakdown property, we focus on tail robustness, which is evidenced by the connection between nonasymptotic deviation and confidence level. The key insight is that estimators should adapt to the sample size, dimensionality and noise level to achieve an optimal tradeoff between bias and robustness. Furthermore, to facilitate practical implementation, we propose data-driven procedures that automatically calibrate the tuning parameters. We demonstrate their applications to a series of structured models in high dimensions, including bandable and low-rank covariance matrices and sparse precision matrices. Numerical studies lend strong support to the proposed methods. Full Article
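The elementwise truncation operator mentioned in the abstract above can be sketched as follows: each cross-product term is clipped to a fixed interval before averaging, which bounds the influence of heavy-tailed outliers. A fixed truncation level `tau` stands in here for the data-driven calibration the survey discusses, and the function name is hypothetical.

```python
import numpy as np

def truncated_covariance(X, tau):
    """Elementwise-truncated covariance estimator (sketch).

    X: (n, d) data matrix.  Each centered cross-product X_i X_i^T is
    clipped elementwise to [-tau, tau] before averaging, bounding the
    contribution of any single heavy-tailed observation."""
    Xc = X - X.mean(axis=0)
    prods = np.einsum('ni,nj->nij', Xc, Xc)    # per-sample outer products
    return np.clip(prods, -tau, tau).mean(axis=0)
```

With `tau` large enough that nothing is clipped, this reduces to the ordinary sample covariance.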
tim ROS Regression: Integrating Regularization with Optimal Scaling Regression By projecteuclid.org Published On :: Fri, 11 Oct 2019 04:03 EDT Jacqueline J. Meulman, Anita J. van der Kooij, Kevin L. W. Duisters. Source: Statistical Science, Volume 34, Number 3, 361--390.Abstract: We present a methodology for multiple regression analysis that deals with categorical variables (possibly mixed with continuous ones), in combination with regularization, variable selection and high-dimensional data ($P\gg N$). Regularization and optimal scaling (OS) are two important extensions of ordinary least squares regression (OLS) that will be combined in this paper. There are two data analytic situations for which optimal scaling was developed. One is the analysis of categorical data, and the other is the need for transformations because of nonlinear relationships between predictors and outcome. Optimal scaling of categorical data finds quantifications for the categories, both for the predictors and for the outcome variables, that are optimal for the regression model in the sense that they maximize the multiple correlation. When nonlinear relationships exist, nonlinear transformation of predictors and outcome maximizes the multiple correlation in the same way. We will consider a variety of transformation types; typically we use step functions for categorical variables, and smooth (spline) functions for continuous variables. Both types of functions can be restricted to be monotonic, preserving the ordinal information in the data. In combination with optimal scaling, three popular regularization methods will be considered: Ridge regression, the Lasso and the Elastic Net. The resulting method will be called ROS Regression (Regularized Optimal Scaling Regression). The OS algorithm provides straightforward and efficient estimation of the regularized regression coefficients, automatically yields the Group Lasso and Blockwise Sparse Regression, and extends them with the option of maintaining ordinal properties in the data. Extended examples are provided. Full Article
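The quantification step described above can be illustrated for a single nominal predictor: replacing each category by the mean outcome within that category maximizes the correlation between the scaled predictor and the outcome. This toy sketch omits regularization, monotonicity restrictions, and the alternating updates of the full ROS algorithm; the function name and data are illustrative.

```python
import numpy as np

def optimal_quantification(cat, y):
    """Nominal optimal-scaling step for one categorical predictor.

    Replaces each category by the mean of y within that category; in a
    simple regression this quantification maximizes the correlation
    between the scaled predictor and y, which is the criterion optimal
    scaling optimizes."""
    cats, inv = np.unique(cat, return_inverse=True)
    quant = np.array([y[inv == k].mean() for k in range(len(cats))])
    return quant[inv], dict(zip(cats, quant))
```

When the outcome is constant within each category, the scaled predictor reproduces the outcome exactly (correlation 1).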
tim Producing Official County-Level Agricultural Estimates in the United States: Needs and Challenges By projecteuclid.org Published On :: Thu, 18 Jul 2019 22:01 EDT Nathan B. Cruze, Andreea L. Erciulescu, Balgobin Nandram, Wendy J. Barboza, Linda J. Young. Source: Statistical Science, Volume 34, Number 2, 301--316.Abstract: In the United States, county-level estimates of crop yield, production, and acreage published by the United States Department of Agriculture’s National Agricultural Statistics Service (USDA NASS) play an important role in determining the value of payments allotted to farmers and ranchers enrolled in several federal programs. Given the importance of these official county-level crop estimates, NASS continually strives to improve its crops county estimates program in terms of accuracy, reliability and coverage. In 2015, NASS engaged a panel of experts convened under the auspices of the National Academies of Sciences, Engineering, and Medicine Committee on National Statistics (CNSTAT) for guidance on implementing models that may synthesize multiple sources of information into a single estimate, provide defensible measures of uncertainty, and potentially increase the number of publishable county estimates. The final report titled Improving Crop Estimates by Integrating Multiple Data Sources was released in 2017. This paper discusses several needs and requirements for NASS county-level crop estimates that were illuminated during the activities of the CNSTAT panel. A motivating example of planted acreage estimation in Illinois illustrates several challenges that NASS faces as it considers adopting any explicit model for official crops county estimates. Full Article
tim Comment: Empirical Bayes Interval Estimation By projecteuclid.org Published On :: Thu, 18 Jul 2019 22:01 EDT Wenhua Jiang. Source: Statistical Science, Volume 34, Number 2, 219--223.Abstract: This is a contribution to the discussion of the enlightening paper by Professor Efron. We focus on empirical Bayes interval estimation. We discuss the oracle interval estimation rules, the empirical Bayes estimation of the oracle rule and the computation. Some numerical results are reported. Full Article
tim Shoppers Swear These $30 Colorfulkoala Leggings Are the Ultimate Lululemon Dupes By www.health.com Published On :: Wed, 11 Dec 2019 12:44:27 -0500 And they’re available in 19 fun prints. Full Article
tim Optimization of a GCaMP Calcium Indicator for Neural Activity Imaging By www.jneurosci.org Published On :: 2012-10-03 Jasper Akerboom Oct 3, 2012; 32:13819-13840. Cellular Full Article
tim The Representation of Semantic Information Across Human Cerebral Cortex During Listening Versus Reading Is Invariant to Stimulus Modality By www.jneurosci.org Published On :: 2019-09-25 Fatma Deniz Sep 25, 2019; 39:7722-7736. Behavioral/Systems/Cognitive Full Article
tim Increased Neural Activity in Mesostriatal Regions after Prefrontal Transcranial Direct Current Stimulation and L-DOPA Administration By www.jneurosci.org Published On :: 2019-07-03 Benjamin Meyer Jul 3, 2019; 39:5326-5335. Systems/Circuits Full Article
tim Response of Neurons in the Lateral Intraparietal Area during a Combined Visual Discrimination Reaction Time Task By www.jneurosci.org Published On :: 2002-11-01 Jamie D. Roitman Nov 1, 2002; 22:9475-9489. Behavioral Full Article
tim Synaptic Modifications in Cultured Hippocampal Neurons: Dependence on Spike Timing, Synaptic Strength, and Postsynaptic Cell Type By www.jneurosci.org Published On :: 1998-12-15 Guo-qiang Bi Dec 15, 1998; 18:10464-10472. Articles Full Article
tim Seizing the moment to ensure sustained growth By www.bis.org Published On :: 2018-06-24T10:30:00Z Italian translation of the BIS press release on the presentation of the Annual Economic Report 2018, 24 June 2018. Policymakers can ensure that the current economic recovery extends beyond the short term by launching structural reforms, rebuilding room for manoeuvre in monetary and fiscal policy to address possible future threats, and encouraging prompt implementation of regulatory reforms, writes the Bank for International Settlements (BIS) in its Annual Economic Report. ... Full Article
tim In the Time of Virus By www.crmbuyer.com Published On :: 2020-03-19T12:36:58-07:00 Life goes on, though I keep thinking about the coronavirus and its impact on business, which is substantial. I am wondering if it's time to count our blessings even as we remain vigilant. At this point the number of active cases in the U.S. is still small, relative to the population, though the bug has an annoying ability to do math in the form of exponential spread. Full Article
tim Trying Times for Employee Engagement By www.crmbuyer.com Published On :: 2020-03-28T04:00:00-07:00 These days are either the most trying time for encouraging employee engagement or the best we could expect. With so many people working remotely, many businesses need extra ways to communicate with the rank and file, and this might present a prime opportunity to try new things. We make a big deal of engaging the customer, and in most CRM circles engagement outranks simple customer experience. Full Article
tim Timothy Egan: John Roberts' America By opinionator.blogs.nytimes.com Published On :: Full Article
tim Central banking in challenging times By www.bis.org Published On :: 2019-11-08T12:30:00Z Speech by Mr Claudio Borio, Head of the Monetary and Economic Department of the BIS, at the SUERF Annual Lecture Conference on "Populism, Economic Policies and Central Banking", SUERF/BAFFI CAREFIN Centre Conference, Milan, 8 November 2019. Full Article
tim Zero Hunger is possible ‘within our lifetimes' By www.fao.org Published On :: Tue, 24 Sep 2013 00:00:00 GMT FAO Director-General José Graziano da Silva underlined his firm belief that a hunger-free world is possible "within our lifetimes," during high-level talks in New York. "The Zero Hunger Challenge calls for something new – something bold, but long overdue," he said. It was a "decisive global commitment to end hunger, eliminate childhood stunting, make all food systems sustainable, eradicate rural poverty, [...] Full Article
tim It's about time we talk about soil! By www.fao.org Published On :: Thu, 08 Jan 2015 00:00:00 GMT There can be no life without it: it feeds us, and we are responsible for it! Soil is formed from rocks that are decomposed slowly by the sun, the wind and the rain, and by animals and plants. But it is in danger because of expanding cities, deforestation, unsustainable land use and management practices, pollution, overgrazing and climate change. The current rate [...] Full Article
tim Water Scarcity – One of the greatest challenges of our time By www.fao.org Published On :: Wed, 12 Apr 2017 00:00:00 GMT Water is essential for agricultural production and food security. It is the lifeblood of ecosystems, including forests, lakes and wetlands, on which the food and nutritional security of present and future generations depends. Yet, our freshwater resources are dwindling at an alarming rate. Growing water scarcity is now one of the leading challenges for sustainable development. This challenge will [...] Full Article
tim Antimicrobial resistance – What you need to know By www.fao.org Published On :: Tue, 14 Nov 2017 00:00:00 GMT An estimated 700 000 people die each year from antimicrobial-resistant (AMR) infections and an untold number of sick animals may not be responding to treatment. AMR is a significant global threat to public health, food safety and security, as well as to livelihoods, animal production and economic and agricultural development. The intensification of agricultural production has led to a rising use of antimicrobials – a use that is expected to more than double by 2030. Antimicrobials are important for the treatment of animal and plant diseases [...] Full Article
tim 07.05.11: Sometimes I have so much fun I forget about everything. By www.explodingdog.com Published On :: Full Article
tim After a Lifetime of Donkey Polo, This Chinese Noblewoman Asked to Be Buried With Her Steeds By www.smithsonianmag.com Published On :: Wed, 18 Mar 2020 14:51:56 +0000 New research reveals a Tang Dynasty woman's love for sports—and big-eared, braying equids Full Article
tim Stores Launch Special Shopping Times for Seniors and Other Groups Vulnerable to COVID-19 By www.smithsonianmag.com Published On :: Thu, 19 Mar 2020 12:00:00 +0000 But will that keep susceptible populations safe? Full Article