A Bayesian nonparametric approach to log-concave density estimation
By projecteuclid.org. Published Fri, 31 Jan 2020 04:06 EST. Ester Mariucci, Kolyan Ray, Botond Szabó. Source: Bernoulli, Volume 26, Number 2, 1070–1097.
Abstract: The estimation of a log-concave density on $\mathbb{R}$ is a canonical problem in the area of shape-constrained nonparametric inference. We present a Bayesian nonparametric approach to this problem based on an exponentiated Dirichlet process mixture prior and show that the posterior distribution converges to the log-concave truth at the (near-) minimax rate in Hellinger distance. Our proof proceeds by establishing a general contraction result based on the log-concave maximum likelihood estimator that obviates the need for further metric entropy calculations. We further present computationally more feasible approximations and both an empirical and a hierarchical Bayes approach. All priors are illustrated numerically via simulations.

Stable processes conditioned to hit an interval continuously from the outside
By projecteuclid.org. Published Fri, 31 Jan 2020 04:06 EST. Leif Döring, Philip Weissmann. Source: Bernoulli, Volume 26, Number 2, 980–1015.
Abstract: Conditioning stable Lévy processes on zero-probability events has recently become a tractable subject, since several explicit formulas emerged from a deep analysis using the Lamperti transformations for self-similar Markov processes. In this article, we derive new harmonic functions and use them to explain how to condition stable processes to hit a compact interval continuously from the outside.

Convergence of the age structure of general schemes of population processes
By projecteuclid.org. Published Fri, 31 Jan 2020 04:06 EST. Jie Yen Fan, Kais Hamza, Peter Jagers, Fima Klebaner. Source: Bernoulli, Volume 26, Number 2, 893–926.
Abstract: We consider a family of general branching processes with reproduction parameters depending on the age of the individual as well as the population age structure and a parameter $K$, which may represent the carrying capacity. These processes are Markovian in the age structure. In a previous paper (Proc. Steklov Inst. Math. 282 (2013) 90–105), the Law of Large Numbers as $K \to \infty$ was derived. Here we prove the central limit theorem, namely the weak convergence of the fluctuation processes in an appropriate Skorokhod space. We also show that the limit is driven by a stochastic partial differential equation.

Convergence and concentration of empirical measures under Wasserstein distance in unbounded functional spaces
By projecteuclid.org. Published Tue, 26 Nov 2019 04:00 EST. Jing Lei. Source: Bernoulli, Volume 26, Number 1, 767–798.
Abstract: We provide upper bounds on the expected Wasserstein distance between a probability measure and its empirical version, generalizing recent results for finite-dimensional Euclidean spaces and bounded functional spaces. Such a generalization can cover Euclidean spaces with large dimensionality, with the optimal dependence on the dimensionality. Our method also covers the important case of Gaussian processes in separable Hilbert spaces, with rate-optimal upper bounds for functional data distributions whose coordinates decay geometrically or polynomially. Moreover, our bounds on the expected value can be combined with mean-concentration results to yield improved exponential tail probability bounds for the Wasserstein error of empirical measures under Bernstein-type or log-Sobolev-type conditions.

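A minimal one-dimensional illustration of the quantity being bounded (not the paper's functional-space setting): the expected Wasserstein-1 distance between a law and its empirical version shrinks as the sample size grows. The large "reference" sample standing in for the true distribution is an assumption of this sketch.

# Sketch: W1 distance between an empirical measure and (a proxy for) the true law.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
reference = rng.standard_normal(200_000)      # large sample standing in for the true law

for n in [100, 1_000, 10_000]:
    dists = [wasserstein_distance(rng.standard_normal(n), reference)
             for _ in range(20)]               # Monte Carlo average over 20 replications
    print(f"n={n:>6}  mean W1 ~ {np.mean(dists):.4f}")
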
Consistent semiparametric estimators for recurrent event times models with application to virtual age models
By projecteuclid.org. Published Tue, 26 Nov 2019 04:00 EST. Eric Beutner, Laurent Bordes, Laurent Doyen. Source: Bernoulli, Volume 26, Number 1, 557–586.
Abstract: Virtual age models are very useful for analysing recurrent events. Among the strengths of these models is their ability to account for treatment (or intervention) effects after an event occurrence. Despite their flexibility for modeling recurrent events, the number of applications is limited. This seems to stem from the fact that, in the semiparametric setting, all existing results assume the virtual age function that describes the treatment (or intervention) effects to be known. This shortcoming can be overcome by considering semiparametric virtual age models with parametrically specified virtual age functions. Yet, fitting such a model is a difficult task. Indeed, it has recently been shown that for these models the standard profile likelihood method fails to lead to consistent estimators. Here we show that consistent estimators can be constructed by smoothing the profile log-likelihood function appropriately. We show that our general result can be applied to most of the relevant virtual age models of the literature. Our approach shows that empirical process techniques may be a worthwhile alternative to martingale methods for studying asymptotic properties of these inference methods. A simulation study is provided to illustrate our consistency results, together with an application to real data.

Construction results for strong orthogonal arrays of strength three
By projecteuclid.org. Published Tue, 26 Nov 2019 04:00 EST. Chenlu Shi, Boxin Tang. Source: Bernoulli, Volume 26, Number 1, 418–431.
Abstract: Strong orthogonal arrays were recently introduced as a class of space-filling designs for computer experiments. The most attractive are those of strength three, for their economical run sizes. Although the existence of strong orthogonal arrays of strength three has been completely characterized, the construction of these arrays has not been explored. In this paper, we provide a systematic and comprehensive study on the construction of these arrays, with the aim of better space-filling properties. Besides various characterizing results, three families of strength-three strong orthogonal arrays are presented. One of these families deserves special mention, as the arrays in this family enjoy almost all of the space-filling properties of strength-four strong orthogonal arrays, and do so with much more economical run sizes than the latter. The theory of maximal designs and their doubling constructions plays a crucial role in many of the theoretical developments.

SPDEs with fractional noise in space: Continuity in law with respect to the Hurst index
By projecteuclid.org. Published Tue, 26 Nov 2019 04:00 EST. Luca M. Giordano, Maria Jolis, Lluís Quer-Sardanyons. Source: Bernoulli, Volume 26, Number 1, 352–386.
Abstract: In this article, we consider the quasi-linear stochastic wave and heat equations on the real line and with an additive Gaussian noise which is white in time and behaves in space like a fractional Brownian motion with Hurst index $H \in (0,1)$. The drift term is assumed to be globally Lipschitz. We prove that the solution of each of the above equations is continuous in terms of the index $H$, with respect to the convergence in law in the space of continuous functions.

Weak convergence of quantile and expectile processes under general assumptions
By projecteuclid.org. Published Tue, 26 Nov 2019 04:00 EST. Tobias Zwingmann, Hajo Holzmann. Source: Bernoulli, Volume 26, Number 1, 323–351.
Abstract: We show weak convergence of quantile and expectile processes to Gaussian limit processes in the space of bounded functions endowed with an appropriate semimetric which is based on the concepts of epi- and hypo-convergence as introduced in A. Bücher, J. Segers and S. Volgushev (2014), "When Uniform Weak Convergence Fails: Empirical Processes for Dependence Functions and Residuals via Epi- and Hypographs", Annals of Statistics 42. We impose assumptions for which it is known that weak convergence with respect to the supremum norm generally fails to hold. For quantiles, we consider stationary observations, where the marginal distribution function is assumed to be strictly increasing and continuous except for finitely many points and to admit strictly positive, possibly infinite, left- and right-sided derivatives. For expectiles, we focus on independent and identically distributed (i.i.d.) observations. Only a finite second moment and continuity at the boundary points, but no further smoothness properties of the distribution function, are required. We also show consistency of the bootstrap for this mode of convergence in the i.i.d. case for quantiles and expectiles.

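For readers less familiar with expectiles: a sample $\tau$-expectile minimizes an asymmetrically weighted squared loss, and can be found from the first-order condition. A minimal sketch (function names are illustrative, not from the paper; bisection is only one of several ways to solve the condition):

# Sketch: sample tau-expectile via bisection on the asymmetric least-squares
# first-order condition tau*E[(X-m)+] = (1-tau)*E[(m-X)+].
import numpy as np

def sample_expectile(x, tau=0.5, tol=1e-10):
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    while hi - lo > tol:
        m = 0.5 * (lo + hi)
        # score: positive when m lies below the tau-expectile
        score = tau * np.sum(np.maximum(x - m, 0)) - (1 - tau) * np.sum(np.maximum(m - x, 0))
        if score > 0:
            lo = m
        else:
            hi = m
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
data = rng.exponential(size=1_000)
print(sample_expectile(data, tau=0.5))   # the 0.5-expectile equals the sample mean
print(sample_expectile(data, tau=0.9))
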
Prediction and estimation consistency of sparse multi-class penalized optimal scoring
By projecteuclid.org. Published Tue, 26 Nov 2019 04:00 EST. Irina Gaynanova. Source: Bernoulli, Volume 26, Number 1, 286–322.
Abstract: Sparse linear discriminant analysis via penalized optimal scoring is a successful tool for classification in high-dimensional settings. While the variable selection consistency of sparse optimal scoring has been established, the corresponding prediction and estimation consistency results have been lacking. We bridge this gap by providing probabilistic bounds on the out-of-sample prediction error and estimation error of multi-class penalized optimal scoring, allowing for a diverging number of classes.

Needles and straw in a haystack: Robust confidence for possibly sparse sequences
By projecteuclid.org. Published Tue, 26 Nov 2019 04:00 EST. Eduard Belitser, Nurzhan Nurushev. Source: Bernoulli, Volume 26, Number 1, 191–225.
Abstract: In the general signal-plus-noise model (allowing non-normal, non-independent observations), we construct an empirical Bayes posterior which we then use for uncertainty quantification for the unknown, possibly sparse, signal. We introduce a novel excessive bias restriction (EBR) condition, which gives rise to a new slicing of the entire space that is suitable for uncertainty quantification. Under EBR and some mild exchangeable exponential moment condition on the noise, we establish the local (oracle) optimality of the proposed confidence ball. Without EBR, we propose another confidence ball of full coverage, but its radius contains an additional $\sigma n^{1/4}$ term. In passing, we also obtain local optimality results for estimation, posterior contraction problems, and the problem of weak recovery of the sparsity structure. Adaptive minimax results (also for the estimation and posterior contraction problems) over various sparsity classes follow from our local results.

Discover Protestant nonconformity in England and Wales / Paul Blake.
By www.catalog.slsa.sa.gov.au. Subject: Dissenters, Religious -- Great Britain.

Economists Expect Huge Future Earnings Loss for Students Missing School Due to COVID-19
By marketbrief.edweek.org. Published Mon, 04 May 2020 14:47:10 +0000.
Members of the future American workforce could see losses of earnings that add up to trillions of dollars, depending on how long coronavirus-related school closures persist.

4 Ways to Help Students Cultivate Meaningful Connections Through Tech
By marketbrief.edweek.org. Published Thu, 07 May 2020 15:19:55 +0000.
The CEO of Move This World isn't big on screen time, but in the midst of the coronavirus pandemic, technology, when used with care, can help strengthen relationships.

Calif. Ed-Tech Consortium Seeks Media Repository Solutions; Saint Paul District Needs Background Check Services
By marketbrief.edweek.org. Published Fri, 08 May 2020 13:52:21 +0000.
Saint Paul schools are in the market for a vendor to provide background checks, while the Education Technology Joint Powers Authority is seeking media repositories. A Texas district wants quotes on technology for new campuses.

Item 01: Notebooks (2) containing handwritten copies of 123 letters from Major William Alan Audsley to his parents, ca. 1916–ca. 1919, transcribed by his father. Also includes original letters (2) written by Major Audsley.
By feedproxy.google.com. Published 28/05/2015 11:00:09 AM.

Item 01: Autograph letter signed, from Hume, Appin, to William E. Riley, concerning an account for money owed by Riley, 4 September 1834.
By feedproxy.google.com. Published 14/07/2015 9:51:03 AM.

Delta, citing health concerns, drops service to 10 US airports. Is yours on the list?
By news.yahoo.com. Published Fri, 08 May 2020 18:41:45 -0400.
Delta said it is making the move to protect employees amid the coronavirus pandemic, but planes have been flying near empty.

Chaffetz: I don't understand why Adam Schiff continues to have a security clearance
By news.yahoo.com. Published Fri, 08 May 2020 14:43:30 -0400.
Fox News contributor Jason Chaffetz and Andy McCarthy react to House Intelligence transcripts on the Russia probe.

Almost 12,000 meatpacking and food plant workers have reportedly contracted COVID-19. At least 48 have died.
By news.yahoo.com. Published Fri, 08 May 2020 12:21:01 -0400.
The infections and deaths are spread across roughly two farms and 189 meat and processed food factories.

Function-Specific Mixing Times and Concentration Away from Equilibrium
By projecteuclid.org. Published Thu, 19 Mar 2020 22:02 EDT. Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan, Martin J. Wainwright. Source: Bayesian Analysis, Volume 15, Number 2, 505–532.
Abstract: Slow mixing is the central hurdle in applications of Markov chains, especially those used for Monte Carlo approximations (MCMC). In the setting of Bayesian inference, it is often only of interest to estimate the stationary expectations of a small set of functions, and so the usual definition of mixing based on total variation convergence may be too conservative. Accordingly, we introduce function-specific analogs of mixing times and spectral gaps, and use them to prove Hoeffding-like function-specific concentration inequalities. These results show that it is possible for empirical expectations of functions to concentrate long before the underlying chain has mixed in the classical sense, and we show that the concentration rates we achieve are optimal up to constants. We use our techniques to derive confidence intervals that are sharper than those implied by both classical Markov-chain Hoeffding bounds and Berry–Esseen-corrected central limit theorem (CLT) bounds. For applications that require testing, rather than point estimation, we show similar improvements over recent sequential testing results for MCMC. We conclude by applying our framework to real-data examples of MCMC, providing evidence that our theory is both accurate and relevant to practice.

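A toy illustration of the phenomenon the abstract describes (my construction, not the authors' examples): on a product chain whose second coordinate mixes very slowly, ergodic averages of a function of the fast coordinate concentrate long before the full chain is anywhere near stationarity.

# Sketch: function-specific concentration on a slowly mixing product chain.
import numpy as np

rng = np.random.default_rng(2)

def step(state, eps_fast=0.5, eps_slow=0.001):
    """One step of two independent two-state chains with flip probabilities eps_*."""
    fast, slow = state
    if rng.random() < eps_fast:
        fast = 1 - fast
    if rng.random() < eps_slow:
        slow = 1 - slow
    return fast, slow

T = 2_000
state = (0, 0)                      # deterministic, non-stationary start
f_fast, f_slow = [], []
for _ in range(T):
    state = step(state)
    f_fast.append(state[0])
    f_slow.append(state[1])

# Both functions have stationary mean 1/2; only the first is close after T steps.
print("average of f(fast coordinate):", np.mean(f_fast))
print("average of f(slow coordinate):", np.mean(f_slow))
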
High-Dimensional Posterior Consistency for Hierarchical Non-Local Priors in Regression
By projecteuclid.org. Published Mon, 13 Jan 2020 04:00 EST. Xuan Cao, Kshitij Khare, Malay Ghosh. Source: Bayesian Analysis, Volume 15, Number 1, 241–262.
Abstract: The choice of tuning parameters in Bayesian variable selection is a critical problem in modern statistics. In particular, for Bayesian linear regression with non-local priors, the scale parameter in the non-local prior density is an important tuning parameter which reflects the dispersion of the non-local prior density around zero, and implicitly determines the size of the regression coefficients that will be shrunk to zero. Current approaches treat the scale parameter as given, and suggest choices based on prior coverage/asymptotic considerations. In this paper, we consider the fully Bayesian approach introduced in (Wu, 2016) with the pMOM non-local prior and an appropriate Inverse-Gamma prior on the tuning parameter to analyze the underlying theoretical properties. Under standard regularity assumptions, we establish strong model selection consistency in a high-dimensional setting, where $p$ is allowed to increase at a polynomial rate with $n$ or even at a sub-exponential rate with $n$. Through simulation studies, we demonstrate that our model selection procedure can outperform other Bayesian methods which treat the scale parameter as given, and commonly used penalized likelihood methods, in a range of simulation settings.

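For orientation, the first-order pMOM (product moment) non-local prior on a single coefficient takes, to my understanding, the Johnson–Rossell form below: a normal density multiplied by $\beta_j^2$, which forces zero mass at the origin. The Inverse-Gamma layer on the scale $\tau$ is the hierarchical component described in the abstract; the hyperparameters $a,b$ are generic placeholders, not values from the paper.

% First-order pMOM density on a coefficient, with the scale given a hyperprior.
\pi(\beta_j \mid \tau, \sigma^2)
  = \frac{\beta_j^2}{\tau\sigma^2}\,(2\pi\tau\sigma^2)^{-1/2}
    \exp\!\left(-\frac{\beta_j^2}{2\tau\sigma^2}\right),
\qquad
\tau \sim \mathrm{Inverse\text{-}Gamma}(a, b).
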
Estimating the Use of Public Lands: Integrated Modeling of Open Populations with Convolution Likelihood Ecological Abundance Regression
By projecteuclid.org. Published Thu, 19 Dec 2019 22:10 EST. Lutz F. Gruber, Erica F. Stuber, Lyndsie S. Wszola, Joseph J. Fontaine. Source: Bayesian Analysis, Volume 14, Number 4, 1173–1199.
Abstract: We present an integrated open population model where the population dynamics are defined by a differential equation, and the related statistical model utilizes a Poisson binomial convolution likelihood. Key advantages of the proposed approach over existing open population models include the flexibility to predict related, but unobserved, quantities such as total immigration or emigration over a specified time period, and more computationally efficient posterior simulation by eliminating the need to explicitly simulate latent immigration and emigration. The viability of the proposed method is shown in an in-depth analysis of outdoor recreation participation on public lands, where the surveyed populations changed rapidly and demographic population closure cannot be assumed even within a single day.

A Bayesian Conjugate Gradient Method (with Discussion)
By projecteuclid.org. Published Mon, 02 Dec 2019 04:00 EST. Jon Cockayne, Chris J. Oates, Ilse C. F. Ipsen, Mark Girolami. Source: Bayesian Analysis, Volume 14, Number 3, 937–1012.
Abstract: A fundamental task in numerical computation is the solution of large linear systems. The conjugate gradient method is an iterative method which offers rapid convergence to the solution, particularly when an effective preconditioner is employed. However, for more challenging systems a substantial error can be present even after many iterations have been performed. The estimates obtained in this case are of little value unless further information can be provided about, for example, the magnitude of the error. In this paper we propose a novel statistical model for this error, set in a Bayesian framework. Our approach is a strict generalisation of the conjugate gradient method, which is recovered as the posterior mean for a particular choice of prior. The estimates obtained are analysed with Krylov subspace methods and a contraction result for the posterior is presented. The method is then analysed in a simulation study as well as being applied to a challenging problem in medical imaging.

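As a point of reference, here is a minimal sketch of the deterministic conjugate gradient iteration that the abstract says is recovered as a posterior mean under a particular prior. This is plain CG in NumPy, not the authors' probabilistic version; the test matrix is arbitrary.

# Classical conjugate gradients for Ax = b, A symmetric positive definite.
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(3)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)          # well-conditioned SPD test matrix
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))       # ~0: solved to tolerance
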
High-Dimensional Confounding Adjustment Using Continuous Spike and Slab Priors
By projecteuclid.org. Published Tue, 11 Jun 2019 04:00 EDT. Joseph Antonelli, Giovanni Parmigiani, Francesca Dominici. Source: Bayesian Analysis, Volume 14, Number 3, 825–848.
Abstract: In observational studies, estimation of a causal effect of a treatment on an outcome relies on proper adjustment for confounding. If the number of potential confounders ($p$) is larger than the number of observations ($n$), then direct control for all potential confounders is infeasible. Existing approaches for dimension reduction and penalization are generally aimed at predicting the outcome, and are less suited for estimation of causal effects. Under standard penalization approaches (e.g., Lasso), if a variable $X_{j}$ is strongly associated with the treatment $T$ but weakly with the outcome $Y$, the coefficient $\beta_{j}$ will be shrunk towards zero, thus leading to confounding bias. Under the assumption of a linear model for the outcome and sparsity, we propose continuous spike and slab priors on the regression coefficients $\beta_{j}$ corresponding to the potential confounders $X_{j}$. Specifically, we introduce a prior distribution that does not heavily shrink to zero the coefficients ($\beta_{j}$'s) of the $X_{j}$'s that are strongly associated with $T$ but weakly associated with $Y$. We compare our proposed approach to several state-of-the-art methods proposed in the literature. Our proposed approach has the following features: 1) it reduces confounding bias in high-dimensional settings; 2) it shrinks towards zero coefficients of instrumental variables; and 3) it achieves good coverage even in small sample sizes. We apply our approach to the National Health and Nutrition Examination Survey (NHANES) data to estimate the causal effects of persistent pesticide exposure on triglyceride levels.

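For readers unfamiliar with the prior family, a generic continuous spike-and-slab prior mixes a narrow "spike" normal and a diffuse "slab" normal according to a latent inclusion indicator, as sketched below. The specific scales and the modification that protects treatment-associated covariates from over-shrinkage are paper-specific; this is only the generic template.

% Generic continuous spike-and-slab prior on a regression coefficient.
\beta_j \mid \gamma_j \sim \gamma_j\, N(0, \nu_1) + (1-\gamma_j)\, N(0, \nu_0),
\qquad
\gamma_j \mid \theta \sim \mathrm{Bernoulli}(\theta), \qquad \nu_0 \ll \nu_1 .
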
A Bayesian Nonparametric Multiple Testing Procedure for Comparing Several Treatments Against a Control
By projecteuclid.org. Published Fri, 31 May 2019 22:05 EDT. Luis Gutiérrez, Andrés F. Barrientos, Jorge González, Daniel Taylor-Rodríguez. Source: Bayesian Analysis, Volume 14, Number 2, 649–675.
Abstract: We propose a Bayesian nonparametric strategy to test for differences between a control group and several treatment regimes. Most of the existing tests for this type of comparison are based on the differences between location parameters. In contrast, our approach identifies differences across the entire distribution, avoids strong modeling assumptions over the distributions for each treatment, and accounts for multiple testing through the prior distribution on the space of hypotheses. The proposal is compared to other commonly used hypothesis testing procedures under simulated scenarios. Two real applications are also analyzed with the proposed methodology.

Alleviating Spatial Confounding for Areal Data Problems by Displacing the Geographical Centroids
By projecteuclid.org. Published Fri, 31 May 2019 22:05 EDT. Marcos Oliveira Prates, Renato Martins Assunção, Erica Castilho Rodrigues. Source: Bayesian Analysis, Volume 14, Number 2, 623–647.
Abstract: Spatial confounding between spatial random effects and fixed-effect covariates has recently been identified and shown to bring potentially misleading interpretation to model results. Techniques to alleviate this problem are based on decomposing the spatial random effect and fitting a restricted spatial regression. In this paper, we propose a different approach: a transformation of the geographic space that ensures the unobserved spatial random effect added to the regression is orthogonal to the fixed-effect covariates. Our approach, named SPOCK, has the additional benefit of providing a fast and simple computational method to estimate the parameters. Also, it does not constrain the distribution class assumed for the spatial error term. A simulation study and real data analyses are presented to better understand the advantages of the new method in comparison with existing ones.

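The alternative that the abstract contrasts with, restricted spatial regression, forces the spatial random effect into the orthogonal complement of the fixed-effects design matrix. SPOCK itself instead displaces the centroids before building the spatial structure; the sketch below shows only the projection idea behind the restricted approach, with made-up inputs.

# Sketch: project a spatial effect onto the orthogonal complement of X.
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
w = rng.standard_normal(n)                     # an unconstrained spatial effect

P = X @ np.linalg.solve(X.T @ X, X.T)          # projection onto col(X)
w_restricted = (np.eye(n) - P) @ w             # component orthogonal to X

print(np.abs(X.T @ w_restricted).max())        # ~0: no linear confounding with X
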
Constrained Bayesian Optimization with Noisy Experiments
By projecteuclid.org. Published Wed, 13 Mar 2019 22:00 EDT. Benjamin Letham, Brian Karrer, Guilherme Ottoni, Eytan Bakshy. Source: Bayesian Analysis, Volume 14, Number 2, 495–519.
Abstract: Randomized experiments are the gold standard for evaluating the effects of changes to real-world systems. Data in these tests may be difficult to collect and outcomes may have high variance, resulting in potentially large measurement error. Bayesian optimization is a promising technique for efficiently optimizing multiple continuous parameters, but existing approaches degrade in performance when the noise level is high, limiting its applicability to many randomized experiments. We derive an expression for expected improvement under greedy batch optimization with noisy observations and noisy constraints, and develop a quasi-Monte Carlo approximation that allows it to be efficiently optimized. Simulations with synthetic functions show that optimization performance on noisy, constrained problems outperforms existing methods. We further demonstrate the effectiveness of the method with two real-world experiments conducted at Facebook: optimizing a ranking system, and optimizing server compiler flags.

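A rough sketch of the quasi-Monte Carlo idea in the unconstrained case: jointly sample the latent function at the previously evaluated points and at a candidate from the Gaussian-process posterior, then average the positive part of the improvement over the best observed latent value. The posterior mean and covariance below are made-up placeholders rather than a fitted GP, and the constraint handling and batch greedy step of the paper are omitted.

# Sketch: noisy expected improvement at one candidate via scrambled Sobol QMC.
import numpy as np
from scipy.stats import norm, qmc

mu = np.array([0.2, 0.5, 0.4, 0.7])            # posterior means: 3 observed points + candidate (last)
L = np.linalg.cholesky(np.array([              # posterior covariance (placeholder values)
    [0.10, 0.02, 0.01, 0.01],
    [0.02, 0.08, 0.02, 0.03],
    [0.01, 0.02, 0.09, 0.02],
    [0.01, 0.03, 0.02, 0.12],
]))

sobol = qmc.Sobol(d=4, scramble=True, seed=5)
u = sobol.random(2**12)                        # quasi-Monte Carlo points in (0,1)^4
z = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))     # map to standard normals
f = mu + z @ L.T                               # joint posterior draws of the latent function

improvement = np.maximum(f[:, -1] - f[:, :-1].max(axis=1), 0.0)
print("noisy EI estimate:", improvement.mean())
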
Control of Type I Error Rates in Bayesian Sequential Designs
By projecteuclid.org. Published Wed, 13 Mar 2019 22:00 EDT. Haolun Shi, Guosheng Yin. Source: Bayesian Analysis, Volume 14, Number 2, 399–425.
Abstract: Bayesian approaches to phase II clinical trial designs are usually based on the posterior distribution of the parameter of interest and calibration of a certain threshold for decision making. If the posterior probability is computed and assessed in a sequential manner, the design may involve the problem of multiplicity, which, however, is often a neglected aspect in Bayesian trial designs. To effectively maintain the overall type I error rate, we propose solutions to the problem of multiplicity for Bayesian sequential designs and, in particular, the determination of the cutoff boundaries for the posterior probabilities. We present both theoretical and numerical methods for finding the optimal posterior probability boundaries with $\alpha$-spending functions that mimic those of the frequentist group sequential designs. The theoretical approach is based on the asymptotic properties of the posterior probability, which establishes a connection between the Bayesian trial design and the frequentist group sequential method. The numerical approach uses a sandwich-type searching algorithm, which immensely reduces the computational burden. We apply least-squares fitting to find the $\alpha$-spending function closest to the target. We discuss the application of our method to single-arm and double-arm cases with binary and normal endpoints, respectively, and provide a real trial example for each case.

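A brute-force version of the calibration problem the abstract addresses: for a single-arm binary endpoint with interim looks, search for a constant posterior-probability cutoff whose simulated type I error under the null stays at or below the nominal level. The paper's sandwich-type search and $\alpha$-spending matching are considerably more refined; all design numbers here are illustrative.

# Sketch: calibrate a constant posterior-probability efficacy cutoff by simulation.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(6)
p0, alpha = 0.2, 0.05                    # null response rate, target type I error
looks = [10, 20, 30, 40]                 # interim/final sample sizes
a0, b0 = 1.0, 1.0                        # Beta(1, 1) prior on the response rate

def rejects(data, cutoff):
    """Stop for efficacy if P(p > p0 | data) > cutoff at any look."""
    for n in looks:
        s = data[:n].sum()
        if 1 - beta.cdf(p0, a0 + s, b0 + n - s) > cutoff:
            return True
    return False

sims = [rng.random(looks[-1]) < p0 for _ in range(5_000)]     # trials under the null
for cutoff in np.arange(0.90, 1.0, 0.005):
    type1 = np.mean([rejects(d, cutoff) for d in sims])
    if type1 <= alpha:
        print(f"cutoff {cutoff:.3f}: simulated type I error {type1:.3f}")
        break
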
A Tale of Two Parasites: Statistical Modelling to Support Disease Control Programmes in Africa
By projecteuclid.org. Published Tue, 03 Mar 2020 04:00 EST. Peter J. Diggle, Emanuele Giorgi, Julienne Atsame, Sylvie Ntsame Ella, Kisito Ogoussan, Katherine Gass. Source: Statistical Science, Volume 35, Number 1, 42–50.
Abstract: Vector-borne diseases have long presented major challenges to the health of rural communities in the wet tropical regions of the world, but especially in sub-Saharan Africa. In this paper, we describe the contribution that statistical modelling has made to the global elimination programme for one vector-borne disease, onchocerciasis. We explain why information on the spatial distribution of a second vector-borne disease, Loa loa, is needed before communities at high risk of onchocerciasis can be treated safely with mass distribution of ivermectin, an antifilarial medication. We show how a model-based geostatistical analysis of Loa loa prevalence survey data can be used to map the predictive probability that each location in the region of interest meets a WHO policy guideline for safe mass distribution of ivermectin and describe two applications: one is to data from Cameroon that assesses prevalence using traditional blood-smear microscopy; the other is to Africa-wide data that uses a low-cost questionnaire-based method. We describe how a recent technological development in image-based microscopy has resulted in a change of emphasis from prevalence alone to the bivariate spatial distribution of prevalence and the intensity of infection among infected individuals. We discuss how statistical modelling of the kind described here can contribute to health policy guidelines and decision-making in two ways. One is to ensure that, in a resource-limited setting, prevalence surveys are designed, and the resulting data analysed, as efficiently as possible. The other is to provide an honest quantification of the uncertainty attached to any binary decision by reporting predictive probabilities that a policy-defined condition for action is or is not met.

Larry Brown's Contributions to Parametric Inference, Decision Theory and Foundations: A Survey
By projecteuclid.org. Published Wed, 08 Jan 2020 04:00 EST. James O. Berger, Anirban DasGupta. Source: Statistical Science, Volume 34, Number 4, 621–634.
Abstract: This article gives a panoramic survey of the general area of parametric statistical inference, decision theory and foundations of statistics for the period 1965–2010 through the lens of Larry Brown's contributions to varied aspects of this massive area. The article goes over sufficiency, shrinkage estimation, admissibility, minimaxity, complete class theorems, estimated confidence, conditional confidence procedures, Edgeworth and higher order asymptotic expansions, variational Bayes, Stein's SURE, differential inequalities, geometrization of convergence rates, asymptotic equivalence, aspects of empirical process theory, inference after model selection, unified frequentist and Bayesian testing, and Wald's sequential theory. A reasonably comprehensive bibliography is provided.

Comment: "Models as Approximations I: Consequences Illustrated with Linear Regression" by A. Buja, R. Berk, L. Brown, E. George, E. Pitkin, L. Zhan and K. Zhang
By projecteuclid.org. Published Wed, 08 Jan 2020 04:00 EST. Roderick J. Little. Source: Statistical Science, Volume 34, Number 4, 580–583.

Models as Approximations I: Consequences Illustrated with Linear Regression
By projecteuclid.org. Published Wed, 08 Jan 2020 04:00 EST. Andreas Buja, Lawrence Brown, Richard Berk, Edward George, Emil Pitkin, Mikhail Traskin, Kai Zhang, Linda Zhao. Source: Statistical Science, Volume 34, Number 4, 523–544.
Abstract: In the early 1980s, Halbert White inaugurated a "model-robust" form of statistical inference based on the "sandwich estimator" of standard error. This estimator is known to be "heteroskedasticity-consistent," but it is less well known to be "nonlinearity-consistent" as well. Nonlinearity, however, raises fundamental issues because in its presence regressors are not ancillary, hence cannot be treated as fixed. The consequences are deep: (1) population slopes need to be reinterpreted as statistical functionals obtained from OLS fits to largely arbitrary joint $x\textrm{-}y$ distributions; (2) the meaning of slope parameters needs to be rethought; (3) the regressor distribution affects the slope parameters; (4) randomness of the regressors becomes a source of sampling variability in slope estimates of order $1/\sqrt{N}$; (5) inference needs to be based on model-robust standard errors, including sandwich estimators or the $x\textrm{-}y$ bootstrap. In theory, model-robust and model-trusting standard errors can deviate by arbitrary magnitudes either way. In practice, significant deviations between them can be detected with a diagnostic test.

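A minimal sketch of the comparison at the heart of point (5): model-trusting versus sandwich (HC0) standard errors for OLS, on simulated data with heteroskedastic errors. The data-generating choices are arbitrary and only serve to make the two estimates visibly differ.

# Sketch: classical vs. sandwich (HC0) standard errors for OLS.
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = rng.uniform(-2, 2, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.standard_normal(n) * (0.5 + np.abs(x))   # error variance grows with |x|

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

sigma2 = resid @ resid / (n - X.shape[1])
se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))                 # model-trusting

meat = X.T @ (X * resid[:, None] ** 2)                            # sum_i e_i^2 x_i x_i'
se_sandwich = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))          # HC0 sandwich

print("classical :", se_classical)
print("sandwich  :", se_sandwich)
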
A Conversation with Peter Diggle
By projecteuclid.org. Published Fri, 11 Oct 2019 04:03 EDT. Peter M. Atkinson, Jorge Mateu. Source: Statistical Science, Volume 34, Number 3, 504–521.
Abstract: Peter John Diggle was born on February 24, 1950, in Lancashire, England. Peter went to school in Scotland, and it was at the end of his school years that he found that he was good at maths and actually enjoyed it. Peter went to Edinburgh to do a maths degree, but transferred halfway through to Liverpool where he completed his degree. Peter studied for a year at Oxford and was then appointed in 1974 as a lecturer in statistics at the University of Newcastle-upon-Tyne, where he gained his PhD, and was promoted to Reader in 1983. A sabbatical at the Swedish Royal College of Forestry gave him his first exposure to real scientific data and problems, prompting a move to CSIRO, Australia. After five years with CSIRO, where he was Senior, then Principal, then Chief Research Scientist and Chief of the Division of Mathematics and Statistics, he returned to the UK in 1988, to a Chair at Lancaster University. Since 2011 Peter has held appointments at Lancaster and Liverpool, together with honorary appointments at Johns Hopkins, Columbia and Yale. At Lancaster, Peter was the founder and Director of the Medical Statistics Unit (1995–2001), University Dean for Research (1998–2001), EPSRC Senior Fellow (2004–2008), Associate Dean for Research at the School of Health and Medicine (2007–2011), Distinguished University Professor, and leader of the CHICAS Research Group (2007–2017). A Fellow of the Royal Statistical Society since 1974, he was a Member of Council (1983–1985), Joint Editor of JRSSB (1984–1987), Honorary Secretary (1990–1996), awarded the Guy Medal in Silver (1997) and the Barnett Award (2018), Associate Editor of Applied Statistics (1998–2000), Chair of the Research Section Committee (1998–2000), and President (2014–2016). Away from work, Peter enjoys music, playing folk-blues guitar and tenor recorder, and listening to jazz. His running days are behind him, but he can just about hold his own in mixed-doubles badminton with his family. His boyhood hero was Stirling Moss, and he retains an enthusiasm for classic cars, not least his 1988 Porsche 924S. His favorite authors are George Orwell, Primo Levi and Nigel Slater. This interview was done prior to the fourth Spatial Statistics conference held in Lancaster, July 2017, where a session was dedicated to Peter celebrating his contributions to statistics.

Conditionally Conjugate Mean-Field Variational Bayes for Logistic Models
By projecteuclid.org. Published Fri, 11 Oct 2019 04:03 EDT. Daniele Durante, Tommaso Rigon. Source: Statistical Science, Volume 34, Number 3, 472–485.
Abstract: Variational Bayes (VB) is a common strategy for approximate Bayesian inference, but simple methods are only available for specific classes of models including, in particular, representations having conditionally conjugate constructions within an exponential family. Models with logit components are an apparently notable exception to this class, due to the absence of conjugacy among the logistic likelihood and the Gaussian priors for the coefficients in the linear predictor. To facilitate approximate inference within this widely used class of models, Jaakkola and Jordan (Stat. Comput. 10 (2000) 25–37) proposed a simple variational approach which relies on a family of tangent quadratic lower bounds of the logistic log-likelihood, thus restoring conjugacy between these approximate bounds and the Gaussian priors. This strategy is still implemented successfully, but few attempts have been made to formally understand the reasons underlying its excellent performance. Following a review on VB for logistic models, we cover this gap by providing a formal connection between the above bound and a recent Pólya-gamma data augmentation for logistic regression. Such a result places the computational methods associated with the aforementioned bounds within the framework of variational inference for conditionally conjugate exponential family models, thereby allowing recent advances for this class to be inherited also by the methods relying on Jaakkola and Jordan (Stat. Comput. 10 (2000) 25–37).

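To make the tangent-bound strategy concrete, here is a sketch of the standard Jaakkola–Jordan updates for Bayesian logistic regression with a $N(0, s_0 I)$ prior: iterate a Gaussian approximation $N(m, S)$ together with the variational parameters $\xi_i$, where $\lambda(\xi) = \tanh(\xi/2)/(4\xi)$. The toy data and prior scale are my own choices, and this is the classical recipe rather than anything specific to the paper's Pólya-gamma connection.

# Sketch: Jaakkola-Jordan variational Bayes for logistic regression.
import numpy as np

rng = np.random.default_rng(8)
n, p, s0 = 200, 3, 10.0
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
true_beta = np.array([-0.5, 1.0, -2.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

def lam(xi):
    # lambda(xi) = tanh(xi/2) / (4 xi), with the xi -> 0 limit equal to 1/8
    return np.where(xi > 1e-8, np.tanh(xi / 2) / (4 * xi), 1 / 8)

xi = np.ones(n)
prior_prec = np.eye(p) / s0
for _ in range(50):
    S_inv = prior_prec + 2 * X.T @ (lam(xi)[:, None] * X)   # precision of q(beta)
    S = np.linalg.inv(S_inv)
    m = S @ (X.T @ (y - 0.5))                                # mean of q(beta)
    xi = np.sqrt(np.einsum("ij,jk,ik->i", X, S + np.outer(m, m), X))

print("approximate posterior mean:", m)
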
The Geometry of Continuous Latent Space Models for Network Data
By projecteuclid.org. Published Fri, 11 Oct 2019 04:03 EDT. Anna L. Smith, Dena M. Asta, Catherine A. Calder. Source: Statistical Science, Volume 34, Number 3, 428–453.
Abstract: We review the class of continuous latent space (statistical) models for network data, paying particular attention to the role of the geometry of the latent space. In these models, the presence or absence of network dyadic ties is assumed to be conditionally independent given the dyads' unobserved positions in a latent space. In this way, these models provide a probabilistic framework for embedding network nodes in a continuous space equipped with a geometry that facilitates the description of dependence between random dyadic ties. Specifically, these models naturally capture homophilous tendencies and triadic clustering, among other common properties of observed networks. In addition to reviewing the literature on continuous latent space models from a geometric perspective, we highlight the important role the geometry of the latent space plays on properties of networks arising from these models via intuition and simulation. Finally, we discuss results from spectral graph theory that allow us to explore the role of the geometry of the latent space, independent of network size. We conclude with conjectures about how these results might be used to infer the appropriate latent space geometry from observed networks.

A Conversation with Noel Cressie
By projecteuclid.org. Published Thu, 18 Jul 2019 22:01 EDT. Christopher K. Wikle, Jay M. Ver Hoef. Source: Statistical Science, Volume 34, Number 2, 349–359.
Abstract: Noel Cressie, FAA, is Director of the Centre for Environmental Informatics in the National Institute for Applied Statistics Research Australia (NIASRA) and Distinguished Professor in the School of Mathematics and Applied Statistics at the University of Wollongong, Australia. He is also Adjunct Professor at the University of Missouri (USA), Affiliate of Org 398, Science Data Understanding, at NASA's Jet Propulsion Laboratory (USA), and a member of the Science Team for NASA's Orbiting Carbon Observatory-2 (OCO-2) satellite. Cressie was awarded a B.Sc. with First Class Honours in Mathematics in 1972 from the University of Western Australia, and an M.A. and Ph.D. in Statistics in 1973 and 1975, respectively, from Princeton University (USA). Two brief postdoctoral periods followed, at the Centre de Morphologie Mathématique, ENSMP, in Fontainebleau (France) from April 1975–September 1975, and at Imperial College, London (UK) from September 1975–January 1976. His past appointments have been at The Flinders University of South Australia from 1976–1983, at Iowa State University (USA) from 1983–1998, and at The Ohio State University (USA) from 1998–2012. He has authored or co-authored four books and more than 280 papers in peer-reviewed outlets, covering areas that include spatial and spatio-temporal statistics, environmental statistics, empirical-Bayesian and Bayesian methods including sequential design, goodness-of-fit, and remote sensing of the environment. Many of his papers also address important questions in the sciences. Cressie is a Fellow of the Australian Academy of Science, the American Statistical Association, the Institute of Mathematical Statistics, and the Spatial Econometrics Association, and he is an Elected Member of the International Statistical Institute. Noel Cressie's refereed, unrefereed, and other publications are available at: https://niasra.uow.edu.au/cei/people/UOW232444.html.

A Conversation with Robert E. Kass
By projecteuclid.org. Published Thu, 18 Jul 2019 22:01 EDT. Sam Behseta. Source: Statistical Science, Volume 34, Number 2, 334–348.
Abstract: Rob Kass has been on the faculty of the Department of Statistics at Carnegie Mellon since 1981; he joined the Center for the Neural Basis of Cognition (CNBC) in 1997, and the Machine Learning Department (in the School of Computer Science) in 2007. He served as Department Head of Statistics from 1995 to 2004 and served as Interim Co-Director of the CNBC 2015–2018. He became the Maurice Falk Professor of Statistics and Computational Neuroscience in 2016. Kass has served as Chair of the Section for Bayesian Statistical Science of the American Statistical Association, Chair of the Statistics Section of the American Association for the Advancement of Science, founding Editor-in-Chief of the journal Bayesian Analysis and Executive Editor of Statistical Science. He is an elected Fellow of the American Statistical Association, the Institute of Mathematical Statistics and the American Association for the Advancement of Science. He has been recognized by the Institute for Scientific Information as one of the 10 most highly cited researchers, 1995–2005, in the category of mathematics. Kass is the recipient of the 2017 Fisher Award and lectureship by the Committee of the Presidents of the Statistical Societies. This interview took place at Carnegie Mellon University in November 2017.

Statistical Analysis of Zero-Inflated Nonnegative Continuous Data: A Review
By projecteuclid.org. Published Thu, 18 Jul 2019 22:01 EDT. Lei Liu, Ya-Chen Tina Shih, Robert L. Strawderman, Daowen Zhang, Bankole A. Johnson, Haitao Chai. Source: Statistical Science, Volume 34, Number 2, 253–279.
Abstract: Zero-inflated nonnegative continuous (or semicontinuous) data arise frequently in biomedical, economic, and ecological studies. Examples include substance abuse, medical costs, medical care utilization, biomarkers (e.g., CD4 cell counts, coronary artery calcium scores), single cell gene expression rates, and (relative) abundance of microbiome. Such data are often characterized by the presence of a large portion of zero values and positive continuous values that are skewed to the right and heteroscedastic. Both of these features suggest that no simple parametric distribution may be suitable for modeling such types of outcomes. In this paper, we review statistical methods for analyzing zero-inflated nonnegative outcome data. We will start with the cross-sectional setting, discussing ways to separate zero and positive values and introducing flexible models to characterize right skewness and heteroscedasticity in the positive values. We will then present models of correlated zero-inflated nonnegative continuous data, using random effects to tackle the correlation on repeated measures from the same subject and that across different parts of the model. We will also discuss expansion to related topics, for example, zero-inflated count and survival data, nonlinear covariate effects, and joint models of longitudinal zero-inflated nonnegative continuous data and survival. Finally, we will present applications to three real datasets (i.e., microbiome, medical costs, and alcohol drinking) to illustrate these methods. Example code will be provided to facilitate applications of these methods.

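The simplest member of the model families reviewed above is the classical two-part model: a logistic regression for whether the outcome is positive, and a linear model for the log of the positive values. A minimal sketch with simulated data (the data-generating coefficients and the use of scikit-learn are my own choices):

# Sketch: two-part model for semicontinuous outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 1_000
x = rng.standard_normal(n)
positive = rng.random(n) < 1 / (1 + np.exp(-(-0.3 + 0.8 * x)))   # zero vs. positive part
y = np.where(positive, np.exp(0.5 + 1.2 * x + 0.4 * rng.standard_normal(n)), 0.0)

# Part 1: probability of a positive outcome.
part1 = LogisticRegression().fit(x.reshape(-1, 1), (y > 0).astype(int))

# Part 2: log-linear model fitted on the positive observations only.
Xpos = np.column_stack([np.ones((y > 0).sum()), x[y > 0]])
gamma, *_ = np.linalg.lstsq(Xpos, np.log(y[y > 0]), rcond=None)

print("estimated P(Y>0) coefficients:", part1.intercept_, part1.coef_)
print("estimated log-scale coefficients:", gamma)
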
A Conversation with Dick Dudley
By projecteuclid.org. Published Fri, 12 Apr 2019 04:00 EDT. Vladimir Koltchinskii, Richard Nickl, Philippe Rigollet. Source: Statistical Science, Volume 34, Number 1, 169–175.
Abstract: Richard Mansfield Dudley (Dick Dudley) was born in 1938. He received the A.B. from Harvard in 1952 and the Ph.D. from Princeton in 1962 (under the supervision of Gilbert Hunt and Edward Nelson). Following an appointment at UC Berkeley as an assistant professor, he joined the Department of Mathematics at MIT in 1967. Dick Dudley has made fundamental contributions to the theory of Gaussian processes and Probability in Banach Spaces. Among his major achievements is the development of a general framework for empirical processes theory, in particular, for uniform central limit theorems. These results have had, and continue to have, a tremendous impact in contemporary statistics and in mathematical foundations of machine learning. A more extensive biographical sketch is contained in the preface to the Selected Works of R. M. Dudley (editors: E. Giné, V. Koltchinskii and R. Norvaisa) published in 2010. This conversation took place (mostly via email) in the fall of 2017.

A Conversation with Piet Groeneboom
By projecteuclid.org. Published Fri, 12 Apr 2019 04:00 EDT. Geurt Jongbloed. Source: Statistical Science, Volume 34, Number 1, 156–168.
Abstract: Petrus (Piet) Groeneboom was born in Scheveningen in 1941 and grew up in Voorburg. Both villages are located near The Hague in The Netherlands; Scheveningen actually being part of The Hague. He attended the gymnasium of the Huygens lyceum. In 1959, he entered the University of Amsterdam, where he studied psychology. After his "candidate" exam (comparable to BSc) in 1963, he worked at the psychological laboratory of the University of Amsterdam until 1966. In 1965, he took up mathematics as a part-time study. After having obtained his master's degree in 1971, he had a position at the psychological laboratory again until 1973, when he was appointed to the Mathematical Center in Amsterdam. There, between 1975 and 1979, he wrote his Ph.D. thesis with Kobus Oosterhoff as advisor, graduating in 1979. After a period of two years as visiting professor at the University of Washington (UW) in Seattle, Piet moved back to the Mathematical Center until he was appointed full professor of statistics at the University of Amsterdam in 1984. Four years later, he moved to Delft University of Technology where he became professor of statistics and stayed until his retirement in 2006. Between 2000 and 2006 he also held a part-time professorship at the Vrije Universiteit in Amsterdam. From 1999 till 2013 he was Affiliate Professor at the statistics department of UW, Seattle. Apart from being visiting professor at the UW in Seattle, he was also visiting professor at Stanford University, Université Paris 6 and ETH Zürich. Piet is well known for his work on shape constrained statistical inference. He worked on asymptotic theory for these problems, created algorithms to compute nonparametric estimates in such models and applied these models to real data. He also worked on interacting particle systems, extreme value analysis and efficiency theory for testing procedures. Piet (co-)authored four books and 64 papers and served as promotor of 13 students. He is the recipient of the 1985 Rollo Davidson prize, a fellow of the IMS and elected member of the ISI. In 2015, he delivered the Wald lecture at the Joint Statistical Meeting in Montreal. Piet and his wife Marijke live in Naarden. He has two sons, Thomas and Tim, and (since June 12, 2018) one grandson, Tarik. This conversation was held at Piet's house in Naarden, on February 28 and April 24, 2018.

Comment: Contributions of Model Features to BART Causal Inference Performance Using ACIC 2016 Competition Data
By projecteuclid.org. Published Fri, 12 Apr 2019 04:00 EDT. Nicole Bohme Carnegie. Source: Statistical Science, Volume 34, Number 1, 90–93.
Abstract: With a thorough exposition of the methods and results of the 2016 Atlantic Causal Inference Competition, Dorie et al. have set a new standard for reproducibility and comparability of evaluations of causal inference methods. In particular, the open-source R package aciccomp2016, which permits reproduction of all datasets used in the competition, will be an invaluable resource for evaluation of future methodological developments. Building upon results from Dorie et al., we examine whether a set of potential modifications to Bayesian Additive Regression Trees (BART), namely multiple chains in model fitting, using the propensity score as a covariate, targeted maximum likelihood estimation (TMLE), and computing symmetric confidence intervals, have a stronger impact on bias, RMSE, and confidence interval coverage in combination than they do alone. We find that bias in the estimate of SATT is minimal, regardless of the BART formulation. For purposes of CI coverage, however, all proposed modifications are beneficial, alone and in combination, but use of TMLE is least beneficial for coverage and results in considerably wider confidence intervals.

Dopamine D1 and D2 Receptor Family Contributions to Modafinil-Induced Wakefulness
By www.jneurosci.org. Published 2009-03-04.
Jared W. Young. Mar 4, 2009; 29:2663-2665. Journal Club.

Sleep Deprivation Biases the Neural Mechanisms Underlying Economic Preferences
By www.jneurosci.org. Published 2011-03-09.
Vinod Venkatraman. Mar 9, 2011; 31:3712-3718. Behavioral/Systems/Cognitive.

Dissociable Intrinsic Connectivity Networks for Salience Processing and Executive Control
By www.jneurosci.org. Published 2007-02-28.
William W. Seeley. Feb 28, 2007; 27:2349-2356. Behavioral/Systems/Cognitive.

Afferents and Homotypic Neighbors Regulate Horizontal Cell Morphology, Connectivity, and Retinal Coverage
By www.jneurosci.org. Published 2005-03-02.
Benjamin E. Reese. Mar 2, 2005; 25:2167-2175. Behavioral/Systems/Cognitive.

Circuit Stability to Perturbations Reveals Hidden Variability in the Balance of Intrinsic and Synaptic Conductances
By www.jneurosci.org. Published 2020-04-15.
Sebastian Onasch. Apr 15, 2020; 40:3186-3202. Systems/Circuits.

White Matter Microstructure in Transsexuals and Controls Investigated by Diffusion Tensor Imaging
By www.jneurosci.org. Published 2014-11-12.
Georg S. Kranz. Nov 12, 2014; 34:15466-15475. Systems/Circuits.

Cortical Hubs Revealed by Intrinsic Functional Connectivity: Mapping, Assessment of Stability, and Relation to Alzheimer's Disease
By www.jneurosci.org. Published 2009-02-11.
Randy L. Buckner. Feb 11, 2009; 29:1860-1873. Neurobiology of Disease.

Neurons Containing Hypocretin (Orexin) Project to Multiple Neuronal Systems
By www.jneurosci.org. Published 1998-12-01.
Christelle Peyron. Dec 1, 1998; 18:9996-10015. Articles.

The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs
By www.jneurosci.org. Published 1993-01-01.
W. R. Softky. Jan 1, 1993; 13:334-350. Articles.