Estimating customer impatience in a service system with balking. (arXiv:2005.03576v1 [math.PR]) By arxiv.org This paper studies a service system in which arriving customers are provided with information about the delay they will experience. Based on this information, they decide to wait for service or to leave the system. The main objective is to estimate the customers' patience-level distribution and the corresponding potential arrival rate, using knowledge of the actual workload process only. We cast the system as a queueing model, so as to evaluate the corresponding likelihood function. Estimating the unknown parameters by a maximum likelihood procedure, we prove strong consistency and derive the asymptotic distribution of the estimation error. Several applications and extensions of the method are discussed. In particular, we indicate how our method generalizes to a multi-server setting. The performance of our approach is assessed through a series of numerical experiments. By fitting parameters of hyperexponential and generalized-hyperexponential distributions, our method provides a robust estimation framework for any continuous patience-level distribution.
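A toy illustration of the kind of fit involved — a minimal sketch, not the paper's workload-based estimator, under the simplifying assumption that each customer's announced delay and wait-or-balk decision are directly observed; all names and parameter values are hypothetical:

```r
# Maximum-likelihood fit of a two-phase hyperexponential patience distribution
# from announced delays d and wait/balk decisions: P(wait | d) = P(patience > d).
set.seed(1)
n <- 5000
d <- rexp(n, rate = 1)                           # announced delays
surv <- function(d, w, l1, l2) w * exp(-l1 * d) + (1 - w) * exp(-l2 * d)
p_wait <- surv(d, 0.3, 0.5, 3)                   # true survival probabilities
wait <- rbinom(n, 1, p_wait)                     # 1 = customer waits, 0 = balks

negloglik <- function(par) {
  w  <- plogis(par[1])                           # keep mixture weight in (0, 1)
  l1 <- exp(par[2]); l2 <- exp(par[3])           # keep rates positive
  -sum(dbinom(wait, 1, surv(d, w, l1, l2), log = TRUE))  # Bernoulli likelihood
}
fit <- optim(c(0, 0, 0), negloglik, method = "BFGS")
c(w = plogis(fit$par[1]), l1 = exp(fit$par[2]), l2 = exp(fit$par[3]))
```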
Sequential Aggregation of Probabilistic Forecasts -- Application to Wind Speed Ensemble Forecasts. (arXiv:2005.03540v1 [stat.AP]) By arxiv.org In the field of numerical weather prediction (NWP), the probabilistic distribution of the future state of the atmosphere is sampled with Monte-Carlo-like simulations, called ensembles. These ensembles have deficiencies (such as conditional biases) that can be corrected by statistical post-processing methods. Several ensembles exist and may be corrected with different statistical methods. A further step is to combine these raw or post-processed ensembles. The theory of prediction with expert advice allows us to build combination algorithms with theoretical guarantees on the forecast performance. This article adapts this theory to the case of probabilistic forecasts issued as step-wise cumulative distribution functions (CDFs). The theory is applied to wind speed forecasting, by combining several raw or post-processed ensembles, considered as CDFs. The second goal of this study is to explore the use of two forecast performance criteria: the continuous ranked probability score (CRPS) and the Jolliffe-Primo test. Comparing the results obtained with both criteria leads to reconsidering the usual way of building skillful probabilistic forecasts, based on the minimization of the CRPS. Minimizing the CRPS does not necessarily produce reliable forecasts according to the Jolliffe-Primo test. The Jolliffe-Primo test generally selects reliable forecasts, but could lead to issuing suboptimal forecasts in terms of CRPS. It is proposed to use both criteria to achieve reliable and skillful probabilistic forecasts.
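As a concrete reference point, a minimal sketch of the CRPS of an ensemble forecast, viewed as a step-wise empirical CDF, via the standard kernel representation CRPS = E|X - y| - (1/2) E|X - X'| (Gneiting and Raftery, 2007); the wind-speed numbers are made up:

```r
# CRPS of an ensemble forecast against a verifying observation y (smaller is better).
crps_ensemble <- function(ens, y) {
  mean(abs(ens - y)) - 0.5 * mean(abs(outer(ens, ens, "-")))
}
set.seed(1)
ens <- rnorm(50, mean = 8, sd = 2)   # a 50-member wind-speed ensemble (m/s)
crps_ensemble(ens, y = 7.4)
```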
On unbalanced data and common shock models in stochastic loss reserving. (arXiv:2005.03500v1 [q-fin.RM]) By arxiv.org Introducing common shocks is a popular dependence modelling approach, with some recent applications in loss reserving. The main advantage of this approach is the ability to capture structural dependence coming from known relationships. In addition, it helps with the parsimonious construction of correlation matrices of large dimensions. However, complications arise in the presence of "unbalanced data", that is, when the (expected) magnitude of observations over a single triangle, or between triangles, can vary substantially. Specifically, if a single common shock is applied to all of these cells, it can contribute insignificantly to the larger values and/or swamp the smaller ones, unless careful adjustments are made. This problem is further complicated in applications involving negative claim amounts. In this paper, we address this problem in the loss reserving context using a common shock Tweedie approach for unbalanced data. We show that the solution not only provides a much better balance of the common shock proportions relative to the unbalanced data, but is also parsimonious. Finally, the common shock Tweedie model also provides distributional tractability.
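A quick demonstration of the swamping problem the abstract describes — an assumed illustration with made-up cell magnitudes, not the paper's Tweedie model:

```r
# One additive common shock applied to cells of very different magnitude:
# its share of each cell varies wildly, swamping the small cells and
# contributing almost nothing to the large ones.
set.seed(1)
mu    <- c(10, 100, 10000)             # expected cell sizes in one triangle
shock <- rnorm(1, mean = 0, sd = 20)   # a single shock shared by all cells
cells <- rnorm(3, mean = mu, sd = sqrt(mu)) + shock
round(abs(shock) / abs(cells), 3)      # shock proportion per cell
```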
Transfer Learning for sEMG-based Hand Gesture Classification using Deep Learning in a Master-Slave Architecture. (arXiv:2005.03460v1 [eess.SP]) By arxiv.org Recent advancements in diagnostic learning and the development of gesture-based human-machine interfaces have made surface electromyography (sEMG) increasingly important. Analysis of hand gestures requires an accurate assessment of sEMG signals. The proposed work presents a novel sequential master-slave architecture consisting of deep neural networks (DNNs) for the classification of signs from the Indian sign language, using signals recorded from multiple sEMG channels. The performance of the master-slave network is augmented by leveraging additional synthetic feature data generated by long short-term memory (LSTM) networks. The performance of the proposed network is compared to that of a conventional DNN before and after the addition of synthetic data. Up to 14% improvement is observed for the conventional DNN and up to 9% for the master-slave network upon the addition of synthetic data, with an average accuracy of 93.5%, supporting the suitability of the proposed approach.
Multi-Label Sampling based on Local Label Imbalance. (arXiv:2005.03240v1 [cs.LG]) By arxiv.org Class imbalance is an inherent characteristic of multi-label data that hinders most multi-label learning methods. One efficient and flexible strategy to deal with this problem is to employ sampling techniques before training a multi-label learning model. Although existing multi-label sampling approaches alleviate the global imbalance of multi-label datasets, it is actually the imbalance level within the local neighbourhood of minority class examples that plays a key role in performance degradation. To address this issue, we propose a novel measure to assess the local label imbalance of multi-label datasets, as well as two multi-label sampling approaches based on local label imbalance, namely MLSOL and MLUL. By considering all informative labels, MLSOL creates more diverse and better-labeled synthetic instances for difficult examples, while MLUL eliminates instances that are harmful to their local region. Experimental results on 13 multi-label datasets demonstrate the effectiveness of the proposed measure and sampling approaches for a variety of evaluation metrics, particularly in the case of an ensemble of classifiers trained on repeated samples of the original data.
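To make "local label imbalance" concrete, here is a minimal sketch of one plausible per-instance, per-label score — an assumed definition for illustration, not the exact measure from the paper:

```r
# For each instance and label, score local imbalance as the share of its k
# nearest neighbours carrying the opposite value of that label.
local_label_imbalance <- function(X, Y, k = 5) {
  D <- as.matrix(dist(X))                 # pairwise instance distances
  n <- nrow(X)
  sapply(seq_len(ncol(Y)), function(l) {
    sapply(seq_len(n), function(i) {
      nn <- order(D[i, ])[2:(k + 1)]      # k nearest neighbours, excluding i
      mean(Y[nn, l] != Y[i, l])           # opposite-label proportion
    })
  })
}
set.seed(1)
X <- matrix(rnorm(60), nrow = 30)         # 30 instances, 2 features
Y <- matrix(rbinom(90, 1, 0.2), 30, 3)    # 3 sparse binary labels
head(local_label_imbalance(X, Y))         # high values flag difficult examples
```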
On the Optimality of Randomization in Experimental Design: How to Randomize for Minimax Variance and Design-Based Inference. (arXiv:2005.03151v1 [stat.ME]) By arxiv.org I study the minimax-optimal design for a two-arm controlled experiment where conditional mean outcomes may vary in a given set. When this set is permutation symmetric, the optimal design is complete randomization, and using a single partition (i.e., the design that only randomizes the treatment labels for each side of the partition) has minimax risk larger by a factor of $n-1$. More generally, the optimal design is shown to be the mixed-strategy optimal design (MSOD) of Kallus (2018). Notably, even when the set of conditional mean outcomes has structure (i.e., is not permutation symmetric), being minimax-optimal for variance still requires randomization beyond a single partition. Nonetheless, since this targets precision, it may still not ensure sufficient uniformity in randomization to enable randomization (i.e., design-based) inference by Fisher's exact test to appropriately detect violations of the null. I therefore propose the inference-constrained MSOD, which is minimax-optimal among all designs subject to such uniformity constraints. Along the way, I discuss Johansson et al. (2020), who recently compared the rerandomization of Morgan and Rubin (2012) and the pure-strategy optimal design (PSOD) of Kallus (2018). I point out some errors therein and set straight that randomization is minimax-optimal and that the "no free lunch" theorem and example in Kallus (2018) are correct.
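A minimal sketch of the design-based inference the abstract refers to — complete randomization followed by a randomization (permutation) test of the sharp null, with simulated outcomes standing in for real data:

```r
# Complete randomization of n/2 units to each arm, then a design-based
# p-value obtained by re-randomizing the treatment labels.
set.seed(1)
n <- 20
z <- sample(rep(0:1, n / 2))                  # completely randomized assignment
y <- 0.5 * z + rnorm(n)                       # outcomes under a small effect
obs <- mean(y[z == 1]) - mean(y[z == 0])
perm <- replicate(10000, {
  zs <- sample(z)                             # a fresh randomization
  mean(y[zs == 1]) - mean(y[zs == 0])
})
mean(abs(perm) >= abs(obs))                   # randomization p-value
```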
Towards Frequency-Based Explanation for Robust CNN. (arXiv:2005.03141v1 [cs.LG]) By arxiv.org Current explanation techniques for transparent Convolutional Neural Networks (CNNs) mainly focus on building connections between human-understandable input features and the model's prediction, overlooking an alternative representation of the input: its decomposition into frequency components. In this work, we present an analysis of the connection between the distribution of frequency components in the input dataset and the reasoning process the model learns from the data. We further quantify the contribution of different frequency components to the model's prediction. We show that the model's vulnerability to tiny distortions is a result of its reliance on high-frequency features, the target features of adversarial (black- and white-box) attackers, when making predictions. We further show that if the model develops a stronger association between the low-frequency components and the true labels, it is more robust, which explains why adversarially trained models are more robust to tiny distortions.
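The decomposition underlying such frequency-based analyses can be sketched with a plain FFT low-pass/high-pass split; the 64x64 input and the cutoff radius of 8 are arbitrary choices for illustration:

```r
# Split a 2-D signal into low- and high-frequency components with the FFT.
set.seed(1)
img  <- matrix(rnorm(64 * 64), 64, 64)   # stand-in for an input image
spec <- fft(img)
k    <- 0:63
dd   <- pmin(k, 64 - k)                  # frequency distance from DC, with wrap-around
idx  <- sqrt(outer(dd, dd, function(a, b) a^2 + b^2))
low  <- Re(fft(spec * (idx <= 8), inverse = TRUE) / length(spec))  # low-pass part
high <- img - low                        # high-frequency residual
range(low + high - img)                  # ~0: the decomposition is exact
```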
Joint Multi-Dimensional Model for Global and Time-Series Annotations. (arXiv:2005.03117v1 [cs.LG]) By arxiv.org Crowdsourcing is a popular approach to collecting annotations for unlabeled data instances. It involves collecting a large number of annotations from several, often naive and untrained, annotators for each data instance, which are then combined to estimate the ground truth. Further, annotations for constructs such as affect are often multi-dimensional, with annotators rating multiple dimensions, such as valence and arousal, for each instance. Most annotation fusion schemes, however, ignore this aspect and model each dimension separately. In this work we address this by proposing a generative model for multi-dimensional annotation fusion, which models the dimensions jointly, leading to more accurate ground truth estimates. The proposed model is applicable to both global and time-series annotation fusion problems and treats the ground truth as a latent variable distorted by the annotators. The model parameters are estimated using the Expectation-Maximization (EM) algorithm, and we evaluate its performance using synthetic data and real emotion corpora, as well as on an artificial task with human annotations.
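A much-reduced cousin of such a model, for intuition only: one-dimensional fusion with a latent ground truth and per-annotator noise, fit by EM; the known N(0, 1) prior on the ground truth is a simplifying assumption to keep the sketch short:

```r
# EM for y_ij = x_i + e_ij, e_ij ~ N(0, s_j^2), latent ground truth x_i ~ N(0, 1).
set.seed(1)
n <- 200; J <- 4
x  <- rnorm(n)                                  # latent ground truth
s2 <- c(0.1, 0.5, 1, 2)                         # true annotator noise variances
Y  <- sapply(s2, function(s) x + rnorm(n, sd = sqrt(s)))
s2_hat <- rep(1, J)
for (it in 1:50) {
  # E-step: posterior mean and variance of each x_i given its annotations
  prec <- 1 + sum(1 / s2_hat)
  m <- drop(Y %*% (1 / s2_hat)) / prec
  v <- 1 / prec
  # M-step: update each annotator's noise variance
  s2_hat <- colMeans((Y - matrix(m, n, J))^2) + v
}
round(s2_hat, 2)                                # approximately recovers s2
```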
Bayesian Random-Effects Meta-Analysis Using the bayesmeta R Package By www.jstatsoft.org Published On :: Mon, 27 Apr 2020 00:00:00 +0000 The random-effects or normal-normal hierarchical model is commonly utilized in a wide range of meta-analysis applications. A Bayesian approach to inference is very attractive in this context, especially when a meta-analysis is based on only a few studies. The bayesmeta R package provides readily accessible tools to perform Bayesian meta-analyses and generate plots and summaries, without having to worry about computational details. It allows for flexible prior specification and gives instant access to the resulting posterior distributions, including prediction and shrinkage estimation, facilitating, for example, quick sensitivity checks. The present paper introduces the underlying theory and showcases its usage.
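A short usage sketch of the package; the effect sizes below are made up, and the half-normal heterogeneity prior is just one of the prior choices bayesmeta supports:

```r
library("bayesmeta")
y     <- c(-0.30, -0.11, -0.46, -0.26)   # hypothetical study log-ORs
sigma <- c(0.22, 0.18, 0.30, 0.14)       # their standard errors
res <- bayesmeta(y = y, sigma = sigma,
                 labels = paste("study", 1:4),
                 tau.prior = function(t) dhalfnormal(t, scale = 0.5))
res$summary     # posterior summaries for the effect (mu) and heterogeneity (tau)
forestplot(res) # forest plot including shrinkage estimates
```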
Urban landscape entomology By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Author: Held, David W. (David Wayne), 1972-. Callnumber: Online. ISBN: 9780128130728 (electronic bk.)
The tobacco plant genome By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: Online. ISBN: 9783030294939
The scientific basis of oral health education By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Author: Levine, R. S., Dr., author. Callnumber: Online. ISBN: 9783319982076 (electronic bk.)
Temporomandibular disorders : a translational approach from basic science to clinical applicability By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: Online. ISBN: 9783319572475 (electronic bk.)
Pediatric allergy : a case-based collection with MCQs By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: Online. ISBN: 9783030182823 (electronic bk.)
Intelligent wavelet based techniques for advanced multimedia applications By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Author: Singh, Rajiv, author. Callnumber: Online. ISBN: 9783030318734 (electronic bk.)
In China's wake : how the commodity boom transformed development strategies in the global south By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Author: Jepson, Nicholas, author. Callnumber: Online. ISBN: 9780231547598 (electronic book)
Handbook of immunosenescence : basic understanding and clinical implications By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: Online. ISBN: 9783319645971 (electronic bk.)
Handbook of geotechnical testing : basic theory, procedures and comparison of standards By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Author: Li, Yanrong (Writer on geology), author. Callnumber: Online. ISBN: 0429323743 (electronic book)
Handbook of Global Health By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: Online. ISBN: 9783030053253
Geriatric medicine : a problem-based approach By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: Online. ISBN: 9789811032530
Database design and implementation By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Author: Sciore, Edward, author. Callnumber: Online. ISBN: 9783030338367 (electronic bk.)
Critical care : architecture and urbanism for a broken planet By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: Online. ISBN: 9780262352871 (electronic bk.)
Common problems in the newborn nursery : an evidence and case-based guide By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: Online. ISBN: 9783319956725 (electronic bk.)
Children’s Palliative Care: An International Case-Based Manual By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: Online. ISBN: 9783030273750
Basic Electrocardiography By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Author: Petty, Brent G., author. Callnumber: Online. ISBN: 9783030328863
Bacteriophages : biology, technology, therapy By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: Online. ISBN: 9783319405988 (electronic book)
Arctic plants of Svalbard : what we learn from the green in the treeless white world By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Author: Lee, Yoo Kyung, author. Callnumber: Online. ISBN: 9783030345600 (electronic bk.)
Advances in cyanobacterial biology By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Callnumber: Online. ISBN: 9780128193129 (electronic bk.)
In Battle to Fight Coronavirus Pandemic, LeadingAge Nursing Home... By www.prweb.com Aging Services Providers Dedicated to Fulfilling Their Critical Role in Public Health System (PRWeb April 18, 2020) Read the full story at https://www.prweb.com/releases/in_battle_to_fight_coronavirus_pandemic_leadingage_nursing_home_members_support_texas_action_to_gather_and_leverage_data/prweb17055806.htm
Gun Rights: California Gun Owners & Ammo Dealers Fire Back Against... By www.prweb.com Ammunition Depot comments on Judge Roger T. Benitez's ruling that Californians may again purchase ammo without a background check and order ammo online. (PRWeb April 24, 2020) Read the full story at https://www.prweb.com/releases/gun_rights_california_gun_owners_ammo_dealers_fire_back_against_proposition_63/prweb17075447.htm
Almost sure uniqueness of a global minimum without convexity By projecteuclid.org Published On :: Mon, 17 Feb 2020 04:02 EST Gregory Cox. Source: The Annals of Statistics, Volume 48, Number 1, 584--606. Abstract: This paper establishes the argmin of a random objective function to be unique almost surely. This paper first formulates a general result that proves almost sure uniqueness without convexity of the objective function. The general result is then applied to a variety of applications in statistics. Four applications are discussed, including uniqueness of M-estimators, both classical likelihood and penalized likelihood estimators, and two applications of the argmin theorem, threshold regression and weak identification.
The multi-armed bandit problem: An efficient nonparametric solution By projecteuclid.org Published On :: Mon, 17 Feb 2020 04:02 EST Hock Peng Chan. Source: The Annals of Statistics, Volume 48, Number 1, 346--373. Abstract: Lai and Robbins (Adv. in Appl. Math. 6 (1985) 4–22) and Lai (Ann. Statist. 15 (1987) 1091–1114) provided efficient parametric solutions to the multi-armed bandit problem, showing that arm allocation via upper confidence bounds (UCB) achieves minimum regret. These bounds are constructed from the Kullback–Leibler information of the reward distributions, estimated from specified parametric families. In recent years, there has been renewed interest in the multi-armed bandit problem due to new applications in machine learning algorithms and data analytics. Nonparametric arm allocation procedures like $\epsilon$-greedy, Boltzmann exploration and BESA were studied, and modified versions of the UCB procedure were also analyzed under nonparametric settings. However, unlike UCB these nonparametric procedures are not efficient under general parametric settings. In this paper, we propose efficient nonparametric procedures.
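For orientation, a minimal sketch of the classical UCB allocation the abstract builds on (the standard UCB1 rule of Auer et al. 2002, not the paper's efficient nonparametric procedure):

```r
# UCB1 on three Bernoulli arms: pull the arm maximizing mean + exploration bonus.
set.seed(1)
K <- 3; horizon <- 5000
mu <- c(0.4, 0.5, 0.6)                       # true (unknown) arm means
n <- rep(0, K); s <- rep(0, K)               # pull counts and reward sums
for (t in 1:horizon) {
  a <- if (t <= K) t else which.max(s / n + sqrt(2 * log(t) / n))
  r <- rbinom(1, 1, mu[a])                   # observe a reward
  n[a] <- n[a] + 1; s[a] <- s[a] + r
}
n                                            # most pulls concentrate on the best arm
```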
Bootstrap confidence regions based on M-estimators under nonstandard conditions By projecteuclid.org Published On :: Mon, 17 Feb 2020 04:02 EST Stephen M. S. Lee, Puyudi Yang. Source: The Annals of Statistics, Volume 48, Number 1, 274--299. Abstract: Suppose that a confidence region is desired for a subvector $\theta$ of a multidimensional parameter $\xi=(\theta,\psi)$, based on an M-estimator $\hat{\xi}_{n}=(\hat{\theta}_{n},\hat{\psi}_{n})$ calculated from a random sample of size $n$. Under nonstandard conditions $\hat{\xi}_{n}$ often converges at a nonregular rate $r_{n}$, in which case consistent estimation of the distribution of $r_{n}(\hat{\theta}_{n}-\theta)$, a pivot commonly chosen for confidence region construction, is most conveniently effected by the $m$ out of $n$ bootstrap. The above choice of pivot has three drawbacks: (i) the shape of the region is either subjectively prescribed or controlled by a computationally intensive depth function; (ii) the region is not transformation equivariant; (iii) $\hat{\xi}_{n}$ may not be uniquely defined. To resolve the above difficulties, we propose a one-dimensional pivot derived from the criterion function, and prove that its distribution can be consistently estimated by the $m$ out of $n$ bootstrap, or by a modified version of the perturbation bootstrap. This leads to a new method for constructing confidence regions which are transformation equivariant and have shapes driven solely by the criterion function. A subsampling procedure is proposed for selecting $m$ in practice. Empirical performance of the new method is illustrated with examples drawn from different nonstandard M-estimation settings. Extension of our theory to row-wise independent triangular arrays is also explored.
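The mechanics of the $m$ out of $n$ bootstrap itself are simple; a minimal sketch with a placeholder statistic (the median of exponential data) and an assumed subsample size $m = n^{2/3}$:

```r
# m-out-of-n bootstrap: resample m << n observations with replacement to
# approximate the sampling distribution of an estimator.
set.seed(1)
n <- 1000; m <- floor(n^(2/3))
x <- rexp(n)
theta_hat <- median(x)
boot <- replicate(2000, median(sample(x, m, replace = TRUE)))
# Approximate distribution of sqrt(m) * (estimator - theta_hat):
quantile(sqrt(m) * (boot - theta_hat), c(0.025, 0.975))
```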
Envelope-based sparse partial least squares By projecteuclid.org Published On :: Mon, 17 Feb 2020 04:02 EST Guangyu Zhu, Zhihua Su. Source: The Annals of Statistics, Volume 48, Number 1, 161--182. Abstract: Sparse partial least squares (SPLS) is widely used in applied sciences as a method that performs dimension reduction and variable selection simultaneously in linear regression. Several implementations of SPLS have been derived, among which the SPLS proposed in Chun and Keleş (J. R. Stat. Soc. Ser. B. Stat. Methodol. 72 (2010) 3–25) is very popular and highly cited. However, for all of these implementations, the theoretical properties of SPLS are largely unknown. In this paper, we propose a new version of SPLS, called the envelope-based SPLS, using a connection between envelope models and partial least squares (PLS). We establish the consistency, oracle property and asymptotic normality of the envelope-based SPLS estimator. The large-sample scenario and high-dimensional scenario are both considered. We also develop the envelope-based SPLS estimators under the context of generalized linear models, and discuss its theoretical properties including consistency, oracle property and asymptotic distribution. Numerical experiments and examples show that the envelope-based SPLS estimator has better variable selection and prediction performance over the SPLS estimator (J. R. Stat. Soc. Ser. B. Stat. Methodol. 72 (2010) 3–25).
Minimax posterior convergence rates and model selection consistency in high-dimensional DAG models based on sparse Cholesky factors By projecteuclid.org Published On :: Wed, 30 Oct 2019 22:03 EDT Kyoungjae Lee, Jaeyong Lee, Lizhen Lin. Source: The Annals of Statistics, Volume 47, Number 6, 3413--3437. Abstract: In this paper we study the high-dimensional sparse directed acyclic graph (DAG) models under the empirical sparse Cholesky prior. Among our results, strong model selection consistency or graph selection consistency is obtained under more general conditions than those in the existing literature. Compared to Cao, Khare and Ghosh [Ann. Statist. (2019) 47 319–348], the required conditions are weakened in terms of the dimensionality, sparsity and lower bound of the nonzero elements in the Cholesky factor. Furthermore, our result does not require the irrepresentable condition, which is necessary for Lasso-type methods. We also derive the posterior convergence rates for precision matrices and Cholesky factors with respect to various matrix norms. The obtained posterior convergence rates are the fastest among those of the existing Bayesian approaches. In particular, we prove that our posterior convergence rates for Cholesky factors are the minimax or at least nearly minimax depending on the relative size of true sparseness for the entire dimension. The simulation study confirms that the proposed method outperforms the competing methods.
Exact lower bounds for the agnostic probably-approximately-correct (PAC) machine learning model By projecteuclid.org Published On :: Fri, 02 Aug 2019 22:04 EDT Aryeh Kontorovich, Iosif Pinelis. Source: The Annals of Statistics, Volume 47, Number 5, 2822--2854. Abstract: We provide an exact nonasymptotic lower bound on the minimax expected excess risk (EER) in the agnostic probably-approximately-correct (PAC) machine learning classification model and identify minimax learning algorithms as certain maximally symmetric and minimally randomized "voting" procedures. Based on this result, an exact asymptotic lower bound on the minimax EER is provided. This bound is of the simple form $c_{\infty}/\sqrt{\nu}$ as $\nu\to\infty$, where $c_{\infty}=0.16997\dots$ is a universal constant, $\nu=m/d$, $m$ is the size of the training sample and $d$ is the Vapnik–Chervonenkis dimension of the hypothesis class. It is shown that the differences between these asymptotic and nonasymptotic bounds, as well as the differences between these two bounds and the maximum EER of any learning algorithms that minimize the empirical risk, are asymptotically negligible, and all these differences are due to ties in the mentioned "voting" procedures. A few easy to compute nonasymptotic lower bounds on the minimax EER are also obtained, which are shown to be close to the exact asymptotic lower bound $c_{\infty}/\sqrt{\nu}$ even for rather small values of the ratio $\nu=m/d$. As an application of these results, we substantially improve existing lower bounds on the tail probability of the excess risk. Among the tools used are Bayes estimation and apparently new identities and inequalities for binomial distributions.
Convergence complexity analysis of Albert and Chib’s algorithm for Bayesian probit regression By projecteuclid.org Published On :: Tue, 21 May 2019 04:00 EDT Qian Qin, James P. Hobert. Source: The Annals of Statistics, Volume 47, Number 4, 2320--2347. Abstract: The use of MCMC algorithms in high dimensional Bayesian problems has become routine. This has spurred so-called convergence complexity analysis, the goal of which is to ascertain how the convergence rate of a Monte Carlo Markov chain scales with sample size, $n$, and/or number of covariates, $p$. This article provides a thorough convergence complexity analysis of Albert and Chib’s [J. Amer. Statist. Assoc. 88 (1993) 669–679] data augmentation algorithm for the Bayesian probit regression model. The main tools used in this analysis are drift and minorization conditions. The usual pitfalls associated with this type of analysis are avoided by utilizing centered drift functions, which are minimized in high posterior probability regions, and by using a new technique to suppress high-dimensionality in the construction of minorization conditions. The main result is that the geometric convergence rate of the underlying Markov chain is bounded below 1 both as $n\rightarrow\infty$ (with $p$ fixed), and as $p\rightarrow\infty$ (with $n$ fixed). Furthermore, the first computable bounds on the total variation distance to stationarity are byproducts of the asymptotic analysis.
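The data augmentation algorithm being analyzed is the classic Albert and Chib (1993) Gibbs sampler; a minimal sketch with a flat prior on the coefficients and inverse-CDF sampling for the truncated normals:

```r
# Probit regression by data augmentation: alternate z | beta, y and beta | z.
set.seed(1)
n <- 200
X <- cbind(1, rnorm(n))                      # intercept + one covariate
beta_true <- c(-0.5, 1)
y <- rbinom(n, 1, pnorm(drop(X %*% beta_true)))
XtXinv <- solve(crossprod(X))                # flat prior on beta
beta <- c(0, 0)
draws <- matrix(NA_real_, 2000, 2)
for (it in 1:2000) {
  eta <- drop(X %*% beta)
  # z_i | beta, y_i: N(eta_i, 1) truncated to (0, Inf) if y_i = 1, else (-Inf, 0)
  lo <- pnorm(ifelse(y == 1, 0, -Inf), mean = eta)
  hi <- pnorm(ifelse(y == 1, Inf, 0), mean = eta)
  z  <- qnorm(lo + runif(n) * (hi - lo), mean = eta)
  # beta | z: N((X'X)^{-1} X'z, (X'X)^{-1})
  bhat <- drop(XtXinv %*% crossprod(X, z))
  beta <- bhat + drop(t(chol(XtXinv)) %*% rnorm(2))
  draws[it, ] <- beta
}
colMeans(draws[-(1:500), ])                  # posterior means near beta_true
```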
Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem By projecteuclid.org Published On :: Thu, 05 Aug 2010 15:41 EDT James G. Scott, James O. Berger. Source: Ann. Statist., Volume 38, Number 5, 2587--2619. Abstract: This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. Our first goal is to clarify when, and how, multiplicity correction happens automatically in Bayesian analysis, and to distinguish this correction from the Bayesian Ockham’s-razor effect. Our second goal is to contrast empirical-Bayes and fully Bayesian approaches to variable selection through examples, theoretical results and simulations. Considerable differences between the two approaches are found. In particular, we prove a theorem that characterizes a surprising asymptotic discrepancy between fully Bayes and empirical Bayes. This discrepancy arises from a different source than the failure to account for hyperparameter uncertainty in the empirical-Bayes estimate. Indeed, even at the extreme, when the empirical-Bayes estimate converges asymptotically to the true variable-inclusion probability, the potential for a serious difference remains.
CORBA By looselycoupled.com Published On :: 2004-06-30T17:00:00-00:00 (Common Object Request Broker Architecture) Pioneering integration architecture. Developed during the 1990s by the Object Management Group (OMG), CORBA was the first major attempt to define a platform-neutral architecture for combining heterogeneous software resources across a network. A forerunner of today's service-oriented architectures, CORBA was designed for high-end, transaction-heavy, enterprise deployments, and thus it works best for tight coupling of software resources written in traditional programming languages such as C, C++, Java, Smalltalk and COBOL. Although the addition of IIOP (Internet Inter-ORB Protocol) extended CORBA to run over the Internet, it is less flexible than today's more loosely coupled SOAs, which are based on the exchange of XML documents using web services.
Bayesian mixed effects models for zero-inflated compositions in microbiome data analysis By projecteuclid.org Published On :: Wed, 15 Apr 2020 22:05 EDT Boyu Ren, Sergio Bacallado, Stefano Favaro, Tommi Vatanen, Curtis Huttenhower, Lorenzo Trippa. Source: The Annals of Applied Statistics, Volume 14, Number 1, 494--517. Abstract: Detecting associations between microbial compositions and sample characteristics is one of the most important tasks in microbiome studies. Most of the existing methods apply univariate models to single microbial species separately, with adjustments for multiple hypothesis testing. We propose a Bayesian analysis for a generalized mixed effects linear model tailored to this application. The marginal prior on each microbial composition is a Dirichlet process, and dependence across compositions is induced through a linear combination of individual covariates, such as disease biomarkers or the subject’s age, and latent factors. The latent factors capture residual variability and their dimensionality is learned from the data in a fully Bayesian procedure. The proposed model is tested in data analyses and simulation studies with zero-inflated compositions. In these settings and within each sample, a large proportion of counts per microbial species are equal to zero. In our Bayesian model, a priori, the probability of compositions with absent microbial species is strictly positive. We propose an efficient algorithm to sample from the posterior, and visualizations of model parameters which reveal associations between covariates and microbial compositions. We evaluate the proposed method in simulation studies, and then analyze a microbiome dataset for infants with type 1 diabetes, which contains a large proportion of zeros in the sample-specific microbial compositions.
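The Dirichlet process prior mentioned above can be simulated directly; a minimal sketch via truncated stick-breaking, with the concentration parameter and base measure chosen arbitrarily for illustration:

```r
# Truncated stick-breaking draw from a Dirichlet process DP(alpha, G0).
set.seed(1)
rdp <- function(alpha, K = 50, base = function(n) rnorm(n)) {
  v <- rbeta(K, 1, alpha)                       # stick-breaking proportions
  w <- v * cumprod(c(1, 1 - v[-K]))             # resulting weights
  list(weights = w / sum(w), atoms = base(K))   # renormalize the truncation
}
G <- rdp(alpha = 2)
sum(G$weights)                                  # 1: a discrete random measure
head(cbind(weight = G$weights, atom = G$atoms))
```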
Optimal asset allocation with multivariate Bayesian dynamic linear models By projecteuclid.org Published On :: Wed, 15 Apr 2020 22:05 EDT Jared D. Fisher, Davide Pettenuzzo, Carlos M. Carvalho. Source: The Annals of Applied Statistics, Volume 14, Number 1, 299--338. Abstract: We introduce a fast, closed-form, simulation-free method to model and forecast multiple asset returns and employ it to investigate the optimal ensemble of features to include when jointly predicting monthly stock and bond excess returns. Our approach builds on the Bayesian dynamic linear models of West and Harrison (Bayesian Forecasting and Dynamic Models (1997) Springer), and it can objectively determine, through a fully automated procedure, both the optimal set of regressors to include in the predictive system and the degree to which the model coefficients, volatilities and covariances should vary over time. When applied to a portfolio of five stock and bond returns, we find that our method leads to large forecast gains, both in statistical and economic terms. In particular, we find that relative to a standard no-predictability benchmark, the optimal combination of predictors, stochastic volatility and time-varying covariances increases the annualized certainty equivalent returns of a leverage-constrained power utility investor by more than 500 basis points.
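The closed-form updating that makes such DLMs simulation-free can be sketched for the simplest case: a univariate local-level model with West-and-Harrison-style discounting standing in for the state-evolution variance (a minimal sketch, not the paper's multivariate model):

```r
# Kalman-filter forward recursion for a local-level DLM with discount factor delta.
dlm_filter <- function(y, m0 = 0, C0 = 1e6, V = 1, delta = 0.95) {
  n <- length(y); m <- numeric(n); C <- numeric(n)
  for (t in 1:n) {
    R <- C0 / delta               # discounted prior variance of the level
    Q <- R + V                    # one-step forecast variance
    A <- R / Q                    # adaptive coefficient (Kalman gain)
    m0 <- m0 + A * (y[t] - m0)    # posterior mean update
    C0 <- R - A^2 * Q             # posterior variance update
    m[t] <- m0; C[t] <- C0
  }
  list(m = m, C = C)
}
set.seed(1)
y <- cumsum(rnorm(100, sd = 0.2)) + rnorm(100)  # drifting level plus noise
f <- dlm_filter(y)
tail(f$m, 3)                                    # filtered level estimates
```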
Estimating the health effects of environmental mixtures using Bayesian semiparametric regression and sparsity inducing priors By projecteuclid.org Published On :: Wed, 15 Apr 2020 22:05 EDT Joseph Antonelli, Maitreyi Mazumdar, David Bellinger, David Christiani, Robert Wright, Brent Coull. Source: The Annals of Applied Statistics, Volume 14, Number 1, 257--275. Abstract: Humans are routinely exposed to mixtures of chemical and other environmental factors, making the quantification of health effects associated with environmental mixtures a critical goal for establishing environmental policy sufficiently protective of human health. The quantification of the effects of exposure to an environmental mixture poses several statistical challenges. It is often the case that exposures to multiple pollutants interact with each other to affect an outcome. Further, the exposure-response relationship between an outcome and some exposures, such as some metals, can exhibit complex, nonlinear forms, since some exposures can be beneficial and detrimental at different ranges of exposure. To estimate the health effects of complex mixtures, we propose a flexible Bayesian approach that allows exposures to interact with each other and have nonlinear relationships with the outcome. We induce sparsity using multivariate spike and slab priors to determine which exposures are associated with the outcome and which exposures interact with each other. The proposed approach is interpretable, as we can use the posterior probabilities of inclusion into the model to identify pollutants that interact with each other. We utilize our approach to study the impact of exposure to metals on child neurodevelopment in Bangladesh and find a nonlinear, interactive relationship between arsenic and manganese.
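A stripped-down, independent-prior cousin of such spike-and-slab selection, for intuition only: a Gibbs sampler for linear regression with a point-mass spike and Gaussian slab, with all hyperparameters fixed at assumed values:

```r
# Spike-and-slab variable selection: each coefficient is zero (spike) or
# drawn from a N(0, tau2) slab; Gibbs sampling yields inclusion probabilities.
set.seed(1)
n <- 100; p <- 5
X <- matrix(rnorm(n * p), n, p)
beta_true <- c(1.5, 0, 0, -1, 0)
y <- drop(X %*% beta_true + rnorm(n))
tau2 <- 10; sig2 <- 1; pi0 <- 0.5          # slab var, noise var, prior incl. prob.
gam <- rep(1, p); beta <- rep(0, p)
incl <- matrix(NA_real_, 2000, p)
for (it in 1:2000) {
  for (j in 1:p) {
    r <- y - drop(X[, -j, drop = FALSE] %*% beta[-j])   # partial residual
    v <- drop(1 / (crossprod(X[, j]) / sig2 + 1 / tau2))
    m <- v * drop(crossprod(X[, j], r)) / sig2
    # log-odds of inclusion: prior odds + marginal likelihood ratio slab vs spike
    lodds <- log(pi0 / (1 - pi0)) + 0.5 * log(v / tau2) + 0.5 * m^2 / v
    gam[j]  <- rbinom(1, 1, plogis(lodds))
    beta[j] <- if (gam[j] == 1) rnorm(1, m, sqrt(v)) else 0
  }
  incl[it, ] <- gam
}
colMeans(incl[-(1:500), ])   # posterior inclusion probabilities per predictor
```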
Bayesian factor models for probabilistic cause of death assessment with verbal autopsies By projecteuclid.org Published On :: Wed, 15 Apr 2020 22:05 EDT Tsuyoshi Kunihama, Zehang Richard Li, Samuel J. Clark, Tyler H. McCormick. Source: The Annals of Applied Statistics, Volume 14, Number 1, 241--256. Abstract: The distribution of deaths by cause provides crucial information for public health planning, response and evaluation. About 60% of deaths globally are not registered or given a cause, limiting our ability to understand disease epidemiology. Verbal autopsy (VA) surveys are increasingly used in such settings to collect information on the signs, symptoms and medical history of people who have recently died. This article develops a novel Bayesian method for estimation of population distributions of deaths by cause using verbal autopsy data. The proposed approach is based on a multivariate probit model where associations among items in questionnaires are flexibly induced by latent factors. Using the Population Health Metrics Research Consortium labeled data that include both VA and medically certified causes of death, we assess the performance of the proposed method. Further, we identify important questionnaire items that are highly associated with causes of death. This framework provides insights that will simplify future data collection.
A hierarchical Bayesian model for predicting ecological interactions using scaled evolutionary relationships By projecteuclid.org Published On :: Wed, 15 Apr 2020 22:05 EDT Mohamad Elmasri, Maxwell J. Farrell, T. Jonathan Davies, David A. Stephens. Source: The Annals of Applied Statistics, Volume 14, Number 1, 221--240. Abstract: Identifying undocumented or potential future interactions among species is a challenge facing modern ecologists. Recent link prediction methods rely on trait data; however, large species interaction databases are typically sparse and covariates are limited to only a fraction of species. On the other hand, evolutionary relationships, encoded as phylogenetic trees, can act as proxies for underlying traits and historical patterns of parasite sharing among hosts. We show that, using a network-based conditional model, phylogenetic information provides strong predictive power in a recently published global database of host-parasite interactions. By scaling the phylogeny using an evolutionary model, our method allows for biological interpretation often missing from latent variable models. To further improve on the phylogeny-only model, we combine a hierarchical Bayesian latent score framework for bipartite graphs that accounts for the number of interactions per species with host dependence informed by phylogeny. Combining the two information sources yields significant improvement in predictive accuracy over each of the submodels alone. As many interaction networks are constructed from presence-only data, we extend the model by integrating a correction mechanism for missing interactions, which proves valuable in reducing uncertainty in unobserved interactions.
BART with targeted smoothing: An analysis of patient-specific stillbirth risk By projecteuclid.org Published On :: Wed, 15 Apr 2020 22:05 EDT Jennifer E. Starling, Jared S. Murray, Carlos M. Carvalho, Radek K. Bukowski, James G. Scott. Source: The Annals of Applied Statistics, Volume 14, Number 1, 28--50. Abstract: This article introduces BART with Targeted Smoothing, or tsBART, a new Bayesian tree-based model for nonparametric regression. The goal of tsBART is to introduce smoothness over a single target covariate $t$ while not necessarily requiring smoothness over other covariates $x$. tsBART is based on the Bayesian Additive Regression Trees (BART) model, an ensemble of regression trees. tsBART extends BART by parameterizing each tree’s terminal nodes with smooth functions of $t$ rather than independent scalars. Like BART, tsBART captures complex nonlinear relationships and interactions among the predictors. But unlike BART, tsBART guarantees that the response surface will be smooth in the target covariate. This improves interpretability and helps to regularize the estimate. After introducing and benchmarking the tsBART model, we apply it to our motivating example—pregnancy outcomes data from the National Center for Health Statistics. Our aim is to provide patient-specific estimates of stillbirth risk across gestational age $(t)$ and based on maternal and fetal risk factors $(x)$. Obstetricians expect stillbirth risk to vary smoothly over gestational age but not necessarily over other covariates, and tsBART has been designed precisely to reflect this structural knowledge. The results of our analysis show the clear superiority of the tsBART model for quantifying stillbirth risk, thereby providing patients and doctors with better information for managing the risk of fetal mortality. All methods described here are implemented in the R package tsbart.
SHOPPER: A probabilistic model of consumer choice with substitutes and complements By projecteuclid.org Published On :: Wed, 15 Apr 2020 22:05 EDT Francisco J. R. Ruiz, Susan Athey, David M. Blei. Source: The Annals of Applied Statistics, Volume 14, Number 1, 1--27. Abstract: We develop SHOPPER, a sequential probabilistic model of shopping data. SHOPPER uses interpretable components to model the forces that drive how a customer chooses products; in particular, we designed SHOPPER to capture how items interact with other items. We develop an efficient posterior inference algorithm to estimate these forces from large-scale data, and we analyze a large dataset from a major chain grocery store. We are interested in answering counterfactual queries about changes in prices. We found that SHOPPER provides accurate predictions even under price interventions, and that it helps identify complementary and substitutable pairs of products.
Bayesian indicator variable selection to incorporate hierarchical overlapping group structure in multi-omics applications By projecteuclid.org Published On :: Wed, 27 Nov 2019 22:01 EST Li Zhu, Zhiguang Huo, Tianzhou Ma, Steffi Oesterreich, George C. Tseng. Source: The Annals of Applied Statistics, Volume 13, Number 4, 2611--2636. Abstract: Variable selection is a pervasive problem in modern high-dimensional data analysis where the number of features often exceeds the sample size (a.k.a. small-n-large-p problem). Incorporation of group structure knowledge to improve variable selection has been widely studied. Here, we consider prior knowledge of a hierarchical overlapping group structure to improve variable selection in regression setting. In genomics applications, for instance, a biological pathway contains tens to hundreds of genes and a gene can be mapped to multiple experimentally measured features (such as its mRNA expression, copy number variation and methylation levels of possibly multiple sites). In addition to the hierarchical structure, the groups at the same level may overlap (e.g., two pathways can share common genes). Incorporating such hierarchical overlapping groups in traditional penalized regression setting remains a difficult optimization problem. Alternatively, we propose a Bayesian indicator model that can elegantly serve the purpose. We evaluate the model in simulations and two breast cancer examples, and demonstrate its superior performance over existing models. The result not only enhances prediction accuracy but also improves variable selection and model interpretation that lead to deeper biological insight of the disease.
On Bayesian new edge prediction and anomaly detection in computer networks By projecteuclid.org Published On :: Wed, 27 Nov 2019 22:01 EST Silvia Metelli, Nicholas Heard. Source: The Annals of Applied Statistics, Volume 13, Number 4, 2586--2610. Abstract: Monitoring computer network traffic for anomalous behaviour presents an important security challenge. Arrivals of new edges in a network graph represent connections between a client and server pair not previously observed, and in rare cases these might suggest the presence of intruders or malicious implants. We propose a Bayesian model and anomaly detection method for simultaneously characterising existing network structure and modelling likely new edge formation. The method is demonstrated on real computer network authentication data and successfully identifies some machines which are known to be compromised.
A hierarchical curve-based approach to the analysis of manifold data By projecteuclid.org Published On :: Wed, 27 Nov 2019 22:01 EST Liberty Vittert, Adrian W. Bowman, Stanislav Katina. Source: The Annals of Applied Statistics, Volume 13, Number 4, 2539--2563. Abstract: One of the data structures generated by medical imaging technology is high resolution point clouds representing anatomical surfaces. Stereophotogrammetry and laser scanning are two widely available sources of this kind of data. A standardised surface representation is required to provide a meaningful correspondence across different images as a basis for statistical analysis. Point locations with anatomical definitions, referred to as landmarks, have been the traditional approach. Landmarks can also be taken as the starting point for more general surface representations, often using templates which are warped on to an observed surface by matching landmark positions and subsequent local adjustment of the surface. The aim of the present paper is to provide a new approach which places anatomical curves at the heart of the surface representation and its analysis. Curves provide intermediate structures which capture the principal features of the manifold (surface) of interest through its ridges and valleys. As landmarks are often available these are used as anchoring points, but surface curvature information is the principal guide in estimating the curve locations. The surface patches between these curves are relatively flat and can be represented in a standardised manner by appropriate surface transects to give a complete surface model. This new approach does not require the use of a template, reference sample or any external information to guide the method and, when compared with a surface based approach, the estimation of curves is shown to have improved performance. In addition, examples involving applications to mussel shells and human faces show that the analysis of curve information can deliver more targeted and effective insight than the use of full surface information.
Empirical Bayes analysis of RNA sequencing experiments with auxiliary information By projecteuclid.org Published On :: Wed, 27 Nov 2019 22:01 EST Kun Liang. Source: The Annals of Applied Statistics, Volume 13, Number 4, 2452--2482. Abstract: Finding differentially expressed genes is a common task in high-throughput transcriptome studies. While traditional statistical methods rank the genes by their test statistics alone, we analyze an RNA sequencing dataset using the auxiliary information of gene length and the test statistics from a related microarray study. Given the auxiliary information, we propose a novel nonparametric empirical Bayes procedure to estimate the posterior probability of differential expression for each gene. We demonstrate the advantage of our procedure in extensive simulation studies and a psoriasis RNA sequencing study. The companion R package calm is available at Bioconductor.