Item 01: Ellis Ashmead-Bartlett diary, 1915-1917 By feedproxy.google.com Published On :: 10/07/2015 12:00:00 AM
Smart research for HSC students: Citing your work and avoiding plagiarism By feedproxy.google.com Published On :: Mon, 04 May 2020 01:33:47 +0000 This session brings together the key resources for HSC subjects, including those that are useful for studying Advanced and Extension courses.
Art Around the Library - Zine to Artist's Book By feedproxy.google.com Published On :: Mon, 04 May 2020 01:43:33 +0000 Find out how easy it is to make a ‘zine’ and you’re well on your way to producing your own mini books.
Where do I start? Discover Your State Library Online By feedproxy.google.com Published On :: Wed, 06 May 2020 01:31:35 +0000 Whether you're looking for a new book to read, a binge-worthy podcast, inspiring stories, or a fun activity to do at home – you can get all of this and more online at your State Library.
Pence aimed to project normalcy during his trip to Iowa, but coronavirus got in the way By news.yahoo.com Published On :: Fri, 08 May 2020 21:35:24 -0400 Vice President Pence’s trip to Iowa shows how the Trump administration’s aims to move past coronavirus are sometimes complicated by the virus itself.
India uses drones to disinfect virus hotspot as cases surge By news.yahoo.com Published On :: Sat, 09 May 2020 11:19:33 -0400 Indian authorities used drones and fire engines to disinfect the pandemic-hit city of Ahmedabad on Saturday, as virus cases surged and police clashed with migrant workers protesting against a reinforced lockdown. The western city of 5.5 million people in Prime Minister Narendra Modi's home state has become a major concern for authorities as they battle an uptick in coronavirus deaths and cases across India.
U.S. chief justice puts hold on disclosure of Russia investigation materials By news.yahoo.com Published On :: Fri, 08 May 2020 11:50:21 -0400 U.S. Chief Justice John Roberts on Friday put a temporary hold on the disclosure to a Democratic-led House of Representatives committee of grand jury material redacted from former Special Counsel Robert Mueller's report on Russian interference in the 2016 election. The U.S. Court of Appeals for the District of Columbia Circuit ruled in March that the materials had to be disclosed to the House Judiciary Committee and refused to put that decision on hold. The appeals court said the materials had to be handed over by May 11 if the Supreme Court did not intervene.
Delta, citing health concerns, drops service to 10 US airports. Is yours on the list? By news.yahoo.com Published On :: Fri, 08 May 2020 18:41:45 -0400 Delta said it is making the move to protect employees amid the coronavirus pandemic, but planes have been flying near empty.
As Trump returns to the road, some Democrats want to bust Biden out of his basement By news.yahoo.com Published On :: Fri, 08 May 2020 17:49:42 -0400 While President Donald Trump traveled to the battleground state of Arizona this week, his Democratic opponent for the White House, Joe Biden, campaigned from his basement as he has done throughout the coronavirus pandemic. The freeze on in-person campaigning during the outbreak has had an upside for Biden, giving the former vice president more time to court donors and shielding him from on-the-trail gaffes. "I personally would like to see him out more because he's in his element when he's meeting people," said Tom Sacks-Wilner, a fundraiser for Biden who is on the campaign's finance committee.
'We Cannot Police Our Way Out of a Pandemic.' Experts, Police Union Say NYPD Should Not Be Enforcing Social Distance Rules Amid COVID-19 By news.yahoo.com Published On :: Thu, 07 May 2020 17:03:38 -0400 The New York City police department (NYPD) is conducting an internal investigation into a May 2 incident involving the violent arrests of multiple people, allegedly members of a group who were not social distancing.
Pence staffer who tested positive for coronavirus is Stephen Miller's wife By news.yahoo.com Published On :: Fri, 08 May 2020 15:33:00 -0400 The staffer of Vice President Mike Pence who tested positive for coronavirus is apparently his press secretary and the wife of White House senior adviser Stephen Miller. Reports emerged on Friday that a member of Pence's staff had tested positive for COVID-19, creating a delay in his flight to Iowa amid concern over who may have been exposed. Later in the day, Trump told reporters the staffer is a "press person" named Katie, saying Katie Miller "hasn't come into contact with me" but has "spent some time with the vice president." Politico reported he was referring to Katie Miller, Pence's press secretary and the wife of Stephen Miller, and noted that this raises the risk that "a large swath of the West Wing's senior aides may also have been exposed." She confirmed her positive diagnosis to NBC News, saying she does not have symptoms. This news comes one day after a personal valet to Trump tested positive for COVID-19, which reportedly made the president "lava level mad." Pence and Trump are being tested for COVID-19 every day. Asked Friday if he is concerned about the potential spread of coronavirus in the White House, Trump said "I'm not worried, no," adding that "we've taken very strong precautions."
‘Selfish, tribal and divided’: Barack Obama warns of changes to American way of life in leaked audio slamming Trump administration By news.yahoo.com Published On :: Sat, 09 May 2020 07:22:00 -0400 Barack Obama said the “rule of law is at risk” following the Justice Department’s decision to drop charges against former Trump adviser Michael Flynn, issuing a stark warning about the long-term impact his successor could have on the American way of life.
Cruz gets his hair cut at salon whose owner was jailed for defying Texas coronavirus restrictions By news.yahoo.com Published On :: Fri, 08 May 2020 19:28:43 -0400 After his haircut, Sen. Ted Cruz said, "It was ridiculous to see somebody sentenced to seven days in jail for cutting hair."
The McMichaels can't be charged with a hate crime by the state in the shooting death of Ahmaud Arbery because the law doesn't exist in Georgia By news.yahoo.com Published On :: Fri, 08 May 2020 17:07:36 -0400 Georgia is one of four states that doesn't have a hate crime law. Arbery's killing has reignited calls for legislation.
CNN legal analysts say Barr dropping the Flynn case shows 'the fix was in.' Barr says winners write history. By news.yahoo.com Published On :: Fri, 08 May 2020 08:23:00 -0400 The Justice Department announced Thursday that it is dropping its criminal case against President Trump's first national security adviser, Michael Flynn. Flynn twice admitted in court that he lied to the FBI about his conversations with Russia's U.S. ambassador, and then cooperated in Special Counsel Robert Mueller's investigation. It was an unusual move by the Justice Department, and CNN's legal and political analysts smelled a rat. "Attorney General [William] Barr is already being accused of creating a special justice system just for President Trump's friends," and this will only feed that perception, CNN's Jake Tapper suggested. Political correspondent Sara Murray agreed, noting that the prosecutor in the case, Brandon Van Grack, withdrew right before the Justice Department submitted its filing, just as when Barr intervened to request a reduced sentence for Roger Stone. National security correspondent Jim Sciutto laid out several reasons why the substance of Flynn's admitted lie was a big deal, and chief legal analyst Jeffrey Toobin was appalled. "It is one of the most incredible legal documents I have read, and certainly something that I never expected to see from the United States Department of Justice," Toobin said. "The idea that the Justice Department would invent an argument -- an argument that the judge in this case has already rejected -- and say that's a basis for dropping a case where a defendant admitted his guilt shows that this is a case where the fix was in." Barr told CBS News' Catherine Herridge on Thursday that dropping Flynn's case actually "sends the message that there is one standard of justice in this country." Herridge told Barr he would take flak for this, asking: "When history looks back on this decision, how do you think it will be written?" Barr laughed: "Well, history's written by the winners. So it largely depends on who's writing the history."
The accusation against Joe Biden has Democrats rediscovering the value of due process By news.yahoo.com Published On :: Sat, 09 May 2020 08:37:00 -0400 Some Democrats took "Believe Women" literally until Joe Biden was accused. Now they're relearning that guilt-by-accusation doesn't serve justice.
Nearly one-third of Americans believe a coronavirus vaccine exists and is being withheld, survey finds By news.yahoo.com Published On :: Fri, 08 May 2020 16:49:35 -0400 The Democracy Fund + UCLA Nationscape Project found some misinformation about the coronavirus is more widespread than you might think.
Neighbor of father and son arrested in Ahmaud Arbery killing is also under investigation By news.yahoo.com Published On :: Fri, 08 May 2020 11:42:19 -0400 The ongoing investigation of the fatal shooting in Brunswick, Georgia, will also look at a neighbor of suspects Gregory and Travis McMichael who recorded video of the incident, authorities said.
Bayesian Quantile Regression with Mixed Discrete and Nonignorable Missing Covariates By projecteuclid.org Published On :: Thu, 19 Mar 2020 22:02 EDT Zhi-Qiang Wang, Nian-Sheng Tang. Source: Bayesian Analysis, Volume 15, Number 2, 579--604. Abstract: Bayesian inference on the quantile regression (QR) model with mixed discrete and non-ignorable missing covariates is conducted by reformulating the QR model as a hierarchical structure model. A probit regression model is adopted to specify the missing-covariate mechanism. A hybrid algorithm combining the Gibbs sampler and the Metropolis-Hastings algorithm is developed to simultaneously produce Bayesian estimates of unknown parameters and latent variables as well as their corresponding standard errors. A Bayesian variable selection method is proposed to recognize significant covariates, and a Bayesian local influence procedure is presented to assess the effect of minor perturbations to the data, priors and sampling distributions on posterior quantities of interest. Several simulation studies and an example are presented to illustrate the proposed methodologies.
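The loss function underlying any quantile regression model is the check (pinball) loss; minimising it over a location parameter recovers an empirical quantile. A minimal sketch of that standard definition, not the authors' hierarchical sampler (Bayesian QR formulations commonly turn this loss into an asymmetric-Laplace working likelihood):

```python
def check_loss(u, tau):
    """Quantile-regression check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def empirical_quantile(data, tau):
    """Minimise the total check loss over the observed values; the minimiser
    is an empirical tau-quantile of the data."""
    return min(sorted(data), key=lambda c: sum(check_loss(y - c, tau) for y in data))
```

With `tau = 0.5` this recovers a median; the paper replaces the point estimate with a full posterior over regression coefficients.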
Bayesian Sparse Multivariate Regression with Asymmetric Nonlocal Priors for Microbiome Data Analysis By projecteuclid.org Published On :: Thu, 19 Mar 2020 22:02 EDT Kurtis Shuler, Marilou Sison-Mangus, Juhee Lee. Source: Bayesian Analysis, Volume 15, Number 2, 559--578. Abstract: We propose a Bayesian sparse multivariate regression method to model the relationship between microbe abundance and environmental factors for microbiome data. We model abundance counts of operational taxonomic units (OTUs) with a negative binomial distribution and relate covariates to the counts through regression. Extending conventional nonlocal priors, we construct asymmetric nonlocal priors for regression coefficients to efficiently identify relevant covariates and their effect directions. We build a hierarchical model to facilitate pooling of information across OTUs that produces parsimonious results with improved accuracy. We present simulation studies that compare variable selection performance under the proposed model to those under Bayesian sparse regression models with asymmetric and symmetric local priors and two frequentist models. The simulations show the proposed model identifies important covariates and yields coefficient estimates with favorable accuracy compared with the alternatives. The proposed model is applied to analyze an ocean microbiome dataset collected over time to study the association of harmful algal bloom conditions with microbial communities.
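The count likelihood at the heart of this model is straightforward to sketch. A minimal illustration (not the authors' implementation), assuming the common mean/dispersion parameterisation of the negative binomial with a log link:

```python
import math

def nb_logpmf(y, mu, r):
    """Negative binomial log-pmf parameterised by mean mu and dispersion r,
    the count likelihood used for OTU abundances."""
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(r / (r + mu)) + y * math.log(mu / (r + mu)))

def nb_regression_loglik(Y, X, beta, r):
    """Log-likelihood of counts Y given covariates X via the log link
    mu_i = exp(x_i . beta)."""
    ll = 0.0
    for y_i, x_i in zip(Y, X):
        mu_i = math.exp(sum(b * x for b, x in zip(beta, x_i)))
        ll += nb_logpmf(y_i, mu_i, r)
    return ll
```

The priors on `beta` (asymmetric nonlocal, in the paper) sit on top of exactly this likelihood.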
Additive Multivariate Gaussian Processes for Joint Species Distribution Modeling with Heterogeneous Data By projecteuclid.org Published On :: Thu, 19 Mar 2020 22:02 EDT Jarno Vanhatalo, Marcelo Hartmann, Lari Veneranta. Source: Bayesian Analysis, Volume 15, Number 2, 415--447. Abstract: Species distribution models (SDMs) are a key tool in ecology, conservation and management of natural resources. Two key components of state-of-the-art SDMs are the description of the species distribution response along environmental covariates and the spatial random effect that captures deviations from the distribution patterns explained by environmental covariates. Joint species distribution models (JSDMs) additionally include interspecific correlations, which have been shown to improve their descriptive and predictive performance compared to single-species models. However, current JSDMs are restricted to the hierarchical generalized linear modeling framework. Their limitation is that parametric models have trouble explaining changes in abundance due, for example, to highly non-linear physical tolerance limits, which is particularly important when predicting species distributions in new areas or under scenarios of environmental change. On the other hand, semi-parametric response functions have been shown to improve the predictive performance of SDMs in these tasks in single-species models. Here, we propose JSDMs where the responses to environmental covariates are modeled with additive multivariate Gaussian processes coded as linear models of coregionalization. These allow inference for a wide range of functional forms and interspecific correlations between the responses. We also propose an efficient approach for inference with the Laplace approximation and a parameterization of the interspecific covariance matrices on the Euclidean space. We demonstrate the benefits of our model with two small-scale examples and one real-world case study. We use cross-validation to compare the proposed model to analogous semi-parametric single-species models and parametric single and joint species models in interpolation and extrapolation tasks. The proposed model outperforms the alternative models in all cases. We also show that the proposed model can be seen as an extension of current state-of-the-art JSDMs to semi-parametric models.
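The covariance structure named here, a linear model of coregionalization (LMC), can be sketched directly: the joint covariance is a sum of Kronecker products of rank-one loading matrices with latent-process kernels. A toy construction under assumed RBF kernels and made-up loadings, not the authors' parameterisation:

```python
import numpy as np

def lmc_covariance(coords, A, lengthscales):
    """Linear model of coregionalization: K = sum_q (a_q a_q^T) kron K_q,
    where K_q is an RBF kernel with its own lengthscale and column q of A
    holds the loadings a_q (rows of A index species)."""
    def rbf(X, ls):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls ** 2)
    n_species, n_latent = A.shape
    n = len(coords)
    K = np.zeros((n_species * n, n_species * n))
    for q in range(n_latent):
        B_q = np.outer(A[:, q], A[:, q])          # rank-one coregionalization matrix
        K += np.kron(B_q, rbf(coords, lengthscales[q]))
    return K

coords = np.array([[0.0], [1.0], [2.0]])          # three 1-D locations
A = np.array([[1.0, 0.2], [0.5, 1.0]])            # two species, two latent processes
K = lmc_covariance(coords, A, lengthscales=[1.0, 2.0])
```

Each term is a Kronecker product of two positive semi-definite matrices, so the sum is a valid joint covariance across species and locations by construction.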
A Novel Algorithmic Approach to Bayesian Logic Regression (with Discussion) By projecteuclid.org Published On :: Tue, 17 Mar 2020 04:00 EDT Aliaksandr Hubin, Geir Storvik, Florian Frommlet. Source: Bayesian Analysis, Volume 15, Number 1, 263--333. Abstract: Logic regression was developed more than a decade ago as a tool to construct predictors from Boolean combinations of binary covariates. It has been mainly used to model epistatic effects in genetic association studies, which is very appealing due to the intuitive interpretation of logic expressions to describe the interaction between genetic variations. Nevertheless, logic regression has (partly due to computational challenges) remained less well known than other approaches to epistatic association mapping. Here we will adapt an advanced evolutionary algorithm called GMJMCMC (Genetically modified Mode Jumping Markov Chain Monte Carlo) to perform Bayesian model selection in the space of logic regression models. After describing the algorithmic details of GMJMCMC we perform a comprehensive simulation study that illustrates its performance given logic regression terms of various complexity. Specifically, GMJMCMC is shown to be able to identify three-way and even four-way interactions with relatively large power, a level of complexity which has not been achieved by previous implementations of logic regression. We apply GMJMCMC to reanalyze QTL (quantitative trait locus) mapping data for Recombinant Inbred Lines in Arabidopsis thaliana and from a backcross population in Drosophila, where we identify several interesting epistatic effects. The method is implemented in an R package which is available on GitHub.
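As a toy illustration of what a "logic regression term" is, the snippet below brute-forces all two-way Boolean terms over binary covariates and scores them against a binary response. This is only a stand-in for intuition; GMJMCMC searches a vastly larger model space stochastically, and all names here are hypothetical:

```python
import itertools

def best_two_way_logic_term(X, y):
    """Brute-force search over two-way Boolean terms (x_i OP x_j, with optional
    negations) for the term that best matches a binary response y -- a toy
    stand-in for the stochastic search over logic regression models."""
    n_cov = len(X[0])
    best, best_score = None, -1.0
    for i, j in itertools.combinations(range(n_cov), 2):
        for neg_i in (False, True):
            for neg_j in (False, True):
                for op in ("and", "or"):
                    term = []
                    for row in X:
                        a = row[i] ^ neg_i   # optionally negate each covariate
                        b = row[j] ^ neg_j
                        term.append((a & b) if op == "and" else (a | b))
                    score = sum(t == yi for t, yi in zip(term, y)) / len(y)
                    if score > best_score:
                        best, best_score = (i, neg_i, op, j, neg_j), score
    return best, best_score

# Truth table over three binary covariates; response is x0 AND x1.
X = [[(k >> 0) & 1, (k >> 1) & 1, (k >> 2) & 1] for k in range(8)]
y = [row[0] & row[1] for row in X]
best, score = best_two_way_logic_term(X, y)
```

The exhaustive search is exponential in term size, which is exactly why an MCMC search such as GMJMCMC is needed for three- and four-way interactions.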
High-Dimensional Posterior Consistency for Hierarchical Non-Local Priors in Regression By projecteuclid.org Published On :: Mon, 13 Jan 2020 04:00 EST Xuan Cao, Kshitij Khare, Malay Ghosh. Source: Bayesian Analysis, Volume 15, Number 1, 241--262. Abstract: The choice of tuning parameters in Bayesian variable selection is a critical problem in modern statistics. In particular, for Bayesian linear regression with non-local priors, the scale parameter in the non-local prior density is an important tuning parameter which reflects the dispersion of the non-local prior density around zero, and implicitly determines the size of the regression coefficients that will be shrunk to zero. Current approaches treat the scale parameter as given, and suggest choices based on prior coverage/asymptotic considerations. In this paper, we consider the fully Bayesian approach introduced in (Wu, 2016) with the pMOM non-local prior and an appropriate Inverse-Gamma prior on the tuning parameter to analyze the underlying theoretical property. Under standard regularity assumptions, we establish strong model selection consistency in a high-dimensional setting, where $p$ is allowed to increase at a polynomial rate with $n$ or even at a sub-exponential rate with $n$. Through simulation studies, we demonstrate that our model selection procedure can outperform other Bayesian methods which treat the scale parameter as given, and commonly used penalized likelihood methods, in a range of simulation settings.
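The pMOM non-local prior mentioned here has a simple closed form in the scalar case: a normal density multiplied by beta^2 / tau, which vanishes at zero and so assigns negligible mass to tiny coefficients. A quick sketch of the first-order pMOM density (unit variance scale, my own notation), checked by numerical integration:

```python
import numpy as np

def pmom_density(beta, tau=1.0):
    """First-order pMOM non-local prior density:
    pi(beta) = (beta^2 / tau) * Normal(beta; 0, tau).
    It is exactly zero at beta = 0, unlike local priors."""
    normal = np.exp(-beta ** 2 / (2 * tau)) / np.sqrt(2 * np.pi * tau)
    return (beta ** 2 / tau) * normal

# Numerical check that the density integrates to 1 (Riemann sum on a fine grid).
grid = np.linspace(-10.0, 10.0, 20001)
dx = grid[1] - grid[0]
total = pmom_density(grid, tau=1.0).sum() * dx
```

The scale `tau` is exactly the tuning parameter the paper places an Inverse-Gamma prior on instead of fixing.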
Learning Semiparametric Regression with Missing Covariates Using Gaussian Process Models By projecteuclid.org Published On :: Mon, 13 Jan 2020 04:00 EST Abhishek Bishoyi, Xiaojing Wang, Dipak K. Dey. Source: Bayesian Analysis, Volume 15, Number 1, 215--239. Abstract: Missing data often appear as a practical problem in applying classical models in statistical analysis. In this paper, we consider a semiparametric regression model in the presence of missing covariates for nonparametric components under a Bayesian framework. Gaussian processes are a popular tool in nonparametric regression because of their flexibility and the fact that much of the ensuing computation is parametric Gaussian computation. However, when covariate values are missing, the most frequently used covariance functions of a Gaussian process are not well defined. We propose an imputation method to solve this issue and perform our analysis using Bayesian inference, where we specify objective priors on the parameters of the Gaussian process models. Several simulations are conducted to illustrate the effectiveness of our proposed method, and the method is further exemplified via two real datasets: one through the Langmuir equation, commonly used in pharmacokinetic models, and another through the Auto-mpg data taken from the StatLib library.
Latent Nested Nonparametric Priors (with Discussion) By projecteuclid.org Published On :: Thu, 19 Dec 2019 22:10 EST Federico Camerlenghi, David B. Dunson, Antonio Lijoi, Igor Prünster, Abel Rodríguez. Source: Bayesian Analysis, Volume 14, Number 4, 1303--1356. Abstract: Discrete random structures are important tools in Bayesian nonparametrics and the resulting models have proven effective in density estimation, clustering, topic modeling and prediction, among others. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and, then, normalizing to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes, and develop a Markov Chain Monte Carlo sampler for Bayesian inferences. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
Spatial Disease Mapping Using Directed Acyclic Graph Auto-Regressive (DAGAR) Models By projecteuclid.org Published On :: Thu, 19 Dec 2019 22:10 EST Abhirup Datta, Sudipto Banerjee, James S. Hodges, Leiwen Gao. Source: Bayesian Analysis, Volume 14, Number 4, 1221--1244. Abstract: Hierarchical models for regionally aggregated disease incidence data commonly involve region specific latent random effects that are modeled jointly as having a multivariate Gaussian distribution. The covariance or precision matrix incorporates the spatial dependence between the regions. Common choices for the precision matrix include the widely used ICAR model, which is singular, and its nonsingular extension which lacks interpretability. We propose a new parametric model for the precision matrix based on a directed acyclic graph (DAG) representation of the spatial dependence. Our model guarantees positive definiteness and, hence, in addition to being a valid prior for regional spatially correlated random effects, can also directly model the outcome from dependent data like images and networks. Theoretical results establish a link between the parameters in our model and the variance and covariances of the random effects. Simulation studies demonstrate that the improved interpretability of our model reaps benefits in terms of accurately recovering the latent spatial random effects as well as for inference on the spatial covariance parameters. Under modest spatial correlation, our model far outperforms the CAR models, while the performances are similar when the spatial correlation is strong. We also assess sensitivity to the choice of the ordering in the DAG construction using theoretical and empirical results which testify to the robustness of our model. We also present a large-scale public health application demonstrating the competitive performance of the model.
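The key property claimed here, positive definiteness by construction from a DAG, is easy to demonstrate: an autoregression along any topological ordering yields a precision matrix of the form (I - B)^T D^{-1} (I - B). A generic sketch of that construction, not the DAGAR parameterisation itself:

```python
import numpy as np

def dag_precision(B, d):
    """Precision matrix implied by the DAG autoregression
    x_i = sum_{j < i} B[i, j] x_j + e_i,  e_i ~ N(0, d_i):
    Omega = (I - B)^T D^{-1} (I - B).
    Positive definite whenever B is strictly lower triangular and d > 0,
    because (I - B) is then unit lower triangular, hence invertible."""
    n = len(d)
    I = np.eye(n)
    D_inv = np.diag(1.0 / np.asarray(d))
    return (I - B).T @ D_inv @ (I - B)

B = np.zeros((3, 3))
B[1, 0] = 0.5   # region 1 regresses on region 0
B[2, 1] = 0.4   # region 2 regresses on region 1
omega = dag_precision(B, [1.0, 1.0, 1.0])
```

DAGAR's contribution is choosing `B` and `d` from the spatial adjacency structure so that the entries are interpretable as spatial autocorrelation parameters.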
Post-Processing Posteriors Over Precision Matrices to Produce Sparse Graph Estimates By projecteuclid.org Published On :: Thu, 19 Dec 2019 22:10 EST Amir Bashir, Carlos M. Carvalho, P. Richard Hahn, M. Beatrix Jones. Source: Bayesian Analysis, Volume 14, Number 4, 1075--1090. Abstract: A variety of computationally efficient Bayesian models for the covariance matrix of a multivariate Gaussian distribution are available. However, all produce a relatively dense estimate of the precision matrix, and are therefore unsatisfactory when one wishes to use the precision matrix to consider the conditional independence structure of the data. This paper considers the posterior predictive distribution of model fit for these covariance models. We then undertake post-processing of the Bayes point estimate for the precision matrix to produce a sparse model whose expected fit lies within the upper 95% of the posterior predictive distribution of fit. The impact of the method for selecting the zero elements of the precision matrix is evaluated. Good results were obtained using models that encouraged a sparse posterior (G-Wishart, Bayesian adaptive graphical lasso) and selection using credible intervals. We also find that this approach is easily extended to the problem of finding a sparse set of elements that differ across a set of precision matrices, a natural summary when a common set of variables is observed under multiple conditions. We illustrate our findings with moderate dimensional data examples from finance and metabolomics.
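The credible-interval selection step mentioned here can be sketched as follows, assuming posterior draws of the precision matrix are already available (the helper name and the 95% level are my choices, not the paper's):

```python
import numpy as np

def sparsify_precision(samples, level=0.95):
    """Zero out off-diagonal precision entries whose equal-tailed posterior
    credible interval contains 0; keep the posterior mean elsewhere.
    `samples` has shape (n_draws, p, p)."""
    lo = np.quantile(samples, (1 - level) / 2, axis=0)
    hi = np.quantile(samples, 1 - (1 - level) / 2, axis=0)
    mean = samples.mean(axis=0)
    keep = (lo > 0) | (hi < 0)          # interval excludes zero
    np.fill_diagonal(keep, True)        # always keep the diagonal
    return np.where(keep, mean, 0.0)

# Synthetic posterior draws: entry (0,1) clearly nonzero, entry (1,2) centred at 0.
rng = np.random.default_rng(1)
draws = np.zeros((500, 3, 3))
for s in range(500):
    m = np.eye(3)
    m[0, 1] = m[1, 0] = 0.5 + 0.01 * rng.standard_normal()
    m[1, 2] = m[2, 1] = 0.2 * rng.standard_normal()
    draws[s] = m
sparse = sparsify_precision(draws)
```

The paper additionally checks that the resulting sparse model's fit stays inside the upper 95% of the posterior predictive distribution of fit; that step is omitted here.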
Beyond Whittle: Nonparametric Correction of a Parametric Likelihood with a Focus on Bayesian Time Series Analysis By projecteuclid.org Published On :: Thu, 19 Dec 2019 22:10 EST Claudia Kirch, Matthew C. Edwards, Alexander Meier, Renate Meyer. Source: Bayesian Analysis, Volume 14, Number 4, 1037--1073. Abstract: Nonparametric Bayesian inference has seen a rapid growth over the last decade but only a few nonparametric Bayesian approaches to time series analysis have been developed. Most existing approaches use Whittle’s likelihood for Bayesian modelling of the spectral density as the main nonparametric characteristic of stationary time series. It is known that the loss of efficiency using Whittle’s likelihood can be substantial. On the other hand, parametric methods are more powerful than nonparametric methods if the observed time series is close to the considered model class but fail if the model is misspecified. Therefore, we suggest a nonparametric correction of a parametric likelihood that takes advantage of the efficiency of parametric models while mitigating sensitivities through a nonparametric amendment. We use a nonparametric Bernstein polynomial prior on the spectral density with weights induced by a Dirichlet process and prove posterior consistency for Gaussian stationary time series. Bayesian posterior computations are implemented via an MH-within-Gibbs sampler and the performance of the nonparametrically corrected likelihood for Gaussian time series is illustrated in a simulation study and in three astronomy applications, including estimating the spectral density of gravitational wave data from the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO).
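Whittle's likelihood, the baseline the paper corrects, approximates the Gaussian time-series likelihood by treating periodogram ordinates as independent exponentials with means given by the spectral density. A minimal sketch under that standard definition (not the paper's corrected version):

```python
import numpy as np

def periodogram(x):
    """Periodogram ordinates of x at the positive Fourier frequencies."""
    n = len(x)
    dft = np.fft.fft(x)[1:(n - 1) // 2 + 1]
    return np.abs(dft) ** 2 / (2 * np.pi * n)

def whittle_loglik(x, f):
    """Whittle log-likelihood given spectral density values f evaluated at
    the same positive Fourier frequencies as the periodogram."""
    I = periodogram(x)
    return -np.sum(np.log(f) + I / f)

# White noise has a flat spectral density; the Whittle MLE of the constant
# level is the mean periodogram ordinate.
rng = np.random.default_rng(0)
x = rng.standard_normal(512)
I = periodogram(x)
f_hat = np.full_like(I, I.mean())
```

For a flat spectrum the mean-periodogram level maximises the Whittle criterion exactly, which the assertions below exploit.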
A Bayesian Conjugate Gradient Method (with Discussion) By projecteuclid.org Published On :: Mon, 02 Dec 2019 04:00 EST Jon Cockayne, Chris J. Oates, Ilse C.F. Ipsen, Mark Girolami. Source: Bayesian Analysis, Volume 14, Number 3, 937--1012. Abstract: A fundamental task in numerical computation is the solution of large linear systems. The conjugate gradient method is an iterative method which offers rapid convergence to the solution, particularly when an effective preconditioner is employed. However, for more challenging systems a substantial error can be present even after many iterations have been performed. The estimates obtained in this case are of little value unless further information can be provided about, for example, the magnitude of the error. In this paper we propose a novel statistical model for this error, set in a Bayesian framework. Our approach is a strict generalisation of the conjugate gradient method, which is recovered as the posterior mean for a particular choice of prior. The estimates obtained are analysed with Krylov subspace methods and a contraction result for the posterior is presented. The method is then analysed in a simulation study as well as being applied to a challenging problem in medical imaging.
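For reference, the classical conjugate gradient iteration that the paper generalises looks like this; the probabilistic version recovers these iterates as posterior means under a particular prior choice:

```python
import numpy as np

def conjugate_gradient(A, b, iters=None, tol=1e-10):
    """Plain conjugate gradient for a symmetric positive definite A."""
    n = len(b)
    iters = iters or n
    x = np.zeros(n)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # A-conjugate update of the direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_hat = conjugate_gradient(A, b)
```

In exact arithmetic CG solves an n-dimensional SPD system in at most n iterations; the paper's contribution is a posterior that quantifies the error when far fewer iterations are affordable.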
Model Criticism in Latent Space By projecteuclid.org Published On :: Tue, 11 Jun 2019 04:00 EDT Sohan Seth, Iain Murray, Christopher K. I. Williams. Source: Bayesian Analysis, Volume 14, Number 3, 703--725. Abstract: Model criticism is usually carried out by assessing if replicated data generated under the fitted model looks similar to the observed data, see e.g. Gelman, Carlin, Stern, and Rubin (2004, p. 165). This paper presents a method for latent variable models by pulling back the data into the space of latent variables, and carrying out model criticism in that space. Making use of a model's structure enables a more direct assessment of the assumptions made in the prior and likelihood. We demonstrate the method with examples of model criticism in latent space applied to factor analysis, linear dynamical systems and Gaussian processes.
Alleviating Spatial Confounding for Areal Data Problems by Displacing the Geographical Centroids By projecteuclid.org Published On :: Fri, 31 May 2019 22:05 EDT Marcos Oliveira Prates, Renato Martins Assunção, Erica Castilho Rodrigues. Source: Bayesian Analysis, Volume 14, Number 2, 623--647. Abstract: Spatial confounding between spatial random effects and fixed-effects covariates has recently been discovered and shown to bring misleading interpretations to model results. Techniques to alleviate this problem are based on decomposing the spatial random effect and fitting a restricted spatial regression. In this paper, we propose a different approach: a transformation of the geographic space to ensure that the unobserved spatial random effect added to the regression is orthogonal to the fixed-effects covariates. Our approach, named SPOCK, has the additional benefit of providing a fast and simple computational method to estimate the parameters. Also, it does not constrain the distribution class assumed for the spatial error term. A simulation study and real data analyses are presented to better understand the advantages of the new method in comparison with the existing ones.
Efficient Acquisition Rules for Model-Based Approximate Bayesian Computation By projecteuclid.org Published On :: Wed, 13 Mar 2019 22:00 EDT Marko Järvenpää, Michael U. Gutmann, Arijus Pleska, Aki Vehtari, Pekka Marttinen. Source: Bayesian Analysis, Volume 14, Number 2, 595--622. Abstract: Approximate Bayesian computation (ABC) is a method for Bayesian inference when the likelihood is unavailable but simulating from the model is possible. However, many ABC algorithms require a large number of simulations, which can be costly. To reduce the computational cost, Bayesian optimisation (BO) and surrogate models such as Gaussian processes have been proposed. Bayesian optimisation enables one to intelligently decide where to evaluate the model next, but common BO strategies are not designed for the goal of estimating the posterior distribution. Our paper addresses this gap in the literature. We propose to compute the uncertainty in the ABC posterior density, which is due to a lack of simulations to estimate this quantity accurately, and define a loss function that measures this uncertainty. We then propose to select the next evaluation location to minimise the expected loss. Experiments show that the proposed method often produces the most accurate approximations as compared to common BO strategies.
Constrained Bayesian Optimization with Noisy Experiments By projecteuclid.org Published On :: Wed, 13 Mar 2019 22:00 EDT Benjamin Letham, Brian Karrer, Guilherme Ottoni, Eytan Bakshy. Source: Bayesian Analysis, Volume 14, Number 2, 495--519. Abstract: Randomized experiments are the gold standard for evaluating the effects of changes to real-world systems. Data in these tests may be difficult to collect and outcomes may have high variance, resulting in potentially large measurement error. Bayesian optimization is a promising technique for efficiently optimizing multiple continuous parameters, but existing approaches degrade in performance when the noise level is high, limiting its applicability to many randomized experiments. We derive an expression for expected improvement under greedy batch optimization with noisy observations and noisy constraints, and develop a quasi-Monte Carlo approximation that allows it to be efficiently optimized. Simulations with synthetic functions show that optimization performance on noisy, constrained problems outperforms existing methods. We further demonstrate the effectiveness of the method with two real-world experiments conducted at Facebook: optimizing a ranking system, and optimizing server compiler flags.
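The building block behind this line of work is the expected improvement acquisition function; the paper derives its analogue under noisy observations and constraints. For intuition, the classical noiseless single-point version (for minimisation, using only the standard closed form):

```python
import math

def expected_improvement(mu, sigma, best):
    """Expected improvement for minimisation when the surrogate posterior at a
    candidate point is N(mu, sigma^2) and `best` is the incumbent value:
    EI = (best - mu) * Phi(z) + sigma * phi(z),  z = (best - mu) / sigma."""
    if sigma <= 0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best - mu) * Phi + sigma * phi

ei0 = expected_improvement(0.0, 1.0, 0.0)
```

With observation noise this closed form no longer applies directly, which is why the paper resorts to a quasi-Monte Carlo approximation over the posterior at previously observed points.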
is Analysis of the Maximal a Posteriori Partition in the Gaussian Dirichlet Process Mixture Model By projecteuclid.org Published On :: Wed, 13 Mar 2019 22:00 EDT Łukasz Rajkowski. Source: Bayesian Analysis, Volume 14, Number 2, 477--494.Abstract: Mixture models are a natural choice in many applications, but it can be difficult to place an a priori upper bound on the number of components. To circumvent this, investigators are turning increasingly to Dirichlet process mixture models (DPMMs). It is therefore important to develop an understanding of the strengths and weaknesses of this approach. This work considers the MAP (maximum a posteriori) clustering for the Gaussian DPMM (where the cluster means have Gaussian distribution and, for each cluster, the observations within the cluster have Gaussian distribution). Some desirable properties of the MAP partition are proved: ‘almost disjointness’ of the convex hulls of clusters (they may have at most one point in common) and (with natural assumptions) the comparability of sizes of those clusters that intersect any fixed ball with the number of observations (as the latter goes to infinity). Consequently, the number of such clusters remains bounded. Furthermore, if the data arises from independent identically distributed sampling from a given distribution with bounded support then the asymptotic MAP partition of the observation space maximises a function which has a straightforward expression, which depends only on the within-group covariance parameter. As the operator norm of this covariance parameter decreases, the number of clusters in the MAP partition becomes arbitrarily large, which may lead to the overestimation of the number of mixture components. Full Article
is A Bayesian Approach to Statistical Shape Analysis via the Projected Normal Distribution By projecteuclid.org Published On :: Wed, 13 Mar 2019 22:00 EDT Luis Gutiérrez, Eduardo Gutiérrez-Peña, Ramsés H. Mena. Source: Bayesian Analysis, Volume 14, Number 2, 427--447.Abstract: This work presents a Bayesian predictive approach to statistical shape analysis. A modeling strategy that starts with a Gaussian distribution on the configuration space, and then removes the effects of location, rotation and scale, is studied. This boils down to an application of the projected normal distribution to model the configurations in the shape space, which together with certain identifiability constraints, facilitates parameter interpretation. Having better control over the parameters allows us to generalize the model to a regression setting where the effect of predictors on shapes can be considered. The methodology is illustrated and tested using both simulated scenarios and a real data set concerning eight anatomical landmarks on a sagittal plane of the corpus callosum in patients with autism and in a group of controls. Full Article
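The projected normal construction used above removes scale by projecting a Gaussian draw onto the unit sphere. A minimal sketch of the sampling step follows; the identity covariance and two-dimensional setting are simplifying assumptions, and a full shape analysis would also remove location and rotation.

```python
import numpy as np

rng = np.random.default_rng(3)

def projected_normal_sample(mu, n=1000):
    """Draw unit-norm directions from a projected normal distribution:
    sample z ~ N(mu, I) and project onto the unit sphere, discarding
    the scale of each draw."""
    z = rng.normal(loc=mu, scale=1.0, size=(n, len(mu)))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Directions concentrate around mu / ||mu||, here (1, 0).
u = projected_normal_sample(np.array([2.0, 0.0]), n=1000)
```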
is Maximum Independent Component Analysis with Application to EEG Data By projecteuclid.org Published On :: Tue, 03 Mar 2020 04:00 EST Ruosi Guo, Chunming Zhang, Zhengjun Zhang. Source: Statistical Science, Volume 35, Number 1, 145--157.Abstract: In many scientific disciplines, finding hidden influential factors behind observational data is essential but challenging. The majority of existing approaches, such as the independent component analysis ($\mathrm{ICA}$), rely on linear transformation, that is, true signals are linear combinations of hidden components. Motivated by the analysis of nonlinear temporal signals in neuroscience, genetics, and finance, this paper proposes the “maximum independent component analysis” ($\mathrm{MaxICA}$), based on max-linear combinations of components. In contrast to existing methods, $\mathrm{MaxICA}$ benefits from focusing on significant major components while filtering out ignorable components. A major tool for parameter learning of $\mathrm{MaxICA}$ is an augmented genetic algorithm, consisting of three schemes for the elite weighted sum selection, randomly combined crossover, and dynamic mutation. Extensive empirical evaluations demonstrate the effectiveness of $\mathrm{MaxICA}$ in either extracting max-linearly combined essential sources in many applications or supplying a better approximation for nonlinearly combined source signals, such as $\mathrm{EEG}$ recordings analyzed in this paper. Full Article
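The contrast between classical ICA mixing and the max-linear mixing behind MaxICA is easy to state in code: each observed signal is the elementwise maximum, rather than the sum, of weighted components. The toy sources and weights below are invented for illustration; the actual parameter learning uses the genetic algorithm described above and is not shown.

```python
import numpy as np

def max_linear_mix(sources, weights):
    """Max-linear combination: x[j, t] = max_k weights[j, k] * sources[k, t],
    in contrast with classical ICA's sum over k."""
    # sources: (K, T), weights: (J, K) -> observed signals: (J, T)
    return np.max(weights[:, :, None] * sources[None, :, :], axis=1)

s = np.array([[1.0, 4.0, 2.0],
              [3.0, 1.0, 5.0]])   # two hidden component time series
w = np.array([[1.0, 0.5],
              [0.2, 1.0]])        # mixing weights for two observed channels
x = max_linear_mix(s, w)
# x == [[1.5, 4.0, 2.5], [3.0, 1.0, 5.0]]
```

Note how the max operation lets a single dominant component determine each observed value, which is the mechanism that filters out ignorable components.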
is Statistical Inference for the Evolutionary History of Cancer Genomes By projecteuclid.org Published On :: Tue, 03 Mar 2020 04:00 EST Khanh N. Dinh, Roman Jaksik, Marek Kimmel, Amaury Lambert, Simon Tavaré. Source: Statistical Science, Volume 35, Number 1, 129--144.Abstract: Recent years have seen considerable work on inference about cancer evolution from mutations identified in cancer samples. Much of the modeling work has been based on classical models of population genetics, generalized to accommodate time-varying cell population size. Reverse-time, genealogical views of such models, commonly known as coalescents, have been used to infer aspects of the past of growing populations. Another approach is to use branching processes, the simplest scenario being the classical linear birth-death process. Inference from evolutionary models of DNA often exploits summary statistics of the sequence data, a common one being the so-called Site Frequency Spectrum (SFS). In a bulk tumor sequencing experiment, we can estimate for each site at which a novel somatic point mutation has arisen, the proportion of cells that carry that mutation. These numbers are then grouped into collections of sites which have similar mutant fractions. We examine how the SFS based on birth-death processes differs from those based on the coalescent model. This may stem from the different sampling mechanisms in the two approaches. However, we also show that despite this, they are quantitatively comparable for the range of parameters typical for tumor cell populations. We also present a model of tumor evolution with selective sweeps, and demonstrate how it may help in understanding the history of a tumor as well as the influence of data pre-processing. We illustrate the theory with applications to several examples from The Cancer Genome Atlas tumors. Full Article
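The Site Frequency Spectrum construction described above amounts to binning the per-site mutant cell fractions estimated from bulk sequencing. A minimal sketch follows; the fractions and bin edges are invented for illustration and carry no biological meaning.

```python
import numpy as np

def site_frequency_spectrum(mutant_fractions, bins):
    """Group per-site mutant cell fractions into an SFS histogram:
    counts of mutation sites falling into each frequency bin."""
    counts, _ = np.histogram(mutant_fractions, bins=bins)
    return counts

# Estimated fraction of cells carrying each somatic point mutation.
fracs = np.array([0.05, 0.07, 0.10, 0.24, 0.26, 0.49, 0.51])
sfs = site_frequency_spectrum(fracs, bins=[0.0, 0.1, 0.25, 0.5, 1.0])
# -> [2 2 2 1]
```

It is the shape of this histogram that the paper compares between the birth-death and coalescent models.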
is Data Denoising and Post-Denoising Corrections in Single Cell RNA Sequencing By projecteuclid.org Published On :: Tue, 03 Mar 2020 04:00 EST Divyansh Agarwal, Jingshu Wang, Nancy R. Zhang. Source: Statistical Science, Volume 35, Number 1, 112--128.Abstract: Single cell sequencing technologies are transforming biomedical research. However, due to the inherent nature of the data, single cell RNA sequencing analysis poses new computational and statistical challenges. We begin with a survey of a selection of topics in this field, with a gentle introduction to the biology and a more detailed exploration of the technical noise. We consider in detail the problem of single cell data denoising, sometimes referred to as “imputation” in the relevant literature. We discuss why this is not a typical statistical imputation problem, and review current approaches to this problem. We then explore why the use of denoised values in downstream analyses invites novel statistical insights, and how denoising uncertainty should be accounted for to yield valid statistical inference. The utilization of denoised or imputed matrices in statistical inference is not unique to single cell genomics, and arises in many other fields. We describe the challenges in this type of analysis, discuss some preliminary solutions, and highlight unresolved issues. Full Article
is Statistical Molecule Counting in Super-Resolution Fluorescence Microscopy: Towards Quantitative Nanoscopy By projecteuclid.org Published On :: Tue, 03 Mar 2020 04:00 EST Thomas Staudt, Timo Aspelmeier, Oskar Laitenberger, Claudia Geisler, Alexander Egner, Axel Munk. Source: Statistical Science, Volume 35, Number 1, 92--111.Abstract: Super-resolution microscopy is rapidly gaining importance as an analytical tool in the life sciences. A compelling feature is the ability to label biological units of interest with fluorescent markers in (living) cells and to observe them with considerably higher resolution than conventional microscopy permits. The images obtained this way, however, lack an absolute intensity scale in terms of numbers of fluorophores observed. In this article, we discuss state-of-the-art methods to count such fluorophores and the statistical challenges that come with them. In particular, we suggest a modeling scheme for time series generated by single-marker-switching (SMS) microscopy that makes it possible to quantify the number of markers in a statistically meaningful manner from the raw data. To this end, we model the entire process of photon generation in the fluorophore, their passage through the microscope, detection and photoelectron amplification in the camera, and extraction of time series from the microscopic images. At the heart of these modeling steps is a careful description of the fluorophore dynamics by a novel hidden Markov model that operates on two timescales (HTMM). Besides the fluorophore number, information about the kinetic transition rates of the fluorophore’s internal states is also inferred during estimation. We comment on computational issues that arise when applying our model to simulated or measured fluorescence traces and illustrate our methodology on simulated data. Full Article
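At the core of fitting any hidden Markov model of this kind is the forward algorithm, which computes the likelihood of an observation sequence. A generic log-space sketch follows; this is a standard single-timescale discrete HMM, not the two-timescale HTMM developed in the paper, and the toy parameters are assumptions.

```python
import numpy as np

def hmm_forward(log_init, log_trans, log_emit):
    """Log-likelihood of an observation sequence under a discrete HMM.
    log_emit[t, s] = log p(y_t | state s); the recursion is kept in
    log space for numerical stability on long fluorescence traces."""
    alpha = log_init + log_emit[0]
    for t in range(1, log_emit.shape[0]):
        # alpha[i] + log_trans[i, j], summed (in log space) over i
        alpha = log_emit[t] + np.logaddexp.reduce(
            alpha[:, None] + log_trans, axis=0)
    return np.logaddexp.reduce(alpha)

# Sanity check: two states, three observations whose emission probability
# is 0.5 in either state, so the sequence likelihood must equal 0.5 ** 3.
log_init = np.log([0.5, 0.5])
log_trans = np.log([[0.9, 0.1], [0.1, 0.9]])
log_emit = np.full((3, 2), np.log(0.5))
loglik = hmm_forward(log_init, log_trans, log_emit)
```

Maximising this likelihood over transition and emission parameters is what yields the kinetic rates and, in the paper's extended model, the fluorophore count.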
is Statistical Methodology in Single-Molecule Experiments By projecteuclid.org Published On :: Tue, 03 Mar 2020 04:00 EST Chao Du, S. C. Kou. Source: Statistical Science, Volume 35, Number 1, 75--91.Abstract: Toward the last quarter of the 20th century, the emergence of single-molecule experiments enabled scientists to track and study individual molecules’ dynamic properties in real time. Unlike macroscopic systems’ dynamics, those of single molecules can only be properly described by stochastic models even in the absence of external noise. Consequently, statistical methods have played a key role in extracting hidden information about molecular dynamics from data obtained through single-molecule experiments. In this article, we survey the major statistical methodologies used to analyze single-molecule experimental data. Our discussion is organized according to the types of stochastic models used to describe single-molecule systems as well as major experimental data collection techniques. We also highlight challenges and future directions in the application of statistical methodologies to single-molecule experiments. Full Article
is A Tale of Two Parasites: Statistical Modelling to Support Disease Control Programmes in Africa By projecteuclid.org Published On :: Tue, 03 Mar 2020 04:00 EST Peter J. Diggle, Emanuele Giorgi, Julienne Atsame, Sylvie Ntsame Ella, Kisito Ogoussan, Katherine Gass. Source: Statistical Science, Volume 35, Number 1, 42--50.Abstract: Vector-borne diseases have long presented major challenges to the health of rural communities in the wet tropical regions of the world, but especially in sub-Saharan Africa. In this paper, we describe the contribution that statistical modelling has made to the global elimination programme for one vector-borne disease, onchocerciasis. We explain why information on the spatial distribution of a second vector-borne disease, Loa loa, is needed before communities at high risk of onchocerciasis can be treated safely with mass distribution of ivermectin, an antifilarial medication. We show how a model-based geostatistical analysis of Loa loa prevalence survey data can be used to map the predictive probability that each location in the region of interest meets a WHO policy guideline for safe mass distribution of ivermectin and describe two applications: one is to data from Cameroon that assesses prevalence using traditional blood-smear microscopy; the other is to Africa-wide data that uses a low-cost questionnaire-based method. We describe how a recent technological development in image-based microscopy has resulted in a change of emphasis from prevalence alone to the bivariate spatial distribution of prevalence and the intensity of infection among infected individuals. We discuss how statistical modelling of the kind described here can contribute to health policy guidelines and decision-making in two ways. One is to ensure that, in a resource-limited setting, prevalence surveys are designed, and the resulting data analysed, as efficiently as possible. The other is to provide an honest quantification of the uncertainty attached to any binary decision by reporting predictive probabilities that a policy-defined condition for action is or is not met. Full Article
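Turning posterior draws into the predictive probability that a policy condition is met is the final step of such an analysis. A minimal sketch follows; the Beta draws below merely stand in for the output of a real geostatistical model, and the 20% threshold is illustrative, not the actual WHO guideline value.

```python
import numpy as np

rng = np.random.default_rng(2)

def prob_below_threshold(prevalence_draws, threshold):
    """Predictive probability, per location, that prevalence lies below
    a policy threshold, computed from posterior samples (rows = draws,
    columns = locations)."""
    return np.mean(prevalence_draws < threshold, axis=0)

# Fabricated posterior prevalence draws for three locations.
draws = rng.beta(a=[2.0, 5.0, 1.0], b=[8.0, 5.0, 20.0], size=(10_000, 3))
p_safe = prob_below_threshold(draws, threshold=0.2)
```

Reporting `p_safe` per location, rather than a hard yes/no map, is exactly the honest quantification of decision uncertainty the abstract advocates.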
is Some Statistical Issues in Climate Science By projecteuclid.org Published On :: Tue, 03 Mar 2020 04:00 EST Michael L. Stein. Source: Statistical Science, Volume 35, Number 1, 31--41.Abstract: Climate science is a field that is arguably both data-rich and data-poor. Data rich in that huge and quickly increasing amounts of data about the state of the climate are collected every day. Data poor in that important aspects of the climate are still undersampled, such as the deep oceans and some characteristics of the upper atmosphere. Data rich in that modern climate models can produce climatological quantities over long time periods with global coverage, including quantities that are difficult to measure and under conditions for which there is no data presently. Data poor in that the correspondence between climate model output to the actual climate, especially for future climate change due to human activities, is difficult to assess. The scope for fruitful interactions between climate scientists and statisticians is great, but requires serious commitments from researchers in both disciplines to understand the scientific and statistical nuances arising from the complex relationships between the data and the real-world problems. This paper describes a small fraction of some of the intellectual challenges that occur at the interface between climate science and statistics, including inferences for extremes for processes with seasonality and long-term trends, the use of climate model ensembles for studying extremes, the scope for using new data sources for studying space-time characteristics of environmental processes and a discussion of non-Gaussian space-time process models for climate variables. The paper concludes with a call to the statistical community to become more engaged in one of the great scientific and policy issues of our time, anthropogenic climate change and its impacts. Full Article
is Risk Models for Breast Cancer and Their Validation By projecteuclid.org Published On :: Tue, 03 Mar 2020 04:00 EST Adam R. Brentnall, Jack Cuzick. Source: Statistical Science, Volume 35, Number 1, 14--30.Abstract: Strategies to prevent cancer and diagnose it early when it is most treatable are needed to reduce the public health burden from rising disease incidence. Risk assessment is playing an increasingly important role in targeting individuals in need of such interventions. For breast cancer many individual risk factors have been well understood for a long time, but the development of a fully comprehensive risk model has not been straightforward, in part because there have been limited data where joint effects of an extensive set of risk factors may be estimated with precision. In this article we first review the approach taken to develop the IBIS (Tyrer–Cuzick) model, and describe recent updates. We then review and develop methods to assess calibration of models such as this one, where the risk of disease allowing for competing mortality over a long follow-up time or lifetime is estimated. The breast cancer risk model and calibration assessment methods are demonstrated using a cohort of 132,139 women attending mammography screening in the State of Washington, USA. Full Article
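The simplest calibration check of the kind reviewed above compares observed with expected event counts. A toy sketch follows; the outcomes and predicted risks are invented, and this ignores the competing-mortality adjustment over long follow-up that the article develops.

```python
import numpy as np

def calibration_ratio(observed_events, expected_risks):
    """Observed-to-expected ratio: a basic overall calibration summary
    for a risk model. O/E near 1 indicates the model's predicted risks
    match the event rate in aggregate."""
    return observed_events.sum() / expected_risks.sum()

# Hypothetical cohort: 0/1 outcomes and model-predicted absolute risks.
outcomes = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0])
predicted = np.array([0.1, 0.4, 0.2, 0.1, 0.5, 0.2, 0.1, 0.3, 0.6, 0.2])
oe = calibration_ratio(outcomes, predicted)
print(round(oe, 2))  # 3 observed / 2.7 expected -> 1.11
```

In practice this ratio is also examined within risk deciles and subgroups, since a model can be well calibrated overall yet poorly calibrated at the extremes where screening decisions are made.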
is Model-Based Approach to the Joint Analysis of Single-Cell Data on Chromatin Accessibility and Gene Expression By projecteuclid.org Published On :: Tue, 03 Mar 2020 04:00 EST Zhixiang Lin, Mahdi Zamanighomi, Timothy Daley, Shining Ma, Wing Hung Wong. Source: Statistical Science, Volume 35, Number 1, 2--13.Abstract: Unsupervised methods, including clustering methods, are essential to the analysis of single-cell genomic data. Model-based clustering methods are under-explored in the area of single-cell genomics, and have the advantage of quantifying the uncertainty of the clustering result. Here we develop a model-based approach for the integrative analysis of single-cell chromatin accessibility and gene expression data. We show that combining these two types of data, we can achieve a better separation of the underlying cell types. An efficient Markov chain Monte Carlo algorithm is also developed. Full Article
is Introduction to the Special Issue By projecteuclid.org Published On :: Tue, 03 Mar 2020 04:00 EST Source: Statistical Science, Volume 35, Number 1, 1--1. Full Article
is Statistical Theory Powering Data Science By projecteuclid.org Published On :: Wed, 08 Jan 2020 04:00 EST Junhui Cai, Avishai Mandelbaum, Chaitra H. Nagaraja, Haipeng Shen, Linda Zhao. Source: Statistical Science, Volume 34, Number 4, 669--691.Abstract: Statisticians are finding their place in the emerging field of data science. However, many issues considered “new” in data science have long histories in statistics. Examples of using statistical thinking are illustrated, which range from exploratory data analysis to measuring uncertainty to accommodating nonrandom samples. These examples are then applied to service networks, baseball predictions and official statistics. Full Article
is Larry Brown’s Work on Admissibility By projecteuclid.org Published On :: Wed, 08 Jan 2020 04:00 EST Iain M. Johnstone. Source: Statistical Science, Volume 34, Number 4, 657--668.Abstract: Many papers in the early part of Brown’s career focused on the admissibility or otherwise of estimators of a vector parameter. He established that inadmissibility of invariant estimators in three and higher dimensions is a general phenomenon, and found deep and beautiful connections between admissibility and other areas of mathematics. This review touches on several of his major contributions, with a focus on his celebrated 1971 paper connecting admissibility, recurrence and elliptic partial differential equations. Full Article
is Larry Brown’s Contributions to Parametric Inference, Decision Theory and Foundations: A Survey By projecteuclid.org Published On :: Wed, 08 Jan 2020 04:00 EST James O. Berger, Anirban DasGupta. Source: Statistical Science, Volume 34, Number 4, 621--634.Abstract: This article gives a panoramic survey of the general area of parametric statistical inference, decision theory and foundations of statistics for the period 1965–2010 through the lens of Larry Brown’s contributions to varied aspects of this massive area. The article goes over sufficiency, shrinkage estimation, admissibility, minimaxity, complete class theorems, estimated confidence, conditional confidence procedures, Edgeworth and higher order asymptotic expansions, variational Bayes, Stein’s SURE, differential inequalities, geometrization of convergence rates, asymptotic equivalence, aspects of empirical process theory, inference after model selection, unified frequentist and Bayesian testing, and Wald’s sequential theory. A reasonably comprehensive bibliography is provided. Full Article
is Discussion: Models as Approximations By projecteuclid.org Published On :: Wed, 08 Jan 2020 04:00 EST Dalia Ghanem, Todd A. Kuffner. Source: Statistical Science, Volume 34, Number 4, 604--605. Full Article