
Oregon State women's basketball receives Pac-12 Sportsmanship Award for supporting rival Oregon in tragedy

On the day Kobe Bryant suddenly passed away, the Beavers embraced their rivals at midcourt in a moment of strength to support the Ducks, many of whom had personal connections to Bryant and his daughter, Gigi. For this, Oregon State is the 2020 recipient of the Pac-12 Sportsmanship Award.





Stanford's Tara VanDerveer on Haley Jones' versatile freshman year: 'It was really incredible'

During Friday's "Pac-12 Perspective," Stanford head coach Tara VanDerveer spoke about Haley Jones' positionless game and how the Cardinal used the dynamic freshman in 2019-20. Download and listen wherever you get your podcasts.





Sparse equisigned PCA: Algorithms and performance bounds in the noisy rank-1 setting

Arvind Prasadan, Raj Rao Nadakuditi, Debashis Paul.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 345--385.

Abstract:
Singular value decomposition (SVD) based principal component analysis (PCA) breaks down in the high-dimensional and limited sample size regime below a certain critical eigen-SNR that depends on the dimensionality of the system and the number of samples. Below this critical eigen-SNR, the estimates returned by the SVD are asymptotically uncorrelated with the latent principal components. We consider a setting where the left singular vector of the underlying rank-one signal matrix is assumed to be sparse and the right singular vector is assumed to be equisigned, that is, having either only nonnegative or only nonpositive entries. We consider six different algorithms for estimating the sparse principal component based on different statistical criteria and prove that by exploiting sparsity, we recover consistent estimates in the low eigen-SNR regime where the SVD fails. Our analysis reveals conditions under which a coordinate selection scheme based on a sum-type decision statistic outperforms schemes that utilize the $\ell_1$ and $\ell_2$ norm-based statistics. We derive lower bounds on the size of detectable coordinates of the principal left singular vector and utilize these lower bounds to derive lower bounds on the worst-case risk. Finally, we verify our findings with numerical simulations and illustrate the performance on video data where the interest is in identifying objects.
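
To make the sum-type coordinate selection concrete, here is a minimal Python sketch under simplified assumptions (it is not one of the paper's six algorithms verbatim): because the right singular vector is equisigned, the row sums of the data matrix concentrate away from zero precisely on the support of the sparse left singular vector.

import numpy as np

rng = np.random.default_rng(0)
p, n, sigma, k = 500, 200, 15.0, 5        # dimension, samples, signal strength, sparsity

u = np.zeros(p); u[:k] = 1 / np.sqrt(k)   # sparse left singular vector
v = np.abs(rng.normal(size=n))            # equisigned (nonnegative) right vector
v /= np.linalg.norm(v)
Y = sigma * np.outer(u, v) + rng.normal(size=(p, n))

# Sum-type statistic: row sums are ~ sigma * u_i * sum(v) on the support and
# mean-zero with standard deviation sqrt(n) off it.
s = Y.sum(axis=1)
tau = 3.5 * np.sqrt(n)                    # threshold; the constant trades misses for false alarms
support = np.flatnonzero(np.abs(s) > tau)

# Refit: SVD restricted to the selected rows, embedded back into R^p.
u_hat = np.zeros(p)
if support.size:
    U, _, _ = np.linalg.svd(Y[support], full_matrices=False)
    u_hat[support] = U[:, 0]
print("selected coordinates:", support)
print("overlap |<u_hat, u>|:", abs(u_hat @ u))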





Neyman-Pearson classification: parametrics and sample size requirement

The Neyman-Pearson (NP) paradigm in binary classification seeks classifiers that achieve a minimal type II error while enforcing that the prioritized type I error stays below some user-specified level $\alpha$. This paradigm serves naturally in applications such as severe disease diagnosis and spam detection, where people have clear priorities among the two error types. Recently, Tong, Feng, and Li (2018) proposed a nonparametric umbrella algorithm that adapts all scoring-type classification methods (e.g., logistic regression, support vector machines, random forest) to respect the given type I error (i.e., conditional probability of classifying a class $0$ observation as class $1$ under the 0-1 coding) upper bound $\alpha$ with high probability, without specific distributional assumptions on the features and the responses. Universal as the umbrella algorithm is, it demands an explicit minimum sample size for class $0$, which is often the scarcer class, as in rare disease diagnosis. In this work, we employ the parametric linear discriminant analysis (LDA) model and propose a new parametric thresholding algorithm, which does not impose a minimum sample size requirement on class $0$ observations and is thus suitable for small-sample applications such as rare disease diagnosis. Leveraging both the existing nonparametric and the newly proposed parametric thresholding rules, we propose four LDA-based NP classifiers, for both low- and high-dimensional settings. On the theoretical front, we prove NP oracle inequalities for one proposed classifier, where the rate for the excess type II error benefits from the explicit parametric model assumption. Furthermore, as NP classifiers involve a sample splitting step on class $0$ observations, we construct a new adaptive sample splitting scheme that can be applied universally to NP classifiers, and this adaptive strategy reduces the type II error of these classifiers. The proposed NP classifiers are implemented in the R package nproc.
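
As a concrete illustration of the nonparametric thresholding being contrasted here, the following Python sketch (in the spirit of the umbrella algorithm of Tong, Feng, and Li (2018), not the nproc implementation itself) picks the smallest order statistic of held-out class-0 scores whose type I error exceeds $\alpha$ with probability at most $\delta$.

import numpy as np
from scipy.stats import binom

def np_threshold(class0_scores, alpha=0.05, delta=0.05):
    """Threshold t such that P(type I error of 'score > t' exceeds alpha) <= delta."""
    s = np.sort(class0_scores)
    n = len(s)
    for k in range(1, n + 1):
        # P(Bin(n, 1 - alpha) >= k): the chance that the k-th order statistic
        # sits below the (1 - alpha) quantile, i.e. the threshold is too low.
        if binom.sf(k - 1, n, 1 - alpha) <= delta:
            return s[k - 1]
    return np.inf  # class-0 sample too small: no order statistic gives the guarantee

rng = np.random.default_rng(1)
scores0 = rng.normal(size=200)            # held-out class-0 scores
t = np_threshold(scores0)
print("classify as class 1 when score >", t)

The minimum sample size requirement discussed above is visible here: if $(1-\alpha)^n > \delta$, even $k = n$ fails and the function returns infinity; the parametric LDA-based thresholding proposed in this work is designed to sidestep exactly that requirement.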





Perturbation Bounds for Procrustes, Classical Scaling, and Trilateration, with Applications to Manifold Learning

One of the common tasks in unsupervised learning is dimensionality reduction, where the goal is to find meaningful low-dimensional structures hidden in high-dimensional data. Sometimes referred to as manifold learning, this problem is closely related to the problem of localization, which aims at embedding a weighted graph into a low-dimensional Euclidean space. Several methods have been proposed for both localization and manifold learning; nonetheless, the robustness of most of them is little understood. In this paper, we obtain perturbation bounds for classical scaling and trilateration, which are then applied to derive performance bounds for Isomap, Landmark Isomap, and Maximum Variance Unfolding. A new perturbation bound for Procrustes analysis plays a key role.
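
For reference, classical scaling (one of the two procedures whose perturbation is bounded here) is short enough to state in code; this is a minimal Python sketch, not the paper's implementation.

import numpy as np

def classical_scaling(D, dim):
    """Embed points in R^dim from an n x n matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                   # eigenvalues in ascending order
    w, V = w[::-1][:dim], V[:, ::-1][:, :dim]  # keep the top `dim`
    return V * np.sqrt(np.maximum(w, 0))       # rows are embedded points

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 2))                              # ground-truth positions
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)      # exact distances
N = rng.normal(size=D.shape)
Y = classical_scaling(D + 0.005 * (N + N.T), 2)           # perturbed distances

The output Y recovers X only up to a rigid motion, which is precisely why a perturbation bound for Procrustes analysis (aligning Y to X) plays the key role noted above.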





Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions

We consider the standard model of distributed optimization of a sum of functions $F(\mathbf{z}) = \sum_{i=1}^n f_i(\mathbf{z})$, where node $i$ in a network holds the function $f_i(\mathbf{z})$. We allow for a harsh network model characterized by asynchronous updates, message delays, unpredictable message losses, and directed communication among nodes. In this setting, we analyze a modification of the Gradient-Push method for distributed optimization, assuming that (i) node $i$ is capable of generating gradients of its function $f_i(\mathbf{z})$ corrupted by zero-mean bounded-support additive noise at each step, (ii) $F(\mathbf{z})$ is strongly convex, and (iii) each $f_i(\mathbf{z})$ has Lipschitz gradients. We show that our proposed method asymptotically matches the best known bounds for centralized gradient descent that takes steps in the direction of the sum of the noisy gradients of all the functions $f_1(\mathbf{z}), \ldots, f_n(\mathbf{z})$ at each step.
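
The following Python sketch shows only the synchronous core of Gradient-Push (push-sum consensus combined with local noisy gradient steps); the asynchronous, delay- and loss-tolerant variant analyzed in the paper adds substantial bookkeeping on top of this idea. The quadratic $f_i$ and the mixing matrix are illustrative choices.

import numpy as np

rng = np.random.default_rng(3)
n = 5
c = rng.normal(size=n)                     # f_i(z) = (z - c_i)^2, so F is minimized at mean(c)
A = np.abs(rng.normal(size=(n, n))) + 0.1  # directed, strongly connected weights
A /= A.sum(axis=0, keepdims=True)          # column-stochastic mixing matrix

x = np.zeros(n)                            # push-sum numerators (one scalar per node)
w = np.ones(n)                             # push-sum weights
for t in range(1, 5001):
    z = x / w                              # each node's current estimate
    g = 2 * (z - c) + 0.1 * rng.uniform(-1, 1, size=n)  # bounded-support gradient noise
    x = A @ (x - (1.0 / t) * g)            # local gradient step, then push to neighbors
    w = A @ w
print("node estimates:", x / w)
print("optimum:       ", c.mean())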





Researching the Pacific: The Pacific Manuscripts Bureau

The State Library holds a superb collection of original documents, illustrations, photographs and books about the Pacific.





Branching random walks with uncountably many extinction probability vectors

Daniela Bertacchi, Fabio Zucca.

Source: Brazilian Journal of Probability and Statistics, Volume 34, Number 2, 426--438.

Abstract:
Given a branching random walk on a set $X$, we study its extinction probability vectors $\mathbf{q}(\cdot,A)$. Their components are the probabilities that the process goes extinct in a fixed $A \subseteq X$ when starting from a vertex $x \in X$. The set of extinction probability vectors (obtained by letting $A$ vary among all subsets of $X$) is a subset of the set of fixed points of the generating function of the branching random walk. In particular, here we are interested in the cardinality of the set of extinction probability vectors. We prove results that allow one to determine whether the probability of extinction in a set $A$ differs from that of extinction in another set $B$. In many cases there are only two possible extinction probability vectors, and so far, even in more complicated examples, only a finite number of distinct extinction probability vectors had been explicitly found. Whether a branching random walk could have an infinite number of distinct extinction probability vectors was not known. We apply our results to construct examples of branching random walks with uncountably many distinct extinction probability vectors.
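
For concreteness, the fixed-point property invoked above can be written out as follows (the standard formulation for branching random walks, with $N_y$ the random number of children that a particle at $x$ places at $y$):

\[
  G_x(\mathbf{q}) = \mathbb{E}_x\Big[ \prod_{y \in X} q_y^{\,N_y} \Big],
  \qquad
  \mathbf{q}(\cdot, A) = G\big(\mathbf{q}(\cdot, A)\big) \quad \text{for every } A \subseteq X,
\]

so every extinction probability vector solves the same equation $\mathbf{q} = G(\mathbf{q})$, and the question studied here is how many distinct solutions actually arise as $A$ varies.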





Wilcoxon-Mann-Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules

Michael P. Fay, Michael A. Proschan

Source: Statistics Surveys, Volume 4, 1--39.

Abstract:
In a mathematical approach to hypothesis tests, we start with a clearly defined set of hypotheses and choose the test with the best properties for those hypotheses. In practice, we often start with less precise hypotheses. For example, often a researcher wants to know which of two groups generally has the larger responses, and either a t-test or a Wilcoxon-Mann-Whitney (WMW) test could be acceptable. Although t-tests and WMW tests are usually associated with quite different hypotheses, the decision rule and p-value from either test could be associated with many different sets of assumptions, which we call perspectives. It is useful to collect in one place the many different perspectives to which a decision rule may be applied, since each perspective allows a different interpretation of the associated p-value. Here we collect many such perspectives for the two-sample t-test, the WMW test, and other related tests. We discuss validity and consistency under each perspective and offer recommendations for choosing between the tests in light of these many different perspectives. Finally, we briefly discuss a decision rule for testing genetic neutrality where knowledge of the many perspectives is vital to the proper interpretation of the decision rule.
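
Both decision rules are readily available in SciPy; the following Python sketch simply runs the two-sample t-test (pooled and Welch) and the WMW test on the same data, producing the p-values whose interpretations the paper's perspectives disambiguate.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(loc=0.0, scale=1.0, size=40)
y = rng.normal(loc=0.5, scale=1.0, size=40)

t_pooled = stats.ttest_ind(x, y)                    # equal-variance t-test
t_welch = stats.ttest_ind(x, y, equal_var=False)    # Welch's t-test
wmw = stats.mannwhitneyu(x, y, alternative="two-sided")

# The same numbers admit different interpretations depending on which
# perspective (set of assumptions) one adopts.
print(t_pooled.pvalue, t_welch.pvalue, wmw.pvalue)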





Holtermann and the A&A Photographic Company

We recently received a comment about authorship of the Holtermann Collection. Although it may seem a purely historical…





How many modes can a constrained Gaussian mixture have? (arXiv:2005.01580v2 [math.ST] UPDATED)

We show, by an explicit construction, that a mixture of univariate Gaussians with variance 1 and means in $[-A,A]$ can have $\Omega(A^2)$ modes. This disproves a recent conjecture of Dytso, Yagli, Poor and Shamai [IEEE Trans. Inform. Theory, Apr. 2020], who showed that such a mixture can have at most $O(A^2)$ modes and surmised that the upper bound could be improved to $O(A)$. Our result holds even if an additional variance constraint is imposed on the mixing distribution. Extending the result to higher dimensions, we exhibit a mixture of Gaussians in $\mathbb{R}^d$, with identity covariances and means inside $[-A,A]^d$, that has $\Omega(A^{2d})$ modes.
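
The following Python sketch is only a numerical illustration of mode counting, not the paper's $\Omega(A^2)$ construction: it counts the modes of a unit-variance mixture with means in $[-A, A]$ by locating local maxima of the density on a fine grid.

import numpy as np

def count_modes(means, weights, A, grid=200001):
    x = np.linspace(-A - 5, A + 5, grid)
    # mixture density with unit-variance Gaussian components
    f = (weights[None, :] * np.exp(-0.5 * (x[:, None] - means[None, :]) ** 2)).sum(axis=1)
    d = np.diff(f)
    return int(((d[:-1] > 0) & (d[1:] <= 0)).sum())  # count local maxima

A = 10.0
means = np.linspace(-A, A, 9)              # well-separated means (spacing 2.5)
weights = np.ones_like(means) / means.size
print(count_modes(means, weights, A))      # one mode per component here, i.e. O(A)

An equally weighted, well-separated mixture like this one yields only on the order of $A$ modes; the content of the paper is that carefully chosen means and weights force $\Omega(A^2)$ of them, matching the upper bound of Dytso et al.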





On the impact of selected modern deep-learning techniques to the performance and celerity of classification models in an experimental high-energy physics use case. (arXiv:2002.01427v3 [physics.data-an] UPDATED)

Beginning from a basic neural-network architecture, we test the potential benefits offered by a range of advanced techniques for machine learning, in particular deep learning, in the context of a typical classification problem encountered in the domain of high-energy physics, using a well-studied dataset: the 2014 Higgs ML Kaggle dataset. The advantages are evaluated in terms of both performance metrics and the time required to train and apply the resulting models. Techniques examined include domain-specific data augmentation, learning-rate and momentum scheduling, (advanced) ensembling in both model-space and weight-space, and alternative architectures and connection methods.

Following the investigation, we arrive at a model which achieves performance equal to the winning solution of the original Kaggle challenge, whilst being significantly quicker to train and apply, and being suitable for use with both GPU and CPU hardware setups. These reductions in timing and hardware requirements potentially allow the use of more powerful algorithms in HEP analyses, where models must be retrained frequently, sometimes at short notice, by small groups of researchers with limited hardware resources. Additionally, a new wrapper library for PyTorch called LUMIN is presented, which incorporates all of the techniques studied.
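
As one example of the scheduling techniques evaluated, here is a minimal, framework-agnostic Python sketch of a 1cycle-style policy: the learning rate ramps up and back down over training while the momentum moves in the opposite direction. The constants are illustrative defaults, not the study's settings.

import math

def one_cycle(step, total_steps, lr_max=1e-2, lr_div=25.0,
              mom_max=0.95, mom_min=0.85):
    """Return (learning rate, momentum) for `step` using cosine interpolation."""
    pct = step / max(total_steps - 1, 1)
    t = pct / 0.5 if pct < 0.5 else (1 - pct) / 0.5   # up then down
    s = (1 - math.cos(math.pi * t)) / 2               # smooth 0 -> 1
    lr = lr_max / lr_div + (lr_max - lr_max / lr_div) * s
    mom = mom_max - (mom_max - mom_min) * s
    return lr, mom

for step in (0, 250, 499):                 # e.g. a 500-step schedule
    print(step, one_cycle(step, 500))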





Reference and Document Aware Semantic Evaluation Methods for Korean Language Summarization. (arXiv:2005.03510v1 [cs.CL])

Text summarization refers to the process of generating a shorter form of text from a source document while preserving salient information. Recently, many models for text summarization have been proposed. Most of those models were evaluated using recall-oriented understudy for gisting evaluation (ROUGE) scores. However, as ROUGE scores are computed based on n-gram overlap, they do not reflect semantic correspondence between generated and reference summaries. Because Korean is an agglutinative language that combines various morphemes into a word that expresses several meanings, ROUGE is not suitable for Korean summarization. In this paper, we propose evaluation metrics that reflect the semantic meaning of both a reference summary and the original document: the Reference and Document Aware Semantic Score (RDASS). We then propose a method for improving the correlation of the metrics with human judgment. Evaluation results show that the correlation with human judgment is significantly higher for our evaluation metrics than for ROUGE scores.
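
A hedged Python sketch of the idea as described above: score a generated summary by its embedding similarity to both the reference summary and the source document, then average. Here `embed` is a hypothetical sentence-embedding function (any sentence encoder could fill the role); this is the shape of the metric, not the authors' exact implementation.

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rdass_style_score(generated, reference, document, embed):
    p = embed(generated)   # embedding of the generated summary
    r = embed(reference)   # embedding of the reference summary
    d = embed(document)    # embedding of the source document
    # reward agreement with the reference AND with the source document
    return 0.5 * (cosine(p, r) + cosine(p, d))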





Training and Classification using a Restricted Boltzmann Machine on the D-Wave 2000Q. (arXiv:2005.03247v1 [cs.LG])

A Restricted Boltzmann Machine (RBM) is an energy-based, undirected graphical model. It is commonly used for unsupervised and supervised machine learning. Typically, an RBM is trained using contrastive divergence (CD). However, training with CD is slow and does not estimate the exact gradient of the log-likelihood cost function. In this work, the model expectation term of the RBM gradient is calculated using a quantum annealer (D-Wave 2000Q), which is much faster than the Markov chain Monte Carlo (MCMC) sampling used in CD. Training and classification results are compared with CD. The classification accuracy results indicate similar performance for both methods. Image reconstruction as well as log-likelihood calculations are used to compare the performance of the quantum and classical algorithms for RBM training. It is shown that the samples obtained from the quantum annealer can be used to train an RBM on a 64-bit `bars and stripes' data set with classification performance similar to an RBM trained with CD. Though training based on CD showed improved learning performance, training using a quantum annealer eliminates the computationally expensive MCMC steps of CD.
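
For contrast with annealer-based training, here is a minimal numpy sketch of one CD-1 update for a Bernoulli RBM, the classical baseline described above: the model-expectation term of the gradient is approximated with a single Gibbs step rather than long MCMC chains (or annealer samples).

import numpy as np

rng = np.random.default_rng(5)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b, c, v0, lr=0.1):
    """One CD-1 update on a batch of binary visible vectors v0 (batch x visible)."""
    ph0 = sigmoid(v0 @ W + c)                  # P(h = 1 | v0), data phase
    h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sample hidden units
    pv1 = sigmoid(h0 @ W.T + b)                # reconstruct visible units
    v1 = (rng.random(pv1.shape) < pv1) * 1.0
    ph1 = sigmoid(v1 @ W + c)                  # model phase after one Gibbs step
    m = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / m    # approximate log-likelihood gradient
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

nv, nh = 64, 16                                # e.g. a 64-bit 'bars and stripes' input
W = 0.01 * rng.normal(size=(nv, nh))
b, c = np.zeros(nv), np.zeros(nh)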





Subdomain Adaptation with Manifolds Discrepancy Alignment. (arXiv:2005.03229v1 [cs.LG])

Reducing domain divergence is a key step in transfer learning problems. Existing works focus on the minimization of global domain divergence. However, two domains may consist of several shared subdomains and differ from each other in each subdomain. In this paper, we take the local divergence of subdomains into account in transfer. Specifically, we propose to use low-dimensional manifolds to represent subdomains and to align the local data distribution discrepancy in each manifold across domains. A Manifold Maximum Mean Discrepancy (M3D) is developed to measure the local distribution discrepancy in each manifold. We then propose a general framework, called Transfer with Manifolds Discrepancy Alignment (TMDA), to couple the discovery of data manifolds with the minimization of M3D. We instantiate TMDA in the subspace learning case, considering both linear and nonlinear mappings, as well as in the deep learning framework. Extensive experimental studies demonstrate that TMDA is a promising method for various transfer learning tasks.
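
The building block behind M3D is the kernel maximum mean discrepancy; as a point of reference, this Python sketch computes a biased RBF-kernel estimate of squared MMD between two plain samples (the paper applies such a measure within each discovered manifold, which is not reproduced here).

import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased estimate of MMD^2 between samples X (n x d) and Y (m x d)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 3))
Y = rng.normal(loc=0.5, size=(100, 3))
print(mmd2_rbf(X, Y))   # noticeably > 0 when the two distributions differ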





Learning on dynamic statistical manifolds. (arXiv:2005.03223v1 [math.ST])

Hyperbolic balance laws with uncertain (random) parameters and inputs are ubiquitous in science and engineering. Quantification of uncertainty in predictions derived from such laws, and reduction of predictive uncertainty via data assimilation, remain an open challenge. That is due to the nonlinearity of the governing equations, whose solutions are highly non-Gaussian and often discontinuous. To ameliorate these issues in a computationally efficient way, we use the method of distributions, which here takes the form of a deterministic equation for the spatiotemporal evolution of the cumulative distribution function (CDF) of the random system state, as a means of forward uncertainty propagation. Uncertainty reduction is achieved by recasting the standard loss function, i.e., the discrepancy between observations and model predictions, in distributional terms. This step exploits the equivalence between minimization of the squared-error discrepancy and of the Kullback-Leibler divergence. The loss function is regularized by adding a Lagrangian constraint enforcing fulfillment of the CDF equation. Minimization is performed sequentially, progressively updating the parameters of the CDF equation as more measurements are assimilated.
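
A conceptual Python sketch of a loss of the kind described (under stated assumptions, not the paper's code): a misfit between a parameterized CDF and observed CDF values, plus a penalty on the residual of the governing CDF equation at collocation points. Both `cdf_model` and `cdf_equation_residual` are hypothetical hooks the user would supply.

import numpy as np

def distributional_loss(theta, obs_pts, obs_vals, colloc_pts,
                        cdf_model, cdf_equation_residual, lam=1.0):
    """Data misfit in distribution space plus a Lagrangian-style constraint term."""
    misfit = np.mean((cdf_model(theta, obs_pts) - obs_vals) ** 2)
    residual = np.mean(cdf_equation_residual(theta, colloc_pts) ** 2)
    return misfit + lam * residual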





Public libraries report spike in demand for books in language

Tuesday 17 March 2020
NSW residents are reading more books in languages other than English than ever before, with the State Library of NSW reporting a 20% increase in requests from public libraries for multicultural material in the last 12 months alone.





ManifoldOptim: An R Interface to the ROPTLIB Library for Riemannian Manifold Optimization

Manifold optimization appears in a wide variety of computational problems in the applied sciences. In recent statistical methodologies such as sufficient dimension reduction and regression envelopes, estimation relies on the optimization of likelihood functions over spaces of matrices such as the Stiefel or Grassmann manifolds. Recently, Huang, Absil, Gallivan, and Hand (2016) introduced the library ROPTLIB, which provides a framework and state-of-the-art algorithms to optimize real-valued objective functions over commonly used matrix-valued Riemannian manifolds. This article presents ManifoldOptim, an R package that wraps the C++ library ROPTLIB. ManifoldOptim enables users to access functionality in ROPTLIB through R so that optimization problems can easily be constructed, solved, and integrated into larger R programs. Computationally intensive problems can be programmed with Rcpp and RcppArmadillo, and otherwise accessed through R. We illustrate the practical use of ManifoldOptim through several motivating examples involving dimension reduction and envelope methods in regression.
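
To give a feel for what optimizing over a matrix manifold involves (this is a bare-bones Python illustration, not ManifoldOptim's R interface), the following maximizes trace(X^T A X) over the Stiefel manifold with a Riemannian gradient step and a QR retraction; its optimum spans the top eigenvectors of A.

import numpy as np

rng = np.random.default_rng(7)
n, p = 20, 3
A = rng.normal(size=(n, n)); A = (A + A.T) / 2   # symmetric objective matrix
X = np.linalg.qr(rng.normal(size=(n, p)))[0]     # a start point on the manifold

for _ in range(500):
    G = 2 * A @ X                                # Euclidean gradient of trace(X^T A X)
    sym = (X.T @ G + G.T @ X) / 2                # project G onto the tangent space at X
    xi = G - X @ sym
    X, _ = np.linalg.qr(X + 0.01 * xi)           # retract back onto the manifold

top = np.linalg.eigh(A)[1][:, -p:]               # true top eigenvectors
print(np.linalg.norm(X @ X.T - top @ top.T))     # shrinks toward 0 as iterates converge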





Close encounters: a manuscripts workshop

A free manuscripts workshop for PhD students at Wellcome Collection, 01 June 2018. Engaging with an artefact from the past is often a powerful experience, eliciting emotional and sensory, as well as analytical, responses. Researchers in the library at Wellcome…





The root canal anatomy in permanent dentition

9783319734446 (electronic bk.)





The public policy primer : managing the policy process

Wu, Xun, author.
9781315624754 (electronic bk.)





The Washington manual internship survival guide

9781975116859





The Startup Owner's Manual : the Step-By-Step Guide for Building a Great Company

Blank, Steven G. (Steven Gary), author.
9781119690726 (electronic book)





The Best and Worst Places to be a Woman in Canada 2019 : The Gender Gap in Canada’s 26 Biggest Cities

9781771254434 (print)





Temporomandibular disorders : a translational approach from basic science to clinical applicability

9783319572475 (electronic bk.)





Semantic technology : 9th Joint International Conference, JIST 2019, Hangzhou, China, November 25-27, 2019, Revised selected papers

Joint International Semantic Technology Conference (9th : 2019 : Hangzhou, China)
9789811534126 (electronic bk.)





Science and practice of pressure ulcer management

9781447174134 (electronic bk.)





Plant-fire interactions : applying ecophysiology to wildfire management

Resco de Dios, Víctor, author
9783030411923 (electronic book)





Phytomanagement of fly ash

Pandey, Vimal Chandra, author
9780128185452 (electronic bk.)





Personalized food intervention and therapy for autism spectrum disorder management

9783030304027 (electronic bk.)





Ocular therapeutics handbook : a clinical manual

Onofrey, Bruce E., author.
197510904X





Milk and dairy foods : their functionality in human health and disease

9780128156049 (electronic bk.)





Manual of valvular heart disease

9781496310125 (paperback)





Manual of Screeners for Dementia

Larner, A. J., author.
9783030416362





Management of fractured endodontic instruments : a clinical guide

9783319606514 (electronic bk.)





Management of Hereditary Colorectal Cancer

9783030262341





Integrated pest and disease management in greenhouse crops

9783030223045 (electronic book)





Imaging of the temporomandibular joint

9783319994680 (electronic book)





Human behavior analysis : sensing and understanding

Yu, Zhiwen, author
9789811521096 (electronic bk.)





Healthcare-associated infections in children : a guide to prevention and management

9783319981222 (electronic bk.)





Gapenski's understanding healthcare financial management

Pink, George H., author.
9781640551145 (electronic bk.)





Fractures in the elderly : a guide to practical management

9783319722283 (electronic bk.)





Dietary sugar, salt and fat in human health

9780128169193 (electronic bk.)





Current microbiological research in Africa : selected applications for sustainable environmental management

9783030352967 (electronic bk.)





Compression and chronic wound management

9783030011956 (electronic book)





Clinical manual of fever in children

El-Radhi, A. Sahib, author.
9783319923369 (electronic book)





Clinical Manual of Dermatology

Huang, William W. author.
9783030239404





Children’s Palliative Care: An International Case-Based Manual

9783030273750





Atlas of ulcers in systemic sclerosis : diagnosis and management

9783319984773 (electronic bk.)





Anomalies of the Developing Dentition : a Clinical Guide to Diagnosis and Management

Soxman, Jane A., author.
9783030031640 (electronic bk.)