Nights below Foord Street : literature and popular culture in postindustrial Nova Scotia. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:34:09 -0300. Author: Thompson, Peter, 1981-. Call number: PS 8131 N6 T56 2019. ISBN: 0773559345.
Fully grown : why a stagnant economy is a sign of success. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:34:09 -0300. Author: Vollrath, Dietrich. Call number: HC 110 E44 V65 2020. ISBN: 9780226666006 (hardcover).
Can $p$-values be meaningfully interpreted without random sampling? By projecteuclid.org. Published On: Thu, 26 Mar 2020 22:02 EDT. Norbert Hirschauer, Sven Grüner, Oliver Mußhoff, Claudia Becker, Antje Jantsch. Source: Statistics Surveys, Volume 14, 71--91. Abstract: Besides the inferential errors that abound in the interpretation of $p$-values, the probabilistic pre-conditions (i.e. random sampling or equivalent) for using them at all are often not met by observational studies in the social sciences. This paper systematizes different sampling designs and discusses the restrictive requirements of data collection that are the indispensable prerequisite for using $p$-values.
PLS for Big Data: A unified parallel algorithm for regularised group PLS. By projecteuclid.org. Published On: Mon, 02 Sep 2019 04:00 EDT. Pierre Lafaye de Micheaux, Benoît Liquet, Matthew Sutton. Source: Statistics Surveys, Volume 13, 119--149. Abstract: Partial Least Squares (PLS) methods have been heavily exploited to analyse the association between two blocks of data. These powerful approaches can be applied to data sets where the number of variables is greater than the number of observations and in the presence of high collinearity between variables. Different sparse versions of PLS have been developed to integrate multiple data sets while simultaneously selecting the contributing variables. Sparse modeling is a key factor in obtaining better estimators and identifying associations between multiple data sets. The cornerstone of the sparse PLS methods is the link between the singular value decomposition (SVD) of a matrix (constructed from deflated versions of the original data) and least squares minimization in linear regression. We review four popular PLS methods for two blocks of data. A unified algorithm is proposed to perform all four types of PLS including their regularised versions. We present various approaches to decrease the computation time and show how the whole procedure can be scaled to big data sets. The bigsgPLS R package implements our unified algorithm and is available at https://github.com/matt-sutton/bigsgPLS .
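As a concrete illustration of the SVD/least-squares link described in the abstract above, the following minimal NumPy sketch extracts two-block PLS weight vectors from the cross-product matrix with a simple deflation step. It is an illustrative sketch only, not the bigsgPLS implementation; the deflation scheme, function name and simulated data are assumptions.

    import numpy as np

    def pls_weights(X, Y, n_comp=2):
        """Core of two-block PLS: leading singular vectors of the cross-product matrix,
        with deflation between components (illustrative sketch, not bigsgPLS)."""
        X, Y = X - X.mean(0), Y - Y.mean(0)
        U, V = [], []
        for _ in range(n_comp):
            M = X.T @ Y                                  # cross-covariance (up to scaling)
            u_svd, _, vt = np.linalg.svd(M, full_matrices=False)
            u, v = u_svd[:, 0], vt[0]
            t = X @ u                                    # X scores for this component
            X = X - np.outer(t, t) @ X / (t @ t)         # deflate X before the next component
            U.append(u); V.append(v)
        return np.array(U).T, np.array(V).T

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 10))
    Y = X[:, :3] @ rng.normal(size=(3, 4)) + 0.1 * rng.normal(size=(50, 4))
    W, C = pls_weights(X, Y)
    print(W.shape, C.shape)   # (10, 2), (4, 2)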
Measuring multivariate association and beyond. By projecteuclid.org. Published On: Wed, 16 Nov 2016 22:00 EST. Julie Josse, Susan Holmes. Source: Statistics Surveys, Volume 10, 132--167. Abstract: Simple correlation coefficients between two variables have been generalized to measure association between two matrices in many ways. Coefficients such as the RV coefficient, the distance covariance (dCov) coefficient and kernel-based coefficients are being used by different research communities. Scientists use these coefficients to test whether two random vectors are linked. Once it has been ascertained that there is such association through testing, a next step, often ignored, is to explore and uncover the association’s underlying patterns. This article provides a survey of various measures of dependence between random vectors and tests of independence and emphasizes the connections and differences between the various approaches. After providing definitions of the coefficients and associated tests, we present the recent improvements that enhance their statistical properties and ease of interpretation. We summarize multi-table approaches and provide scenarios where the indices can provide useful summaries of heterogeneous multi-block data. We illustrate these different strategies on several examples of real data and suggest directions for future research.
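For readers unfamiliar with the coefficients surveyed above, here is a minimal NumPy/SciPy sketch of the sample (squared) distance covariance, one of the measures discussed; the double-centering trick is standard, but the function name and simulated data are illustrative.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def dcov2(X, Y):
        """Sample squared distance covariance between two data matrices (n x p, n x q)."""
        def centered_dist(M):
            D = squareform(pdist(M))                            # pairwise Euclidean distances
            return D - D.mean(0) - D.mean(1)[:, None] + D.mean()  # double centering
        A, B = centered_dist(X), centered_dist(Y)
        return (A * B).mean()                                   # dCov^2(X, Y)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    Y = X @ rng.normal(size=(3, 2)) + rng.normal(size=(200, 2))
    print(dcov2(X, Y), dcov2(X, rng.normal(size=(200, 2))))     # linked vs independent vectors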
A comparison of spatial predictors when datasets could be very large. By projecteuclid.org. Published On: Tue, 19 Jul 2016 14:13 EDT. Jonathan R. Bradley, Noel Cressie, Tao Shi. Source: Statistics Surveys, Volume 10, 100--131. Abstract: In this article, we review and compare a number of methods of spatial prediction, where each method is viewed as an algorithm that processes spatial data. To demonstrate the breadth of available choices, we consider both traditional and more-recently-introduced spatial predictors. Specifically, in our exposition we review: traditional stationary kriging, smoothing splines, negative-exponential distance-weighting, fixed rank kriging, modified predictive processes, a stochastic partial differential equation approach, and lattice kriging. This comparison is meant to provide a service to practitioners wishing to decide between spatial predictors. Hence, we provide technical material for the unfamiliar, which includes the definition and motivation for each (deterministic and stochastic) spatial predictor. We use a benchmark dataset of $\mathrm{CO}_{2}$ data from NASA’s AIRS instrument to address computational efficiencies that include CPU time and memory usage. Furthermore, the predictive performance of each spatial predictor is assessed empirically using a hold-out subset of the AIRS data.
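One of the simpler predictors in the comparison above, negative-exponential distance-weighting, can be sketched in a few lines; the range parameter theta, function name and toy data below are illustrative assumptions, not the paper's configuration.

    import numpy as np

    def ned_predict(obs_xy, obs_z, pred_xy, theta=0.2):
        """Negative-exponential distance-weighted prediction at pred_xy (illustrative sketch)."""
        d = np.linalg.norm(pred_xy[:, None, :] - obs_xy[None, :, :], axis=2)  # n_pred x n_obs distances
        w = np.exp(-d / theta)                                                # exponential decay weights
        return (w @ obs_z) / w.sum(axis=1)                                    # weighted average of observations

    rng = np.random.default_rng(0)
    obs_xy = rng.random((500, 2))
    obs_z = np.sin(4 * obs_xy[:, 0]) + 0.1 * rng.normal(size=500)
    grid = rng.random((10, 2))
    print(np.round(ned_predict(obs_xy, obs_z, grid), 3))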
A survey of bootstrap methods in finite population sampling. By projecteuclid.org. Published On: Tue, 15 Mar 2016 09:17 EDT. Zeinab Mashreghi, David Haziza, Christian Léger. Source: Statistics Surveys, Volume 10, 1--52. Abstract: We review bootstrap methods in the context of survey data, where the effect of the sampling design on the variability of estimators has to be taken into account. We present the methods in a unified way by classifying them in three classes: pseudo-population, direct, and survey weights methods. We cover variance estimation and the construction of confidence intervals for stratified simple random sampling as well as some unequal probability sampling designs. We also address the problem of variance estimation in the presence of imputation to compensate for item non-response.
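A minimal sketch of the pseudo-population idea for simple random sampling without replacement is given below; real survey designs need the weighting and rescaling refinements discussed in the survey, and the function name, population size and replication rule are illustrative assumptions.

    import numpy as np

    def pseudo_pop_bootstrap_var(sample, N, B=1000, rng=None):
        """Pseudo-population bootstrap variance of the sample mean under SRSWOR (illustrative sketch)."""
        if rng is None:
            rng = np.random.default_rng(0)
        n = len(sample)
        pseudo = np.repeat(sample, int(round(N / n)))   # replicate each sampled unit ~N/n times
        means = [rng.choice(pseudo, size=n, replace=False).mean() for _ in range(B)]
        return np.var(means, ddof=1)

    y = np.random.default_rng(1).normal(50, 10, size=100)
    print(pseudo_pop_bootstrap_var(y, N=10000))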
$M$-functionals of multivariate scatter. By projecteuclid.org. Published On: Fri, 20 Mar 2015 09:11 EDT. Lutz Dümbgen, Markus Pauly, Thomas Schweizer. Source: Statistics Surveys, Volume 9, 32--105. Abstract: This survey provides a self-contained account of $M$-estimation of multivariate scatter. In particular, we present new proofs for existence of the underlying $M$-functionals and discuss their weak continuity and differentiability. This is done in a rather general framework with matrix-valued random variables. By doing so we reveal a connection between Tyler’s (1987a) $M$-functional of scatter and the estimation of proportional covariance matrices. Moreover, this general framework allows us to treat a new class of scatter estimators, based on symmetrizations of arbitrary order. Finally, these results are applied to $M$-estimation of multivariate location and scatter via multivariate $t$-distributions.
Semi-parametric estimation for conditional independence multivariate finite mixture models. By projecteuclid.org. Published On: Fri, 06 Feb 2015 08:39 EST. Didier Chauveau, David R. Hunter, Michael Levine. Source: Statistics Surveys, Volume 9, 1--31. Abstract: The conditional independence assumption for nonparametric multivariate finite mixture models, a weaker form of the well-known conditional independence assumption for random effects models for longitudinal data, is the subject of an increasing number of theoretical and algorithmic developments in the statistical literature. After presenting a survey of this literature, including an in-depth discussion of the all-important identifiability results, this article describes and extends an algorithm for estimation of the parameters in these models. The algorithm works for any number of components in three or more dimensions. It possesses a descent property and can be easily adapted to situations where the data are grouped in blocks of conditionally independent variables. We discuss how to adapt this algorithm to various location-scale models that link component densities, and we even adapt it to a particular class of univariate mixture problems in which the components are assumed symmetric. We give a bandwidth selection procedure for our algorithm. Finally, we demonstrate the effectiveness of our algorithm using a simulation study and two psychometric datasets.
Wilcoxon-Mann-Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules. By projecteuclid.org. Published On: Thu, 05 Aug 2010 15:41 EDT. Michael P. Fay, Michael A. Proschan. Source: Statistics Surveys, Volume 4, 1--39. Abstract: In a mathematical approach to hypothesis tests, we start with a clearly defined set of hypotheses and choose the test with the best properties for those hypotheses. In practice, we often start with less precise hypotheses. For example, often a researcher wants to know which of two groups generally has the larger responses, and either a t-test or a Wilcoxon-Mann-Whitney (WMW) test could be acceptable. Although both t-tests and WMW tests are usually associated with quite different hypotheses, the decision rule and p-value from either test could be associated with many different sets of assumptions, which we call perspectives. It is useful to have many of the different perspectives to which a decision rule may be applied collected in one place, since each perspective allows a different interpretation of the associated p-value. Here we collect many such perspectives for the two-sample t-test, the WMW test and other related tests. We discuss validity and consistency under each perspective and discuss recommendations between the tests in light of these many different perspectives. Finally, we briefly discuss a decision rule for testing genetic neutrality where knowledge of the many perspectives is vital to the proper interpretation of the decision rule.
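To make the two decision rules being compared concrete, this short sketch runs both tests on the same simulated two-group data; the sample sizes and effect size are illustrative.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1.0, size=40)
    b = rng.normal(0.5, 1.0, size=40)

    t_stat, t_p = stats.ttest_ind(a, b)                                  # two-sample t-test
    w_stat, w_p = stats.mannwhitneyu(a, b, alternative="two-sided")      # WMW test
    print(f"t-test p = {t_p:.3f}, WMW p = {w_p:.3f}")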
Can a powerful neural network be a teacher for a weaker neural network? (arXiv:2005.00393v2 [cs.LG] UPDATED). By arxiv.org. Transfer learning is widely used to learn in one context and apply the result in another, i.e. to exploit the capacity to apply acquired knowledge and skills to new situations. But is it possible to transfer the learning from a deep neural network to a weaker neural network? Is it possible to improve the performance of a weak neural network using the knowledge acquired by a more powerful neural network? In this work, during the training process of a weak network, we add a loss function that minimizes the distance between the features previously learned by a strong neural network and the features that the weak network must try to learn. To demonstrate the effectiveness and robustness of our approach, we conducted a large number of experiments using three well-known datasets and demonstrated that a weak neural network can increase its performance if its learning process is driven by a more powerful neural network.
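A minimal PyTorch sketch of the training idea described above, adding a feature-distance term to the weak network's loss, is shown below; the MSE distance, weighting alpha and tensor shapes are assumptions for illustration, not necessarily the paper's exact choices.

    import torch
    import torch.nn.functional as F

    def distillation_loss(weak_feats, strong_feats, logits, labels, alpha=0.5):
        """Loss for the weak network: task loss plus distance to the strong network's features.
        (Illustrative sketch; the paper's exact distance and weighting may differ.)"""
        task_loss = F.cross_entropy(logits, labels)
        feat_loss = F.mse_loss(weak_feats, strong_feats.detach())  # strong network is frozen
        return task_loss + alpha * feat_loss

    # weak_feats / strong_feats: (batch, d) feature tensors; logits: (batch, n_classes); labels: (batch,)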
Capturing and Explaining Trajectory Singularities using Composite Signal Neural Networks. (arXiv:2003.10810v2 [cs.LG] UPDATED). By arxiv.org. Spatial trajectories are ubiquitous and complex signals. Their analysis is crucial in many research fields, from urban planning to neuroscience. Several approaches have been proposed to cluster trajectories. They rely on hand-crafted features, which struggle to capture the spatio-temporal complexity of the signal, or on Artificial Neural Networks (ANNs), which can be more efficient but less interpretable. In this paper we present a novel ANN architecture designed to capture the spatio-temporal patterns characteristic of a set of trajectories, while taking into account the demographics of the navigators. Hence, our model extracts markers linked to both behaviour and demographics. We propose a composite signal analyser (CompSNN) combining three simple ANN modules. Each of these modules uses a different signal representation of the trajectory while remaining interpretable. Our CompSNN performs significantly better than its modules taken in isolation and allows us to visualise which parts of the signal were most useful for discriminating the trajectories.
Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach. (arXiv:2003.02157v2 [physics.soc-ph] UPDATED). By arxiv.org. In recent years, multi-access edge computing (MEC) has become a key enabler for handling the massive expansion of Internet of Things (IoT) applications and services. However, the energy consumption of a MEC network depends on volatile tasks, which induces risk in energy demand estimation. As an energy supplier, a microgrid can facilitate seamless energy supply. However, the risk associated with energy supply is also increased due to unpredictable energy generation from renewable and non-renewable sources. In particular, the risk of energy shortfall involves uncertainties in both energy consumption and generation. In this paper, we study a risk-aware energy scheduling problem for a microgrid-powered MEC network. First, we formulate an optimization problem considering the conditional value-at-risk (CVaR) measurement for both energy consumption and generation, where the objective is to minimize the loss due to energy shortfall in the MEC network, and we show that this problem is NP-hard. Second, we analyze our formulated problem using a multi-agent stochastic game that ensures a joint-policy Nash equilibrium, and show the convergence of the proposed model. Third, we derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks. This method mitigates the curse of dimensionality of the state space and chooses the best policy among the agents for the proposed problem. Finally, the experimental results establish a significant performance gain from considering CVaR for high-accuracy energy scheduling, compared with both the single-agent and random-agent models.
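Since the formulation rests on the CVaR measure of energy shortfall, a small sketch of how CVaR is computed from a sample of losses may help; the loss distribution and confidence level are illustrative assumptions.

    import numpy as np

    def cvar(losses, alpha=0.95):
        """Conditional value-at-risk: mean loss in the worst (1 - alpha) tail (illustrative)."""
        losses = np.sort(np.asarray(losses))
        var = np.quantile(losses, alpha)      # value-at-risk at level alpha
        return losses[losses >= var].mean()   # average of the tail beyond VaR

    shortfall = np.random.default_rng(0).gamma(2.0, 3.0, size=10000)  # hypothetical energy-shortfall sample
    print(cvar(shortfall, alpha=0.95))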
Mnemonics Training: Multi-Class Incremental Learning without Forgetting. (arXiv:2002.10211v3 [cs.CV] UPDATED). By arxiv.org. Multi-Class Incremental Learning (MCIL) aims to learn new concepts by incrementally updating a model trained on previous concepts. However, there is an inherent trade-off between effectively learning new concepts and avoiding catastrophic forgetting of previous ones. To alleviate this issue, it has been proposed to keep around a few examples of the previous concepts, but the effectiveness of this approach heavily depends on the representativeness of these examples. This paper proposes a novel and automatic framework we call mnemonics, where we parameterize exemplars and make them optimizable in an end-to-end manner. We train the framework through bilevel optimization, i.e., at the model level and the exemplar level. We conduct extensive experiments on three MCIL benchmarks, CIFAR-100, ImageNet-Subset and ImageNet, and show that using mnemonics exemplars can surpass the state of the art by a large margin. Interestingly, the mnemonics exemplars tend to lie on the boundaries between different classes.
Bayesian factor models for multivariate categorical data obtained from questionnaires. (arXiv:1910.04283v2 [stat.AP] UPDATED). By arxiv.org. Factor analysis is a flexible technique for assessment of multivariate dependence and codependence. Besides being an exploratory tool used to reduce the dimensionality of multivariate data, it allows estimation of common factors that often have an interesting theoretical interpretation in real problems. However, standard factor analysis is only applicable when the variables are scaled, which is often inappropriate, for example, in data obtained from questionnaires in the field of psychology, where the variables are often categorical. In this framework, we propose a factor model for the analysis of multivariate ordered and non-ordered polychotomous data. The inference procedure is done under the Bayesian approach via Markov chain Monte Carlo methods. Two Monte Carlo simulation studies are presented to investigate the performance of this approach in terms of estimation bias, precision and assessment of the number of factors. We also illustrate the proposed method by analyzing participants' responses to the Motivational State Questionnaire dataset, developed to study emotions in laboratory and field settings.
Multi-scale analysis of lead-lag relationships in high-frequency financial markets. (arXiv:1708.03992v3 [stat.ME] UPDATED). By arxiv.org. We propose a novel estimation procedure for scale-by-scale lead-lag relationships of financial assets observed at high frequency in a non-synchronous manner. The proposed estimation procedure does not require any interpolation processing of the original datasets and is applicable to those with the highest time resolution available. Consistency of the proposed estimators is shown under the continuous-time framework developed in our previous work, Hayashi and Koike (2018). An empirical application to a quote dataset of the NASDAQ-100 assets identifies two types of lead-lag relationships at different time scales.
Alternating Maximization: Unifying Framework for 8 Sparse PCA Formulations and Efficient Parallel Codes. (arXiv:1212.4137v2 [stat.ML] UPDATED). By arxiv.org. Given a multivariate data set, sparse principal component analysis (SPCA) aims to extract several linear combinations of the variables that together explain the variance in the data as much as possible, while controlling the number of nonzero loadings in these combinations. In this paper we consider 8 different optimization formulations for computing a single sparse loading vector; these are obtained by combining the following factors: we employ two norms for measuring variance (L2, L1) and two sparsity-inducing norms (L0, L1), which are used in two different ways (constraint, penalty). Three of our formulations, notably the one with L0 constraint and L1 variance, have not been considered in the literature. We give a unifying reformulation which we propose to solve via a natural alternating maximization (AM) method. We show that the AM method is nontrivially equivalent to GPower (Journée et al.; JMLR 11:517--553, 2010) for all our formulations. Besides this, we provide 24 efficient parallel SPCA implementations: 3 codes (multi-core, GPU and cluster) for each of the 8 problems. Parallelism in the methods is aimed at i) speeding up computations (our GPU code can be 100 times faster than an efficient serial code written in C++), ii) obtaining solutions explaining more variance, and iii) dealing with big data problems (our cluster code is able to solve a 357 GB problem in about a minute).
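To convey the flavour of alternating maximization for a single sparse loading vector, here is a sketch of one penalized rank-one formulation (L2 variance with an L1-type soft threshold); it is illustrative only and does not reproduce the paper's eight formulations, GPower equivalence or parallel codes.

    import numpy as np

    def sparse_loading(X, gamma=1.0, iters=200):
        """Alternating updates for a single sparse loading vector
        (penalized rank-one sketch; not the paper's full 8-formulation solver)."""
        v = np.linalg.svd(X, full_matrices=False)[2][0]   # start from the leading right singular vector
        for _ in range(iters):
            u = X @ v
            u /= np.linalg.norm(u)                         # update scores
            w = X.T @ u
            v = np.sign(w) * np.maximum(np.abs(w) - gamma, 0.0)  # soft threshold induces sparsity
            nrm = np.linalg.norm(v)
            if nrm == 0:
                break
            v /= nrm
        return v

    X = np.random.default_rng(0).normal(size=(100, 20))
    print(np.round(sparse_loading(X, gamma=1.0), 2))       # gamma controls the number of nonzero loadings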
Deep Learning on Point Clouds for False Positive Reduction at Nodule Detection in Chest CT Scans. (arXiv:2005.03654v1 [eess.IV]). By arxiv.org. The paper focuses on a novel approach for false-positive reduction (FPR) of nodule candidates in a computer-aided detection (CADe) system after the suspicious-lesion proposal stage. Unlike common approaches in medical image analysis, the proposed approach considers the input data not as a 2D or 3D image but as a point cloud, and uses deep learning models for point clouds. We found that models for point clouds require less memory and are faster in both training and inference than traditional 3D CNNs, achieve better performance, and do not impose restrictions on the size of the input image, and thereby on the size of the nodule candidate. We propose an algorithm for transforming 3D CT scan data into a point cloud. In some cases the volume of the nodule candidate can be much smaller than the surrounding context, for example in the case of subpleural localization of the nodule. We therefore developed an algorithm for sampling points from a point cloud constructed from a 3D image of the candidate region; the algorithm guarantees that both context and candidate information are captured in the point cloud of the nodule candidate. An experiment creating a dataset from the open LIDC-IDRI database for the FPR task was carefully designed, set up and described in detail. A data augmentation technique was applied to avoid overfitting and as an upsampling method. Experiments are conducted with PointNet, PointNet++ and DGCNN. We show that the proposed approach outperforms baseline 3D CNN models, achieving 85.98 FROC versus 77.26 FROC for the baseline models.
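A minimal sketch of turning a 3D nodule-candidate crop into a fixed-size point cloud is shown below; the intensity threshold, normalization and sampling rule are illustrative assumptions and do not implement the paper's context-and-candidate sampling guarantee.

    import numpy as np

    def volume_to_point_cloud(volume, hu_threshold=-400, n_points=1024, rng=None):
        """Convert a 3D candidate crop into a fixed-size, normalized point cloud (illustrative sketch)."""
        if rng is None:
            rng = np.random.default_rng(0)
        coords = np.argwhere(volume > hu_threshold).astype(np.float32)   # voxel coordinates above threshold
        if len(coords) == 0:
            return np.zeros((n_points, 3), dtype=np.float32)
        idx = rng.choice(len(coords), size=n_points, replace=len(coords) < n_points)
        pts = coords[idx]
        return (pts - pts.mean(0)) / (pts.std(0) + 1e-6)                 # center and scale

    crop = np.random.default_rng(1).integers(-1000, 400, size=(48, 48, 48)).astype(np.float32)
    print(volume_to_point_cloud(crop).shape)   # (1024, 3)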
Local Cascade Ensemble for Multivariate Data Classification. (arXiv:2005.03645v1 [cs.LG]). By arxiv.org. We present LCE, a Local Cascade Ensemble for traditional (tabular) multivariate data classification, and its extension LCEM for Multivariate Time Series (MTS) classification. LCE is a new hybrid ensemble method that combines an explicit boosting-bagging approach to handle the usual bias-variance tradeoff faced by machine learning models and an implicit divide-and-conquer approach to individualize classifier errors on different parts of the training data. Our evaluation firstly shows that the hybrid ensemble method LCE outperforms the state-of-the-art classifiers on the UCI datasets and that LCEM outperforms the state-of-the-art MTS classifiers on the UEA datasets. Furthermore, LCEM provides explainability by design and manifests robust performance when faced with challenges arising from continuous data collection (different MTS length, missing data and noise).
A simulation study of disaggregation regression for spatial disease mapping. (arXiv:2005.03604v1 [stat.AP]). By arxiv.org. Disaggregation regression has become an important tool in spatial disease mapping for making fine-scale predictions of disease risk from aggregated response data. By including high resolution covariate information and modelling the data generating process on a fine scale, it is hoped that these models can accurately learn the relationships between covariates and response at a fine spatial scale. However, validating these high resolution predictions can be a challenge, as often there is no data observed at this spatial scale. In this study, disaggregation regression was performed on simulated data in various settings and the resulting fine-scale predictions are compared to the simulated ground truth. Performance was investigated with varying numbers of data points, sizes of aggregated areas and levels of model misspecification. The effectiveness of cross validation on the aggregate level as a measure of fine-scale predictive performance was also investigated. Predictive performance improved as the number of observations increased and as the size of the aggregated areas decreased. When the model was well-specified, fine-scale predictions were accurate even with small numbers of observations and large aggregated areas. Under model misspecification, predictive performance was significantly worse for large aggregated areas but remained high when response data was aggregated over smaller regions. Cross-validation correlation on the aggregate level was a moderately good predictor of fine-scale predictive performance. While the simulations are unlikely to capture the nuances of real-life response data, this study gives insight into the effectiveness of disaggregation regression in different contexts.
Physics-informed neural network for ultrasound nondestructive quantification of surface breaking cracks. (arXiv:2005.03596v1 [cs.LG]). By arxiv.org. We introduce an optimized physics-informed neural network (PINN) trained to solve the problem of identifying and characterizing a surface breaking crack in a metal plate. PINNs are neural networks that can combine data and physics in the learning process by adding the residuals of a system of Partial Differential Equations to the loss function. Our PINN is supervised with realistic ultrasonic surface acoustic wave data acquired at a frequency of 5 MHz. The ultrasonic surface wave data is represented as a surface deformation on the top surface of a metal plate, measured by using the method of laser vibrometry. The PINN is physically informed by the acoustic wave equation and its convergence is sped up using adaptive activation functions. The adaptive activation function uses a scalable hyperparameter in the activation function, which is optimized to achieve the best performance of the network as it dynamically changes the topology of the loss function involved in the optimization process. The use of the adaptive activation function significantly improves convergence, as notably observed in the current study. We use PINNs to estimate the speed of sound of the metal plate, which we do with an error of 1%, and then, by allowing the speed of sound to be space dependent, we identify and characterize the crack as the positions where the speed of sound has decreased. Our study also shows the effect of sub-sampling of the data on the sensitivity of sound speed estimates. More broadly, the resulting model shows a promising deep neural network model for ill-posed inverse problems.
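The adaptive activation idea can be sketched as a trainable slope inside the nonlinearity; the scale factor, initialization and tiny network below are assumptions for illustration rather than the paper's exact architecture.

    import torch
    import torch.nn as nn

    class AdaptiveTanh(nn.Module):
        """Tanh with a trainable slope parameter, one common way to speed up PINN convergence
        (sketch; the scale factor n and initialization are assumptions)."""
        def __init__(self, n=10.0):
            super().__init__()
            self.n = n
            self.a = nn.Parameter(torch.tensor(1.0 / n))   # trainable activation slope

        def forward(self, x):
            return torch.tanh(self.n * self.a * x)

    pinn = nn.Sequential(nn.Linear(3, 64), AdaptiveTanh(),
                         nn.Linear(64, 64), AdaptiveTanh(),
                         nn.Linear(64, 1))                  # inputs (x, y, t) -> predicted wavefield
    print(pinn(torch.randn(5, 3)).shape)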
Diffusion Copulas: Identification and Estimation. (arXiv:2005.03513v1 [econ.EM]). By arxiv.org. We propose a new semiparametric approach for modelling nonlinear univariate diffusions, where the observed process is a nonparametric transformation of an underlying parametric diffusion (UPD). This modelling strategy yields a general class of semiparametric Markov diffusion models with parametric dynamic copulas and nonparametric marginal distributions. We provide primitive conditions for the identification of the UPD parameters together with the unknown transformations from discrete samples. Likelihood-based estimators of both parametric and nonparametric components are developed and we analyze the asymptotic properties of these. Kernel-based drift and diffusion estimators are also proposed and shown to be normally distributed in large samples. A simulation study investigates the finite sample performance of our estimators in the context of modelling US short-term interest rates. We also present a simple application of the proposed method for modelling the CBOE volatility index data.
Distributional Robustness of K-class Estimators and the PULSE. (arXiv:2005.03353v1 [econ.EM]). By arxiv.org. In causal settings, such as instrumental variable settings, it is well known that estimators based on ordinary least squares (OLS) can yield biased and non-consistent estimates of the causal parameters. This is partially overcome by two-stage least squares (TSLS) estimators. These are, under weak assumptions, consistent but do not have desirable finite sample properties: in many models, for example, they do not have finite moments. The set of K-class estimators can be seen as a non-linear interpolation between OLS and TSLS and are known to have improved finite sample properties. Recently, in causal discovery, invariance properties such as the moment criterion which TSLS estimators leverage have been exploited for causal structure learning: e.g., in cases where the causal parameter is not identifiable, some structure of the non-zero components may be identified, and coverage guarantees are available. Subsequently, anchor regression has been proposed to trade off invariance and predictability. The resulting estimator is shown to have optimal predictive performance under bounded shift interventions. In this paper, we show that the concepts of anchor regression and K-class estimators are closely related. Establishing this connection comes with two benefits: (1) it enables us to prove robustness properties for existing K-class estimators when considering distributional shifts; and (2) we propose a novel estimator in instrumental variable settings by minimizing the mean squared prediction error subject to the constraint that the estimator lies in an asymptotically valid confidence region of the causal parameter. We call this estimator PULSE (p-uncorrelated least squares estimator) and show that it can be computed efficiently, even though the underlying optimization problem is non-convex. We further prove that it is consistent.
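A compact numerical sketch of the K-class family may help fix ideas: kappa = 0 recovers OLS and kappa = 1 recovers TSLS, with intermediate values interpolating between them; the simulated confounded design is purely illustrative and the sketch does not implement PULSE.

    import numpy as np

    def k_class(y, X, Z, kappa):
        """K-class estimator: kappa = 0 gives OLS, kappa = 1 gives TSLS (illustrative sketch)."""
        n = len(y)
        P_Z = Z @ np.linalg.solve(Z.T @ Z, Z.T)          # projection onto the instruments
        M = (1 - kappa) * np.eye(n) + kappa * P_Z        # equals I - kappa * M_Z
        return np.linalg.solve(X.T @ M @ X, X.T @ M @ y)

    rng = np.random.default_rng(0)
    Z = rng.normal(size=(500, 2))
    H = rng.normal(size=500)                              # hidden confounder
    X = Z @ np.array([1.0, -0.5]) + H + rng.normal(size=500)
    y = 2.0 * X + H + rng.normal(size=500)
    X = X[:, None]
    print([k_class(y, X, Z, k).item() for k in (0.0, 0.5, 1.0)])  # OLS is biased; TSLS is close to 2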
On a computationally-scalable sparse formulation of the multidimensional and non-stationary maximum entropy principle. (arXiv:2005.03253v1 [stat.CO]). By arxiv.org. Data-driven modelling and computational predictions based on the maximum entropy principle (MaxEnt principle) aim at finding as-simple-as-possible, but not simpler than necessary, models that avoid the data overfitting problem. We derive a multivariate non-parametric and non-stationary formulation of the MaxEnt principle and show that its solution can be approximated through numerical maximisation of a sparse constrained optimization problem with regularization. Application of the resulting algorithm to popular financial benchmarks reveals memoryless models allowing for simple and qualitative descriptions of the major stock market index data. We compare the obtained MaxEnt models to heteroscedastic models from computational econometrics (GARCH, GARCH-GJR, MS-GARCH, GARCH-PML4) in terms of model fit, complexity and prediction quality. We compare the resulting model log-likelihoods, the values of the Bayesian Information Criterion, posterior model probabilities, the quality of the data autocorrelation function fits, as well as the Value-at-Risk prediction quality. We show that all of the considered seven major financial benchmark time series (DJI, SPX, FTSE, STOXX, SMI, HSI and N225) are better described by conditionally memoryless MaxEnt models with non-stationary regime switching than by the common econometric models with finite memory. This analysis also reveals a sparse network of statistically significant temporal relations for the positive and negative latent variance changes among different markets. The code is provided for open access.
Fast multivariate empirical cumulative distribution function with connection to kernel density estimation. (arXiv:2005.03246v1 [cs.DS]). By arxiv.org. This paper revisits the problem of computing empirical cumulative distribution functions (ECDF) efficiently on large, multivariate datasets. Computing an ECDF at one evaluation point requires $\mathcal{O}(N)$ operations on a dataset composed of $N$ data points. Therefore, a direct evaluation of ECDFs at $N$ evaluation points requires a quadratic $\mathcal{O}(N^{2})$ number of operations, which is prohibitive for large-scale problems. Two fast and exact methods are proposed and compared. The first one is based on fast summation in lexicographical order, with an $\mathcal{O}(N\log N)$ complexity, and requires the evaluation points to lie on a regular grid. The second one is based on the divide-and-conquer principle, with an $\mathcal{O}(N\log(N)^{(d-1)\vee 1})$ complexity, and requires the evaluation points to coincide with the input points. The two fast algorithms are described and detailed in the general $d$-dimensional case, and numerical experiments validate their speed and accuracy. Secondly, the paper establishes a direct connection between cumulative distribution functions and kernel density estimation (KDE) for a large class of kernels. This connection paves the way for fast exact algorithms for multivariate kernel density estimation and kernel regression. Numerical tests with the Laplacian kernel validate the speed and accuracy of the proposed algorithms. A broad range of large-scale multivariate density estimation, cumulative distribution estimation, survival function estimation and regression problems can benefit from the proposed numerical methods.
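The univariate special case shows the gist of the fast-summation idea: sorting once turns many ECDF evaluations into binary searches, $\mathcal{O}(N\log N)$ overall instead of $\mathcal{O}(N^{2})$. The multivariate algorithms in the paper generalize this; the sketch below is only the 1D illustration.

    import numpy as np

    def ecdf_1d(data, eval_points):
        """Univariate ECDF at many evaluation points via one sort plus binary searches."""
        xs = np.sort(data)                                            # O(N log N) once
        return np.searchsorted(xs, eval_points, side="right") / len(xs)

    rng = np.random.default_rng(0)
    x = rng.normal(size=100_000)
    grid = np.linspace(-3, 3, 7)
    print(np.round(ecdf_1d(x, grid), 3))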
Multi-Label Sampling based on Local Label Imbalance. (arXiv:2005.03240v1 [cs.LG]). By arxiv.org. Class imbalance is an inherent characteristic of multi-label data that hinders most multi-label learning methods. One efficient and flexible strategy to deal with this problem is to employ sampling techniques before training a multi-label learning model. Although existing multi-label sampling approaches alleviate the global imbalance of multi-label datasets, it is actually the imbalance level within the local neighbourhood of minority class examples that plays a key role in performance degradation. To address this issue, we propose a novel measure to assess the local label imbalance of multi-label datasets, as well as two multi-label sampling approaches based on the local label imbalance, namely MLSOL and MLUL. By considering all informative labels, MLSOL creates more diverse and better labeled synthetic instances for difficult examples, while MLUL eliminates instances that are harmful to their local region. Experimental results on 13 multi-label datasets demonstrate the effectiveness of the proposed measure and sampling approaches for a variety of evaluation metrics, particularly in the case of an ensemble of classifiers trained on repeated samples of the original data.
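To convey what a local (as opposed to global) imbalance measure looks like, here is a single-label sketch that scores each instance by the share of its k nearest neighbours carrying a different label; the paper's measure is defined per label for multi-label data, so this is an analogy rather than the proposed measure.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def local_imbalance(X, y, k=5):
        """Per-instance local imbalance: fraction of the k nearest neighbours with a different label."""
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
        _, idx = nn.kneighbors(X)
        neigh = y[idx[:, 1:]]                      # drop the point itself (index 0)
        return (neigh != y[:, None]).mean(axis=1)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))
    y = (X[:, 0] + 0.5 * rng.normal(size=300) > 1.2).astype(int)   # imbalanced labels
    scores = local_imbalance(X, y)
    print(scores[y == 1].mean(), scores[y == 0].mean())            # minority examples sit in harder neighbourhoods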
Efficient Characterization of Dynamic Response Variation Using Multi-Fidelity Data Fusion through Composite Neural Network. (arXiv:2005.03213v1 [stat.ML]). By arxiv.org. Uncertainties in a structure are inevitable and generally lead to variation in dynamic response predictions. For a complex structure, brute-force Monte Carlo simulation for response variation analysis is infeasible since a single run may already be computationally costly. Data-driven meta-modeling approaches have thus been explored to facilitate efficient emulation and statistical inference. The performance of a meta-model hinges upon both the quality and quantity of the training dataset. In actual practice, however, high-fidelity data acquired from high-dimensional finite element simulation or experiment are generally scarce, which poses a significant challenge to meta-model establishment. In this research, we take advantage of the multi-level response prediction opportunity in structural dynamic analysis, i.e., acquiring rapidly a large amount of low-fidelity data from reduced-order modeling, and acquiring accurately a small amount of high-fidelity data from full-scale finite element analysis. Specifically, we formulate a composite neural network fusion approach that can fully utilize the multi-level, heterogeneous datasets obtained. It implicitly identifies the correlation of the low- and high-fidelity datasets, which yields improved accuracy when compared with the state of the art. Comprehensive investigations using frequency response variation characterization as a case example are carried out to demonstrate the performance.
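A toy version of the composite multi-fidelity idea is sketched below: a network trained on plentiful low-fidelity (reduced-order) responses feeds a small correction network trained on scarce high-fidelity data; the layer sizes, fusion by concatenation and module names are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class CompositeFusionNet(nn.Module):
        """Multi-fidelity fusion sketch: low-fidelity surrogate plus a learned high-fidelity correction."""
        def __init__(self, d_in):
            super().__init__()
            self.low = nn.Sequential(nn.Linear(d_in, 64), nn.Tanh(), nn.Linear(64, 1))
            self.corr = nn.Sequential(nn.Linear(d_in + 1, 32), nn.Tanh(), nn.Linear(32, 1))

        def forward(self, x):
            y_low = self.low(x)                                      # reduced-order-model surrogate
            return y_low + self.corr(torch.cat([x, y_low], dim=1))   # correction toward high-fidelity data

    net = CompositeFusionNet(d_in=8)
    print(net(torch.randn(4, 8)).shape)   # (4, 1)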
Active Learning with Multiple Kernels. (arXiv:2005.03188v1 [cs.LG]). By arxiv.org. Online multiple kernel learning (OMKL) has shown attractive performance in nonlinear function learning tasks. Leveraging a random feature approximation, the major drawback of OMKL, known as the curse of dimensionality, has recently been alleviated. In this paper, we introduce a new research problem, termed (stream-based) active multiple kernel learning (AMKL), in which a learner is allowed to label selected data from an oracle according to a selection criterion. This is necessary in many real-world applications as acquiring true labels is costly or time-consuming. We prove that AMKL achieves an optimal sublinear regret, implying that the proposed selection criterion indeed avoids unnecessary label requests. Furthermore, we propose AMKL with adaptive kernel selection (AMKL-AKS), in which irrelevant kernels can be excluded from a kernel dictionary 'on the fly'. This approach can improve the efficiency of active learning as well as the accuracy of the function approximation. Via numerical tests with various real datasets, it is demonstrated that AMKL-AKS yields similar or better performance than the best-known OMKL, with a smaller number of labeled data.
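The random feature approximation that underpins OMKL/AMKL can be sketched quickly for an RBF kernel; the feature dimension, bandwidth and variable names below are illustrative, and a kernel dictionary would simply use several bandwidths.

    import numpy as np

    def random_fourier_features(X, D=2000, gamma=0.5, rng=None):
        """Random feature approximation of the RBF kernel exp(-gamma * ||x - y||^2) (illustrative sketch)."""
        if rng is None:
            rng = np.random.default_rng(0)
        d = X.shape[1]
        W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))   # frequencies sampled from the kernel's spectrum
        b = rng.uniform(0, 2 * np.pi, size=D)
        return np.sqrt(2.0 / D) * np.cos(X @ W + b)

    X = np.random.default_rng(1).normal(size=(5, 3))
    Z = random_fourier_features(X)
    print(np.round(Z @ Z.T, 2))   # approximates the 5 x 5 RBF kernel matrix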
Joint Multi-Dimensional Model for Global and Time-Series Annotations. (arXiv:2005.03117v1 [cs.LG]). By arxiv.org. Crowdsourcing is a popular approach to collecting annotations for unlabeled data instances. It involves collecting a large number of annotations from several, often naive, untrained annotators for each data instance, which are then combined to estimate the ground truth. Further, annotations for constructs such as affect are often multi-dimensional, with annotators rating multiple dimensions, such as valence and arousal, for each instance. Most annotation fusion schemes, however, ignore this aspect and model each dimension separately. In this work we address this by proposing a generative model for multi-dimensional annotation fusion, which models the dimensions jointly, leading to more accurate ground truth estimates. The proposed model is applicable to both global and time-series annotation fusion problems and treats the ground truth as a latent variable distorted by the annotators. The model parameters are estimated using the Expectation-Maximization algorithm, and we evaluate its performance using synthetic data and real emotion corpora, as well as on an artificial task with human annotations.
Adaptive Invariance for Molecule Property Prediction. (arXiv:2005.03004v1 [q-bio.QM]). By arxiv.org. Effective property prediction methods can help accelerate the search for COVID-19 antivirals, either through accurate in-silico screens or by effectively guiding ongoing at-scale experimental efforts. However, existing prediction tools have limited ability to accommodate the scarce or fragmented training data currently available. In this paper, we introduce a novel approach to learn predictors that can generalize or extrapolate beyond the heterogeneous data. Our method builds on and extends the recently proposed invariant risk minimization, adaptively forcing the predictor to avoid nuisance variation. We achieve this by continually exercising and manipulating latent representations of molecules to highlight undesirable variation to the predictor. To test the method we use a combination of three data sources: SARS-CoV-2 antiviral screening data, molecular fragments that bind to the SARS-CoV-2 main protease, and large screening data for SARS-CoV-1. Our predictor outperforms state-of-the-art transfer learning methods by a significant margin. We also report the top 20 predictions of our model on the Broad drug repurposing hub.
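Since the method builds on invariant risk minimization, a sketch of the standard IRMv1 gradient penalty is shown for orientation; this is the published IRM device, not the paper's adaptive extension, and the binary-classification setup is an assumption.

    import torch
    import torch.nn.functional as F

    def irm_penalty(logits, labels):
        """IRMv1 penalty: squared gradient of the environment risk w.r.t. a dummy classifier scale."""
        scale = torch.ones(1, requires_grad=True)
        loss = F.binary_cross_entropy_with_logits(logits * scale, labels)
        grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
        return (grad ** 2).sum()

    # Per environment e: total_loss = risk_e + lambda_irm * irm_penalty(logits_e, labels_e)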
State Library creates a new space for Aboriginal communities to connect with their cultural heritage. By feedproxy.google.com. Published On: Wed, 19 Feb 2020 23:11:15 +0000. Thursday 20 February 2020: In an Australian first, the State Library of NSW launched a new digital space for Aboriginal communities to connect with their histories and cultures.
mvord: An R Package for Fitting Multivariate Ordinal Regression Models. By www.jstatsoft.org. Published On: Sat, 18 Apr 2020 03:35:08 +0000. The R package mvord implements composite likelihood estimation in the class of multivariate ordinal regression models with a multivariate probit and a multivariate logit link. A flexible modeling framework for multiple ordinal measurements on the same subject is set up, which takes into consideration the dependence among the multiple observations by employing different error structures. Heterogeneity in the error structure across the subjects can be accounted for by the package, which allows for covariate dependent error structures. In addition, different regression coefficients and threshold parameters for each response are supported. If a reduction of the parameter space is desired, constraints on the threshold as well as on the regression coefficients can be specified by the user. The proposed multivariate framework is illustrated by means of a credit risk application.
Water hyacinth : a potential lignocellulosic biomass for bioethanol. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Author: Sharma, Anuja. Call number: Online. ISBN: 9783030356323 (electronic bk.).
Uflacker's atlas of vascular anatomy. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Author: Uflacker, Andre. Call number: Online. ISBN: 9781496356017 (hardback).
Terrestrial hermit crab populations in the Maldives : ecology, distribution and anthropogenic impact. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Author: Steibl, Sebastian. Call number: Online. ISBN: 9783658295417 (electronic bk.).
Temporomandibular disorders : a translational approach from basic science to clinical applicability. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9783319572475 (electronic bk.).
Sustainable agriculture : advances in plant metabolome and microbiome. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Author: Parray, Javid Ahmad. Call number: Online. ISBN: 9780128173749 (electronic bk.).
Science and practice of pressure ulcer management. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9781447174134 (electronic bk.).
Salt, fat and sugar reduction : sensory approaches for nutritional reformulation of foods and beverages. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Author: O'Sullivan, Maurice G. Call number: Online. ISBN: 9780128226124 (electronic bk.).
Regulation of cancer immune checkpoints : molecular and cellular mechanisms and therapy. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9789811532665.
Primary care for older adults : models and challenges. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9783319613291.
Population genomics : marine organisms. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 3030379361 (electronic book).
Plant small RNA : biogenesis, regulation and application. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9780128173367 (electronic bk.).
Plant microbiomes for sustainable agriculture. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9783030384531 (electronic bk.).
Ocular therapeutics handbook : a clinical manual. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Author: Onofrey, Bruce E. Call number: Online. ISBN: 197510904X.
Neonatal lung ultrasonography. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9789402415490 (electronic bk.).
Nanoencapsulation of food ingredients by specialized equipment. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9780128156728 (electronic bk.).
Multi-body dynamic modeling of multi-legged robots. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Author: Mahapatra, Abhijit. Call number: Online. ISBN: 9789811529535 (electronic bk.).
Molecular biology. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9781493902637.
Molecular aspects of plant beneficial microbes in agriculture. By dal.novanet.ca. Published On: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9780128184707 (electronic bk.).