ia Errata: A survey of Bayesian predictive methods for model assessment, selection and comparison By projecteuclid.org Published On :: Wed, 26 Feb 2014 09:10 EST Aki Vehtari, Janne Ojanen. Source: Statistics Surveys, Volume 8, 1--1. Abstract: Errata for “A survey of Bayesian predictive methods for model assessment, selection and comparison” by A. Vehtari and J. Ojanen, Statistics Surveys, 6 (2012), 142–228. doi:10.1214/12-SS102. Full Article
ia A survey of Bayesian predictive methods for model assessment, selection and comparison By projecteuclid.org Published On :: Thu, 27 Dec 2012 12:22 EST Aki Vehtari, Janne Ojanen. Source: Statist. Surv., Volume 6, 142--228. Abstract: To date, several methods exist in the statistical literature for model assessment which present themselves specifically as Bayesian predictive methods. The decision-theoretic assumptions on which these methods are based are not always clearly stated in the original articles, however. The aim of this survey is to provide a unified review of Bayesian predictive model assessment and selection methods, and of methods closely related to them. We review the various assumptions that are made in this context and discuss the connections between different approaches, with an emphasis on how each method approximates the expected utility of using a Bayesian model for the purpose of predicting future data. Full Article
ia Data confidentiality: A review of methods for statistical disclosure limitation and methods for assessing privacy By projecteuclid.org Published On :: Fri, 04 Feb 2011 09:16 EST Gregory J. Matthews, Ofer Harel. Source: Statist. Surv., Volume 5, 1--29. Abstract: There is an ever-increasing demand from researchers for access to useful microdata files. However, there are also growing concerns regarding the privacy of the individuals contained in the microdata. Ideally, microdata could be released in such a way that a balance between usefulness of the data and privacy is struck. This paper presents a review of proposed methods of statistical disclosure control and techniques for assessing the privacy of such methods under different definitions of disclosure. Full Article
ia The ARMA alphabet soup: A tour of ARMA model variants By projecteuclid.org Published On :: Tue, 07 Dec 2010 09:23 EST Scott H. Holan, Robert Lund, Ginger Davis. Source: Statist. Surv., Volume 4, 232--274. Abstract: Autoregressive moving-average (ARMA) difference equations are ubiquitous models for short memory time series and have parsimoniously described many stationary series. Variants of ARMA models have been proposed to describe more exotic series features such as long memory autocovariances, periodic autocovariances, and count support set structures. This review paper enumerates, compares, and contrasts the common variants of ARMA models in today’s literature. After the basic properties of ARMA models are reviewed, we tour ARMA variants that describe seasonal features, long memory behavior, multivariate series, changing variances (stochastic volatility) and integer counts. A list of ARMA variant acronyms is provided. Full Article
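A minimal illustrative sketch of the basic ARMA machinery the survey builds on (not code from the paper): it simulates an ARMA(2,1) series and re-fits it by maximum likelihood with statsmodels. The orders, coefficients and sample size are arbitrary choices for the example.

import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# ArmaProcess expects the AR and MA lag polynomials including the leading 1,
# with AR coefficients sign-flipped relative to the difference-equation form.
ar = np.array([1.0, -0.6, 0.2])   # X_t = 0.6 X_{t-1} - 0.2 X_{t-2} + ...
ma = np.array([1.0, 0.4])         # ... + e_t + 0.4 e_{t-1}
series = ArmaProcess(ar, ma).generate_sample(nsample=500, distrvs=rng.standard_normal)

# Fit an ARMA(2,1) model (an ARIMA with d = 0) by maximum likelihood.
fit = ARIMA(series, order=(2, 0, 1)).fit()
print(fit.params)        # estimated constant, AR, MA and innovation-variance parameters
print(fit.forecast(10))  # 10-step-ahead point forecasts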
ia Discrete variations of the fractional Brownian motion in the presence of outliers and an additive noise By projecteuclid.org Published On :: Thu, 05 Aug 2010 15:41 EDT Sophie Achard, Jean-François Coeurjolly. Source: Statist. Surv., Volume 4, 117--147. Abstract: This paper gives an overview of the problem of estimating the Hurst parameter of a fractional Brownian motion when the data are observed with outliers and/or with an additive noise by using methods based on discrete variations. We show that the classical estimation procedure based on the log-linearity of the variogram of dilated series is made more robust to outliers and/or an additive noise by considering sample quantiles and trimmed means of the squared series or differences of empirical variances. These different procedures are compared and discussed through a large simulation study and are implemented in the R package dvfBm. Full Article
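A rough sketch of the idea behind the discrete-variations estimator reviewed above, written in plain Python rather than the dvfBm R package: the variance of dilated second-order differences of a fractional Brownian motion scales as m^{2H} in the dilation m, so a log-log regression recovers the Hurst parameter H. The simulation method (Cholesky factor of the exact covariance), sample size and true H are illustrative choices.

import numpy as np

def simulate_fbm(n, H, rng):
    # Exact simulation via the Cholesky factor of the fBm covariance matrix.
    t = np.arange(1, n + 1) / n
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def hurst_from_variations(x, dilations=(1, 2, 3, 4, 5)):
    log_m, log_v = [], []
    for m in dilations:
        d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]   # dilated second-order differences
        log_m.append(np.log(m))
        log_v.append(np.log(np.mean(d ** 2)))      # robust variants would use quantiles or trimmed means here
    slope = np.polyfit(log_m, log_v, 1)[0]         # log V_m ~ 2H log m + const
    return slope / 2

rng = np.random.default_rng(1)
x = simulate_fbm(1000, H=0.7, rng=rng)
print(hurst_from_variations(x))   # should be close to 0.7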
ia Generating Thermal Image Data Samples using 3D Facial Modelling Techniques and Deep Learning Methodologies. (arXiv:2005.01923v2 [cs.CV] UPDATED) By arxiv.org Published On :: Methods for generating synthetic data have become increasingly important for building the large datasets required by Convolutional Neural Network (CNN) based deep learning techniques across a wide range of computer vision applications. In this work, we extend existing methodologies to show how 2D thermal facial data can be mapped to 3D facial models. We use the Tufts dataset to generate varying 3D face poses from a single frontal face pose. The system first refines image quality through fusion-based preprocessing operations; the refined outputs have better contrast, a lower noise level and better exposure of dark regions, making the facial landmarks and temperature patterns on the human face more discernible than in the original raw data. Different image quality metrics are used to compare the refined images with the originals. In the next phase of the study, the refined images are used to create 3D facial geometry structures with CNNs. The generated outputs are then imported into Blender to extract 3D thermal facial outputs for both males and females. The same technique is applied to our own thermal face data, acquired with a prototype thermal camera (developed under the Heliaus EU project) in an indoor lab environment, to generate synthetic 3D face data with varying yaw angles, and finally a facial depth map is generated. Full Article
ia Interpreting Rate-Distortion of Variational Autoencoder and Using Model Uncertainty for Anomaly Detection. (arXiv:2005.01889v2 [cs.LG] UPDATED) By arxiv.org Published On :: Building a scalable machine learning system for unsupervised anomaly detection via representation learning is highly desirable. One of the prevalent methods is to use the reconstruction error of a variational autoencoder (VAE) trained by maximizing the evidence lower bound. We revisit the VAE from the perspective of information theory to provide some theoretical foundations for using the reconstruction error, and finally arrive at a simpler and more effective model for anomaly detection. In addition, to enhance the effectiveness of detecting anomalies, we incorporate a practical model uncertainty measure into the metric. We show empirically the competitive performance of our approach on benchmark datasets. Full Article
ia How many modes can a constrained Gaussian mixture have? (arXiv:2005.01580v2 [math.ST] UPDATED) By arxiv.org Published On :: We show, by an explicit construction, that a mixture of univariate Gaussians with variance 1 and means in $[-A,A]$ can have $\Omega(A^2)$ modes. This disproves a recent conjecture of Dytso, Yagli, Poor and Shamai [IEEE Trans. Inform. Theory, Apr. 2020], who showed that such a mixture can have at most $O(A^2)$ modes and surmised that the upper bound could be improved to $O(A)$. Our result holds even if an additional variance constraint is imposed on the mixing distribution. Extending the result to higher dimensions, we exhibit a mixture of Gaussians in $\mathbb{R}^d$, with identity covariances and means inside $[-A,A]^d$, that has $\Omega(A^{2d})$ modes. Full Article
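A small numerical companion to the statement above (the means and weights are arbitrary, not the paper's construction): it counts the local maxima of a mixture of unit-variance Gaussians with means in [-A, A] by scanning the density on a fine grid.

import numpy as np

def mixture_density(x, means, weights):
    # One unit-variance Gaussian component per mean.
    comps = np.exp(-0.5 * (x[:, None] - means[None, :]) ** 2) / np.sqrt(2 * np.pi)
    return comps @ weights

def count_modes(means, weights, A, grid_size=200_001):
    x = np.linspace(-A - 5, A + 5, grid_size)
    f = mixture_density(x, means, weights)
    is_peak = (f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])
    return int(np.sum(is_peak))

A = 10.0
means = np.linspace(-A, A, 7)            # well-separated means, one mode each
weights = np.full(len(means), 1 / len(means))
print(count_modes(means, weights, A))    # 7 here; the paper constructs mixtures with Omega(A^2) modes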
ia Short-term forecasts of COVID-19 spread across Indian states until 1 May 2020. (arXiv:2004.13538v2 [q-bio.PE] UPDATED) By arxiv.org Published On :: The first case of coronavirus illness in India was recorded on 30 January 2020, and the number of infected cases, including the death toll, continues to rise. In this paper, we present short-term forecasts of COVID-19 for 28 Indian states and five union territories using real-time data from 30 January to 21 April 2020. Applying Holt's second-order exponential smoothing method and an autoregressive integrated moving average (ARIMA) model, we generate 10-day-ahead forecasts of the likely number of infected cases and deaths in India for 22 April to 1 May 2020. Our results show that the number of cumulative cases in India will rise to 36335.63 [PI 95% (30884.56, 42918.87)], while the number of deaths may increase to 1099.38 [PI 95% (959.77, 1553.76)] by 1 May 2020. Further, we have divided the country into severity zones based on the cumulative cases. According to this analysis, Maharashtra is likely to be the most affected state, with around 9787.24 [PI 95% (6949.81, 13757.06)] cumulative cases by 1 May 2020. However, Kerala and Karnataka are likely to shift from the red zone (i.e. highly affected) to a less affected zone. On the other hand, Gujarat and Madhya Pradesh will move into the red zone. These results mark the states where the lockdown, in place until 3 May 2020, can be loosened. Full Article
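A minimal sketch of the two forecasting tools named in the abstract, applied to a synthetic cumulative-count series (the data below are made up, not the Indian case counts): Holt's exponential smoothing and an ARIMA model, each producing 10-day-ahead forecasts. The ARIMA order and smoothing options are illustrative assumptions, not the paper's settings.

import numpy as np
from statsmodels.tsa.holtwinters import Holt
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
days = np.arange(60)
cum_cases = np.cumsum(np.round(np.exp(0.08 * days)) + rng.poisson(5, size=60))

# Holt's exponential smoothing with an exponential trend.
holt_fit = Holt(cum_cases, exponential=True).fit(optimized=True)
print(holt_fit.forecast(10))              # 10-day-ahead point forecasts

# ARIMA on the cumulative series, with 95% prediction intervals.
arima_fit = ARIMA(cum_cases, order=(1, 2, 1)).fit()
fc = arima_fit.get_forecast(10)
print(fc.predicted_mean)
print(fc.conf_int(alpha=0.05))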
ia A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging. (arXiv:2004.12314v3 [cs.CV] UPDATED) By arxiv.org Published On :: Segmentation of cardiac images, particularly late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) widely used for visualizing diseased cardiac structures, is a crucial first step for clinical diagnosis and treatment. However, direct segmentation of LGE-MRIs is challenging due to their attenuated contrast. Since most clinical studies have relied on manual and labor-intensive approaches, automatic methods are of high interest, particularly optimized machine learning approaches. To address this, we organized the "2018 Left Atrium Segmentation Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset, and associated labels of the left atrium segmented by three medical experts, ultimately attracting the participation of 27 international teams. In this paper, extensive analysis of the submitted algorithms using technical and biological metrics was performed through subgroup analysis and hyper-parameter analysis, offering an overall picture of the major design choices of convolutional neural networks (CNNs) and practical considerations for achieving state-of-the-art left atrium segmentation. Results show the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm, significantly outperforming prior state-of-the-art. In particular, our analysis demonstrated that double, sequentially used CNNs, in which a first CNN is used for automatic region-of-interest localization and a subsequent CNN is used for refined regional segmentation, achieved far better results than traditional methods and pipelines containing single CNNs. This large-scale benchmarking study makes a significant step towards much-improved segmentation methods for cardiac LGE-MRIs, and will serve as an important benchmark for evaluating and comparing future work in the field. Full Article
ia Covariance Matrix Adaptation for the Rapid Illumination of Behavior Space. (arXiv:1912.02400v2 [cs.LG] UPDATED) By arxiv.org Published On :: We focus on the challenge of finding a diverse collection of quality solutions on complex continuous domains. While quality diversity (QD) algorithms like Novelty Search with Local Competition (NSLC) and MAP-Elites are designed to generate a diverse range of solutions, these algorithms require a large number of evaluations for exploration of continuous spaces. Meanwhile, variants of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are among the best-performing derivative-free optimizers in single-objective continuous domains. This paper proposes a new QD algorithm called Covariance Matrix Adaptation MAP-Elites (CMA-ME). Our new algorithm combines the self-adaptation techniques of CMA-ES with archiving and mapping techniques for maintaining diversity in QD. Results from experiments based on standard continuous optimization benchmarks show that CMA-ME finds better-quality solutions than MAP-Elites; similarly, results on the strategic game Hearthstone show that CMA-ME finds both a higher overall quality and broader diversity of strategies than both CMA-ES and MAP-Elites. Overall, CMA-ME more than doubles the performance of MAP-Elites using standard QD performance metrics. These results suggest that QD algorithms augmented by operators from state-of-the-art optimization algorithms can yield high-performing methods for simultaneously exploring and optimizing continuous search spaces, with significant applications to design, testing, and reinforcement learning among other domains. Full Article
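A toy MAP-Elites loop to make the archive idea concrete; it is not CMA-ME itself. CMA-ME replaces the fixed Gaussian mutation below with CMA-ES "emitters" whose search distributions adapt. The objective, behavior descriptor and grid resolution are arbitrary choices for the example.

import numpy as np

rng = np.random.default_rng(0)
dim, cells, iters = 10, 20, 20_000

def objective(x):                # quality to maximize (toy sphere objective)
    return -np.sum(x ** 2)

def behavior(x):                 # 2-D behavior descriptor in [-1, 1]^2
    return np.clip(x[:2], -1.0, 1.0)

def cell_index(b):               # discretize the descriptor into a cells x cells grid
    return tuple(np.minimum(((b + 1) / 2 * cells).astype(int), cells - 1))

archive = {}                     # cell index -> (fitness, solution)
for _ in range(iters):
    if archive and rng.random() < 0.9:
        elite = archive[list(archive)[rng.integers(len(archive))]][1]
        x = elite + 0.1 * rng.standard_normal(dim)    # mutate a random elite
    else:
        x = rng.uniform(-1, 1, dim)                    # occasional random restart
    idx, fit = cell_index(behavior(x)), objective(x)
    if idx not in archive or fit > archive[idx][0]:
        archive[idx] = (fit, x)                        # keep the best solution per cell

print(len(archive), "of", cells * cells, "behavior cells filled")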
ia $V$-statistics and Variance Estimation. (arXiv:1912.01089v2 [stat.ML] UPDATED) By arxiv.org Published On :: This paper develops a general framework for analyzing asymptotics of $V$-statistics. Previous literature on limiting distribution mainly focuses on the cases when $n \to \infty$ with fixed kernel size $k$. Under some regularity conditions, we demonstrate asymptotic normality when $k$ grows with $n$ by utilizing existing results for $U$-statistics. The key in our approach lies in a mathematical reduction to $U$-statistics by designing an equivalent kernel for $V$-statistics. We also provide a unified treatment on variance estimation for both $U$- and $V$-statistics by observing connections to existing methods and proposing an empirically more accurate estimator. Ensemble methods such as random forests, where multiple base learners are trained and aggregated for prediction purposes, serve as a running example throughout the paper because they are a natural and flexible application of $V$-statistics. Full Article
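A small numerical illustration (synthetic data) of the distinction between a V-statistic and a U-statistic for the size-2 variance kernel h(x, y) = (x - y)^2 / 2: the V-statistic averages the kernel over all index pairs, including i = j, while the U-statistic excludes them and here equals the unbiased sample variance.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
n = len(x)

h = 0.5 * (x[:, None] - x[None, :]) ** 2     # kernel evaluated on every pair

v_stat = h.mean()                            # V-statistic: all pairs, i = j included
u_stat = h[~np.eye(n, dtype=bool)].mean()    # U-statistic: i != j only

print(u_stat, x.var(ddof=1))                 # identical: the unbiased sample variance
print(v_stat, x.var(ddof=0))                 # the V-statistic is the biased version
print(v_stat - u_stat)                       # the gap vanishes at rate O(1/n)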
ia Bayesian factor models for multivariate categorical data obtained from questionnaires. (arXiv:1910.04283v2 [stat.AP] UPDATED) By arxiv.org Published On :: Factor analysis is a flexible technique for assessment of multivariate dependence and codependence. Besides being an exploratory tool used to reduce the dimensionality of multivariate data, it allows estimation of common factors that often have an interesting theoretical interpretation in real problems. However, standard factor analysis is only applicable when the variables are scaled, which is often inappropriate, for example, in data obtained from questionnaires in the field of psychology, where the variables are often categorical. In this framework, we propose a factor model for the analysis of multivariate ordered and non-ordered polychotomous data. The inference procedure is carried out under the Bayesian approach via Markov chain Monte Carlo methods. Two Monte Carlo simulation studies are presented to investigate the performance of this approach in terms of estimation bias, precision and assessment of the number of factors. We also illustrate the proposed method by analyzing participants' responses to the Motivational State Questionnaire dataset, developed to study emotions in laboratory and field settings. Full Article
ia Differentiable Sparsification for Deep Neural Networks. (arXiv:1910.03201v2 [cs.LG] UPDATED) By arxiv.org Published On :: Deep neural networks have relieved human experts of much of the burden of feature engineering, but comparable effort is now required to determine an effective architecture. On the other hand, as network sizes have grown, considerable resources are also invested in reducing their size. These problems can be addressed by sparsification of an over-complete model, which removes redundant parameters or connections by pruning them away after training or encouraging them to become zero during training. In general, however, these approaches are not fully differentiable and interrupt end-to-end training with stochastic gradient descent, in that they require either a parameter-selection or a soft-thresholding step. In this paper, we propose a fully differentiable sparsification method for deep neural networks that allows parameters to be exactly zero during training, and thus can learn the sparsified structure and the weights of a network simultaneously using stochastic gradient descent. We apply the proposed method to various popular models in order to show its effectiveness. Full Article
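An illustrative sketch of the general idea, and explicitly not the paper's exact construction: each weight is multiplied by a ReLU gate, so a gate pushed below zero by an L1 penalty makes the effective weight exactly zero while the whole model remains trainable end-to-end with SGD. The toy regression task, gate parameterization and penalty strength are all assumptions made for the example.

import torch

torch.manual_seed(0)
n, d = 500, 20
X = torch.randn(n, d)
true_w = torch.zeros(d)
true_w[:3] = torch.tensor([2.0, -1.5, 0.5])          # only 3 informative features
y = X @ true_w + 0.1 * torch.randn(n)

w = torch.randn(d, requires_grad=True)                # free weights
g = torch.full((d,), 0.5, requires_grad=True)         # gate parameters

opt = torch.optim.SGD([w, g], lr=0.05)
for _ in range(3000):
    opt.zero_grad()
    w_eff = w * torch.relu(g)                         # exactly zero once a gate reaches g <= 0
    loss = ((X @ w_eff - y) ** 2).mean() + 0.02 * torch.relu(g).sum()
    loss.backward()
    opt.step()

w_eff = (w * torch.relu(g)).detach()
print(int((w_eff != 0).sum()), "non-zero effective weights (ideally about 3)")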
ia DualSMC: Tunneling Differentiable Filtering and Planning under Continuous POMDPs. (arXiv:1909.13003v4 [cs.LG] UPDATED) By arxiv.org Published On :: A major difficulty of solving continuous POMDPs is to infer the multi-modal distribution of the unobserved true states and to make the planning algorithm dependent on the perceived uncertainty. We cast POMDP filtering and planning problems as two closely related Sequential Monte Carlo (SMC) processes, one over the real states and the other over the future optimal trajectories, and combine the merits of these two parts in a new model named the DualSMC network. In particular, we first introduce an adversarial particle filter that leverages the adversarial relationship between its internal components. Based on the filtering results, we then propose a planning algorithm that extends the previous SMC planning approach [Piche et al., 2018] to continuous POMDPs with an uncertainty-dependent policy. Crucially, not only can DualSMC handle complex observations such as image input but also it remains highly interpretable. It is shown to be effective in three continuous POMDP domains: the floor positioning domain, the 3D light-dark navigation domain, and a modified Reacher domain. Full Article
ia Estimating drift parameters in a non-ergodic Gaussian Vasicek-type model. (arXiv:1909.06155v2 [math.PR] UPDATED) By arxiv.org Published On :: We study the problem of parameter estimation for a non-ergodic Gaussian Vasicek-type model defined as $dX_t=(\mu+\theta X_t)\,dt+dG_t$, $t\geq 0$, with unknown parameters $\theta>0$ and $\mu\in\mathbb{R}$, where $G$ is a Gaussian process. We provide least-squares-type estimators $\widetilde{\theta}_T$ and $\widetilde{\mu}_T$ for the drift parameters $\theta$ and $\mu$, respectively, based on continuous-time observations $\{X_t,\ t\in[0,T]\}$ as $T\rightarrow\infty$. Our aim is to derive sufficient conditions on the driving Gaussian process $G$ ensuring that $\widetilde{\theta}_T$ and $\widetilde{\mu}_T$ are strongly consistent, the limit distribution of $\widetilde{\theta}_T$ is a Cauchy-type distribution, and $\widetilde{\mu}_T$ is asymptotically normal. We apply our results to fractional Vasicek, subfractional Vasicek and bifractional Vasicek processes. In addition, this work extends the result of \cite{EEO}, which studied the case where $\mu=0$. Full Article
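A minimal simulation sketch of the least-squares-type drift estimation described above, taking the driving Gaussian process G to be a standard Brownian motion (a special case of the general setting); the continuous-time estimators are approximated by regressing the increments dX_t on (1, X_t) dt over a fine Euler grid, and all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)
theta, mu = 0.5, 1.0                 # true drift parameters (theta > 0: non-ergodic case)
T, n = 10.0, 100_000
dt = T / n

x = np.zeros(n + 1)
for k in range(n):                   # Euler scheme for dX_t = (mu + theta X_t) dt + dG_t
    x[k + 1] = x[k] + (mu + theta * x[k]) * dt + np.sqrt(dt) * rng.standard_normal()

dx = np.diff(x)
design = np.column_stack([np.full(n, dt), x[:-1] * dt])     # columns: mu term, theta term
mu_hat, theta_hat = np.linalg.lstsq(design, dx, rcond=None)[0]
print(theta_hat, mu_hat)             # rough recovery of theta = 0.5 and mu = 1.0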
ia Additive Bayesian variable selection under censoring and misspecification. (arXiv:1907.13563v3 [stat.ME] UPDATED) By arxiv.org Published On :: We study the interplay of two important issues on Bayesian model selection (BMS): censoring and model misspecification. We consider additive accelerated failure time (AAFT), Cox proportional hazards and probit models, and a more general concave log-likelihood structure. A fundamental question is what solution can one hope BMS to provide, when (inevitably) models are misspecified. We show that asymptotically BMS keeps any covariate with predictive power for either the outcome or censoring times, and discards other covariates. Misspecification refers to assuming the wrong model or functional effect on the response, including using a finite basis for a truly non-parametric effect, or omitting truly relevant covariates. We argue for using simple models that are computationally practical yet attain good power to detect potentially complex effects, despite misspecification. Misspecification and censoring both have an asymptotically negligible effect on (suitably-defined) false positives, but their impact on power is exponential. We portray these issues via simple descriptions of early/late censoring and the drop in predictive accuracy due to misspecification. From a methods point of view, we consider local priors and a novel structure that combines local and non-local priors to enforce sparsity. We develop algorithms to capitalize on the AAFT tractability, approximations to AAFT and probit likelihoods giving significant computational gains, a simple augmented Gibbs sampler to hierarchically explore linear and non-linear effects, and an implementation in the R package mombf. We illustrate the proposed methods and others based on likelihood penalties via extensive simulations under misspecification and censoring. We present two applications concerning the effect of gene expression on colon and breast cancer. Full Article
ia Multi-scale analysis of lead-lag relationships in high-frequency financial markets. (arXiv:1708.03992v3 [stat.ME] UPDATED) By arxiv.org Published On :: We propose a novel estimation procedure for scale-by-scale lead-lag relationships of financial assets observed at high-frequency in a non-synchronous manner. The proposed estimation procedure does not require any interpolation processing of original datasets and is applicable to those with highest time resolution available. Consistency of the proposed estimators is shown under the continuous-time framework that has been developed in our previous work Hayashi and Koike (2018). An empirical application to a quote dataset of the NASDAQ-100 assets identifies two types of lead-lag relationships at different time scales. Full Article
ia Nonstationary Bayesian modeling for a large data set of derived surface temperature return values. (arXiv:2005.03658v1 [stat.ME]) By arxiv.org Published On :: Heat waves resulting from prolonged extreme temperatures pose a significant risk to human health globally. Given the limitations of observations of extreme temperature, climate models are often used to characterize extreme temperature globally, from which one can derive quantities like return values to summarize the magnitude of a low probability event for an arbitrary geographic location. However, while these derived quantities are useful on their own, it is also often important to apply a spatial statistical model to such data in order to, e.g., understand how the spatial dependence properties of the return values vary over space and emulate the climate model for generating additional spatial fields with corresponding statistical properties. For these objectives, when modeling global data it is critical to use a nonstationary covariance function. Furthermore, given that the output of modern global climate models can be on the order of $\mathcal{O}(10^4)$, it is important to utilize approximate Gaussian process methods to enable inference. In this paper, we demonstrate the application of methodology introduced in Risser and Turek (2020) to conduct a nonstationary and fully Bayesian analysis of a large data set of 20-year return values derived from an ensemble of global climate model runs with over 50,000 spatial locations. This analysis uses the freely available BayesNSGP software package for R. Full Article
ia Local Cascade Ensemble for Multivariate Data Classification. (arXiv:2005.03645v1 [cs.LG]) By arxiv.org Published On :: We present LCE, a Local Cascade Ensemble for traditional (tabular) multivariate data classification, and its extension LCEM for Multivariate Time Series (MTS) classification. LCE is a new hybrid ensemble method that combines an explicit boosting-bagging approach to handle the usual bias-variance tradeoff faced by machine learning models and an implicit divide-and-conquer approach to individualize classifier errors on different parts of the training data. Our evaluation firstly shows that the hybrid ensemble method LCE outperforms the state-of-the-art classifiers on the UCI datasets and that LCEM outperforms the state-of-the-art MTS classifiers on the UEA datasets. Furthermore, LCEM provides explainability by design and manifests robust performance when faced with challenges arising from continuous data collection (different MTS length, missing data and noise). Full Article
ia Know Your Clients' behaviours: a cluster analysis of financial transactions. (arXiv:2005.03625v1 [econ.EM]) By arxiv.org Published On :: In Canada, financial advisors and dealers are required by provincial securities commissions, and by the self-regulatory organizations charged with direct regulation over investment dealers and mutual fund dealers respectively, to collect and maintain Know Your Client (KYC) information, such as their age or risk tolerance, for investor accounts. With this information, investors, under their advisor's guidance, make decisions on their investments which are presumed to be beneficial to their investment goals. Our unique dataset is provided by a financial investment dealer with over 50,000 accounts for over 23,000 clients. We use a modified behavioural finance recency, frequency, monetary model for engineering features that quantify investor behaviours, and machine learning clustering algorithms to find groups of investors that behave similarly. We show that the KYC information collected does not explain client behaviours, whereas trade and transaction frequency and volume are most informative. We believe the results shown herein encourage financial regulators and advisors to use more advanced metrics to better understand and predict investor behaviours. Full Article
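A hedged sketch of the recency-frequency-monetary style feature engineering and clustering described above; the transaction table, column names and number of clusters are hypothetical and only illustrate the pipeline, not the paper's exact model.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical transaction-level data: one row per trade.
tx = pd.DataFrame({
    "account_id": [1, 1, 2, 2, 2, 3],
    "days_since_trade": [5, 40, 2, 3, 10, 300],
    "amount": [1000, 500, 200, 250, 300, 10_000],
})

# Per-account behavioural features: recency, frequency and monetary volume.
rfm = tx.groupby("account_id").agg(
    recency=("days_since_trade", "min"),
    frequency=("account_id", "size"),
    monetary=("amount", "sum"),
)

features = StandardScaler().fit_transform(rfm)
rfm["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(rfm)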
ia A simulation study of disaggregation regression for spatial disease mapping. (arXiv:2005.03604v1 [stat.AP]) By arxiv.org Published On :: Disaggregation regression has become an important tool in spatial disease mapping for making fine-scale predictions of disease risk from aggregated response data. By including high resolution covariate information and modelling the data generating process on a fine scale, it is hoped that these models can accurately learn the relationships between covariates and response at a fine spatial scale. However, validating these high resolution predictions can be a challenge, as often there is no data observed at this spatial scale. In this study, disaggregation regression was performed on simulated data in various settings and the resulting fine-scale predictions are compared to the simulated ground truth. Performance was investigated with varying numbers of data points, sizes of aggregated areas and levels of model misspecification. The effectiveness of cross validation on the aggregate level as a measure of fine-scale predictive performance was also investigated. Predictive performance improved as the number of observations increased and as the size of the aggregated areas decreased. When the model was well-specified, fine-scale predictions were accurate even with small numbers of observations and large aggregated areas. Under model misspecification predictive performance was significantly worse for large aggregated areas but remained high when response data was aggregated over smaller regions. Cross-validation correlation on the aggregate level was a moderately good predictor of fine-scale predictive performance. While the simulations are unlikely to capture the nuances of real-life response data, this study gives insight into the effectiveness of disaggregation regression in different contexts. Full Article
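A toy, well-specified illustration of disaggregation regression (not the simulation design of the study above): pixel-level Poisson rates depend log-linearly on a high-resolution covariate, only area-level totals are observed, and the pixel-level coefficients are recovered by maximizing the aggregated Poisson likelihood. Grid size, number of areas and coefficients are arbitrary.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_pixels, n_areas = 2000, 40
s = np.linspace(0, 1, n_pixels)                              # 1-D spatial coordinate
area = np.minimum((s * n_areas).astype(int), n_areas - 1)    # contiguous aggregation areas
x = np.sin(2 * np.pi * s) + 0.3 * rng.normal(size=n_pixels)  # fine-scale covariate

b0_true, b1_true = -1.0, 0.8
pixel_rate = np.exp(b0_true + b1_true * x)
y_area = rng.poisson(np.bincount(area, weights=pixel_rate, minlength=n_areas))

def neg_loglik(beta):
    lam = np.bincount(area, weights=np.exp(beta[0] + beta[1] * x), minlength=n_areas)
    return np.sum(lam - y_area * np.log(lam))                # Poisson NLL up to a constant

fit = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
print(fit.x)                                                 # roughly (-1.0, 0.8)
print(np.corrcoef(pixel_rate, np.exp(fit.x[0] + fit.x[1] * x))[0, 1])  # fine-scale fit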
ia Predictive Modeling of ICU Healthcare-Associated Infections from Imbalanced Data. Using Ensembles and a Clustering-Based Undersampling Approach. (arXiv:2005.03582v1 [cs.LG]) By arxiv.org Published On :: Early detection of patients vulnerable to infections acquired in the hospital environment is a challenge in current health systems given the impact that such infections have on patient mortality and healthcare costs. This work is focused on both the identification of risk factors and the prediction of healthcare-associated infections in intensive-care units by means of machine-learning methods. The aim is to support decision making addressed at reducing the incidence rate of infections. In this field, it is necessary to deal with the problem of building reliable classifiers from imbalanced datasets. We propose a clustering-based undersampling strategy to be used in combination with ensemble classifiers. A comparative study with data from 4616 patients was conducted in order to validate our proposal. We applied several single and ensemble classifiers both to the original dataset and to data preprocessed by means of different resampling methods. The results were analyzed by means of classic and recent metrics specifically designed for imbalanced data classification. They revealed that the proposal is more efficient in comparison with other approaches. Full Article
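A generic rendering of the clustering-based undersampling idea is sketched below: cluster the majority class, draw a balanced sample across clusters, then fit an ensemble classifier. Data, cluster count and classifier are assumptions for illustration only and do not reproduce the authors' exact procedure or their ICU dataset.

```python
# Sketch of clustering-based undersampling: cluster the majority class, keep a
# balanced sample drawn across clusters, then fit an ensemble classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

maj, mino = X_tr[y_tr == 0], X_tr[y_tr == 1]
k = len(mino)                                   # target size = minority-class size
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(maj)

# Take roughly equal numbers of majority samples from each cluster.
rng = np.random.default_rng(0)
keep = np.concatenate([
    rng.choice(np.where(labels == c)[0],
               size=min(k // 10, (labels == c).sum()), replace=False)
    for c in range(10)
])
X_bal = np.vstack([maj[keep], mino])
y_bal = np.concatenate([np.zeros(len(keep)), np.ones(len(mino))])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(balanced_accuracy_score(y_te, clf.predict(X_te)))
```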
ia Noisy Differentiable Architecture Search. (arXiv:2005.03566v1 [cs.LG]) By arxiv.org Published On :: Simplicity is the ultimate sophistication. Differentiable Architecture Search (DARTS) has become one of the mainstream paradigms of neural architecture search. However, it suffers from several destabilizing factors in the optimization process that make its results hard to reproduce. FairDARTS points out that skip connections natively have an unfair advantage in the exclusive competition among operations, which primarily leads to dramatic performance collapse. While FairDARTS turns the unfair competition into a collaborative one, we instead suppress this unfair advantage by injecting unbiased random noise into the output of skip operations. In effect, the optimizer perceives this difficulty at each training step and refrains from overshooting on skip connections, yet in the long run it still converges to the right solution region since no bias is added to the gradient. We name this novel approach NoisyDARTS. Our experiments on CIFAR-10 and ImageNet attest that it can effectively break the skip connection's unfair advantage and yield better performance, producing a series of models that achieve state-of-the-art results on both datasets. Full Article
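A minimal PyTorch rendering of the noisy-skip idea is sketched below: zero-mean Gaussian noise is added to the skip (identity) operation during search, so the skip path is dampened without biasing the gradient in expectation. This is not the official NoisyDARTS code, and the noise scale is an assumed, illustrative value.

```python
# Minimal sketch of a "noisy" skip operation: unbiased noise on the identity
# path during training, clean identity at evaluation time. Not the official
# NoisyDARTS implementation; the noise scale is illustrative.
import torch
import torch.nn as nn

class NoisySkip(nn.Module):
    def __init__(self, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std

    def forward(self, x):
        if self.training:
            return x + torch.randn_like(x) * self.noise_std  # zero-mean noise
        return x                                             # no noise at eval time

op = NoisySkip()
op.train()
print(op(torch.ones(2, 3)).shape)
```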
ia Sequential Aggregation of Probabilistic Forecasts -- Application to Wind Speed Ensemble Forecasts. (arXiv:2005.03540v1 [stat.AP]) By arxiv.org Published On :: In the field of numerical weather prediction (NWP), the probabilistic distribution of the future state of the atmosphere is sampled with Monte-Carlo-like simulations, called ensembles. These ensembles have deficiencies (such as conditional biases) that can be corrected thanks to statistical post-processing methods. Several ensembles exist and may be corrected with different statistical methods. A further step is to combine these raw or post-processed ensembles. The theory of prediction with expert advice allows us to build combination algorithms with theoretical guarantees on the forecast performance. This article adapts this theory to the case of probabilistic forecasts issued as step-wise cumulative distribution functions (CDF). The theory is applied to wind speed forecasting, by combining several raw or post-processed ensembles, considered as CDFs. The second goal of this study is to explore the use of two forecast performance criteria: the Continuous ranked probability score (CRPS) and the Jolliffe-Primo test. Comparing the results obtained with both criteria leads to reconsidering the usual way of building skillful probabilistic forecasts, based on minimization of the CRPS. Minimizing the CRPS does not necessarily produce reliable forecasts according to the Jolliffe-Primo test. The Jolliffe-Primo test generally selects reliable forecasts, but can lead to issuing suboptimal forecasts in terms of CRPS. We propose using both criteria to achieve reliable and skillful probabilistic forecasts. Full Article
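The sketch below shows the CRPS of a step-wise (ensemble) CDF and an exponentially weighted combination of two toy ensemble forecasters whose weights are updated from past CRPS losses. The synthetic data and the learning rate are assumptions, and this simple exponentially weighted forecaster is only a stand-in for the aggregation algorithms studied in the article.

```python
# CRPS of a discrete forecast distribution: E|X - y| - 0.5 E|X - X'|,
# plus an exponentially weighted mixture of two ensemble CDFs.
import numpy as np

def crps(members, weights, y):
    w = weights / weights.sum()
    term1 = np.sum(w * np.abs(members - y))
    term2 = 0.5 * np.sum(np.outer(w, w) * np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(1)
eta, losses, total = 2.0, np.zeros(2), 0.0      # learning rate, cumulative expert losses
for t in range(100):
    y = rng.normal(10.0, 2.0)                   # observed wind speed
    experts = [rng.normal(10.0, 2.0, 20),       # a well-calibrated ensemble
               rng.normal(12.0, 2.0, 20)]       # a biased ensemble
    w_exp = np.exp(-eta * losses); w_exp /= w_exp.sum()
    # Combined forecast CDF = weighted mixture of the two empirical CDFs.
    members = np.concatenate(experts)
    weights = np.concatenate([np.full(20, w_exp[0] / 20), np.full(20, w_exp[1] / 20)])
    total += crps(members, weights, y)
    losses += np.array([crps(e, np.ones(20), y) for e in experts])

print("final expert weights:", w_exp, "mean combined CRPS:", total / 100)
```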
ia Robust location estimators in regression models with covariates and responses missing at random. (arXiv:2005.03511v1 [stat.ME]) By arxiv.org Published On :: This paper deals with robust marginal estimation under a general regression model when missing data occur in the response and also in some of the covariates. The target is a marginal location parameter given through an $M$-functional. To obtain robust Fisher-consistent estimators, properly defined marginal distribution function estimators are considered. These estimators avoid the bias due to missing values by assuming a missing at random condition. Three methods are considered to estimate the marginal distribution function, which allow us to obtain the $M$-location of interest: the well-known inverse probability weighting, a convolution-based method that makes use of the regression model, and an augmented inverse probability weighting procedure that protects against misspecification. The proposed robust estimators and the classical ones are compared through a numerical study under different missing-data models, including clean and contaminated samples. We illustrate the estimators' behaviour under a nonlinear model. A real data set is also analysed. Full Article
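As a minimal sketch of the inverse-probability-weighting idea for a marginal location, the snippet below estimates a marginal median (one particular M-location) when the response is missing at random given a fully observed covariate. The data-generating model is an assumption, and the paper's convolution-based and augmented IPW estimators, as well as its robust M-functionals, are not reproduced.

```python
# IPW sketch: estimate the response probability from the covariate, then take a
# weighted median of the observed responses with weights 1 / pi_hat.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)            # full responses (marginal median = 1)
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x)))          # MAR: missingness depends on x only
observed = rng.uniform(size=n) < p_obs

# Step 1: estimate observation probabilities from the covariate.
pi_hat = LogisticRegression().fit(x[:, None], observed).predict_proba(x[:, None])[:, 1]

# Step 2: weighted median over the observed cases.
yo, wo = y[observed], 1.0 / pi_hat[observed]
order = np.argsort(yo)
cumw = np.cumsum(wo[order]) / wo.sum()
ipw_median = yo[order][np.searchsorted(cumw, 0.5)]

print("naive median:", np.median(yo), " IPW median:", ipw_median)
```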
ia Fast multivariate empirical cumulative distribution function with connection to kernel density estimation. (arXiv:2005.03246v1 [cs.DS]) By arxiv.org Published On :: This paper revisits the problem of computing empirical cumulative distribution functions (ECDF) efficiently on large, multivariate datasets. Computing an ECDF at one evaluation point requires $\mathcal{O}(N)$ operations on a dataset composed of $N$ data points. Therefore, a direct evaluation of ECDFs at $N$ evaluation points requires a quadratic $\mathcal{O}(N^2)$ number of operations, which is prohibitive for large-scale problems. Two fast and exact methods are proposed and compared. The first one is based on fast summation in lexicographical order, with a $\mathcal{O}(N \log N)$ complexity and requires the evaluation points to lie on a regular grid. The second one is based on the divide-and-conquer principle, with a $\mathcal{O}(N \log(N)^{(d-1) \vee 1})$ complexity and requires the evaluation points to coincide with the input points. The two fast algorithms are described and detailed in the general $d$-dimensional case, and numerical experiments validate their speed and accuracy. Secondly, the paper establishes a direct connection between cumulative distribution functions and kernel density estimation (KDE) for a large class of kernels. This connection paves the way for fast exact algorithms for multivariate kernel density estimation and kernel regression. Numerical tests with the Laplacian kernel validate the speed and accuracy of the proposed algorithms. A broad range of large-scale multivariate density estimation, cumulative distribution estimation, survival function estimation and regression problems can benefit from the proposed numerical methods. Full Article
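To make the quadratic-versus-quasilinear contrast concrete, here is a bivariate sketch of a fast exact ECDF evaluated at the data points themselves. It sorts on the first coordinate and counts dominated points with a Fenwick (binary indexed) tree over ranks of the second coordinate, giving O(N log N) against O(N^2) for the direct double loop. It assumes no ties and is an order-statistics variant written for illustration, not the authors' divide-and-conquer code.

```python
# Fast exact 2D ECDF at the data points via sorting + Fenwick tree (no ties assumed).
import numpy as np

def ecdf_2d(points):
    n = len(points)
    yrank = np.argsort(np.argsort(points[:, 1]))     # ranks of y, 0..n-1
    tree = np.zeros(n + 1)

    def update(i):                                   # add 1 at rank i (1-based tree)
        i += 1
        while i <= n:
            tree[i] += 1
            i += i & (-i)

    def query(i):                                    # count of inserted ranks <= i
        i += 1
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & (-i)
        return s

    ecdf = np.empty(n)
    for idx in np.argsort(points[:, 0]):             # process in increasing x
        ecdf[idx] = (query(yrank[idx]) + 1) / n      # +1 counts the point itself
        update(yrank[idx])
    return ecdf

pts = np.random.default_rng(0).normal(size=(1000, 2))
fast = ecdf_2d(pts)
brute = np.mean((pts[:, None, 0] <= pts[:, 0]) & (pts[:, None, 1] <= pts[:, 1]), axis=0)
print(np.allclose(fast, brute))
```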
ia Classification of pediatric pneumonia using chest X-rays by functional regression. (arXiv:2005.03243v1 [stat.AP]) By arxiv.org Published On :: An accurate and prompt diagnosis of pediatric pneumonia is imperative for successful treatment intervention. One approach to diagnosing pneumonia cases is using radiographic data. In this article, we propose a novel parsimonious scalar-on-image classification model adopting the ideas of functional data analysis. Our main idea is to treat images as functional measurements and exploit the underlying covariance structure to select basis functions; these bases are then used to approximate both the image profiles and the corresponding regression coefficient. We re-express the regression model as a standard generalized linear model in which the functional principal component scores are treated as covariates. We apply the method to (1) classify pneumonia patients against healthy controls and viral against bacterial pneumonia, and (2) test the null hypothesis of no association between images and responses. Extensive simulation studies show excellent numerical performance in terms of classification, hypothesis testing, and computational efficiency. Full Article
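A crude stand-in for the scalar-on-image idea is sketched below: images are vectorized, principal component scores (a rough analogue of functional principal components) are extracted, and those scores enter a logistic regression. The digits data are only a placeholder for chest X-rays, and the paper's functional-basis construction and hypothesis tests are not reproduced.

```python
# PCA scores of vectorized images used as covariates in a logistic regression.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)
y = (y == 0).astype(int)                 # binary task: digit "0" vs the rest

model = make_pipeline(PCA(n_components=15), LogisticRegression(max_iter=1000))
print(cross_val_score(model, X, y, cv=5).mean())
```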
ia Efficient Characterization of Dynamic Response Variation Using Multi-Fidelity Data Fusion through Composite Neural Network. (arXiv:2005.03213v1 [stat.ML]) By arxiv.org Published On :: Uncertainties in a structure are inevitable and generally lead to variation in dynamic response predictions. For a complex structure, brute-force Monte Carlo simulation for response variation analysis is infeasible, since a single run may already be computationally costly. Data-driven meta-modeling approaches have thus been explored to facilitate efficient emulation and statistical inference. The performance of a meta-model hinges upon both the quality and the quantity of the training dataset. In actual practice, however, high-fidelity data acquired from high-dimensional finite element simulation or experiment are generally scarce, which poses a significant challenge to meta-model establishment. In this research, we take advantage of the multi-level response prediction opportunity in structural dynamic analysis, i.e., rapidly acquiring a large amount of low-fidelity data from reduced-order modeling and accurately acquiring a small amount of high-fidelity data from full-scale finite element analysis. Specifically, we formulate a composite neural network fusion approach that can fully utilize the multi-level, heterogeneous datasets obtained. It implicitly identifies the correlation between the low- and high-fidelity datasets, which yields improved accuracy compared with the state-of-the-art. Comprehensive investigations using frequency response variation characterization as a case example are carried out to demonstrate the performance. Full Article
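The composite multi-fidelity idea can be sketched with generic regressors: a first network is trained on plentiful low-fidelity data, and a second network learns the high-fidelity response from the input augmented with the low-fidelity prediction, using only a few accurate samples. The toy functions, network sizes and sample counts below are assumptions for illustration, not the paper's structural-dynamics case.

```python
# Two-level composite surrogate: cheap/biased low-fidelity model feeds into a
# small high-fidelity correction network trained on scarce accurate data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
f_hi = lambda x: np.sin(8 * x) * x                     # "expensive" high-fidelity response
f_lo = lambda x: 0.8 * np.sin(8 * x) * x + 0.2 * x     # cheap, biased approximation

x_lo = rng.uniform(0, 1, (2000, 1)); y_lo = f_lo(x_lo).ravel()
x_hi = rng.uniform(0, 1, (30, 1));   y_hi = f_hi(x_hi).ravel()

lo_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(x_lo, y_lo)
z_hi = np.column_stack([x_hi, lo_net.predict(x_hi)])   # augment input with LF prediction
hi_net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0).fit(z_hi, y_hi)

x_test = np.linspace(0, 1, 200)[:, None]
pred = hi_net.predict(np.column_stack([x_test, lo_net.predict(x_test)]))
print("RMSE vs truth:", np.sqrt(np.mean((pred - f_hi(x_test).ravel()) ** 2)))
```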
ia On the Optimality of Randomization in Experimental Design: How to Randomize for Minimax Variance and Design-Based Inference. (arXiv:2005.03151v1 [stat.ME]) By arxiv.org Published On :: I study the minimax-optimal design for a two-arm controlled experiment where conditional mean outcomes may vary in a given set. When this set is permutation symmetric, the optimal design is complete randomization, and using a single partition (i.e., the design that only randomizes the treatment labels for each side of the partition) has minimax risk larger by a factor of $n-1$. More generally, the optimal design is shown to be the mixed-strategy optimal design (MSOD) of Kallus (2018). Notably, even when the set of conditional mean outcomes has structure (i.e., is not permutation symmetric), being minimax-optimal for variance still requires randomization beyond a single partition. Nonetheless, since this targets precision, it may still not ensure sufficient uniformity in randomization to enable randomization (i.e., design-based) inference by Fisher's exact test to appropriately detect violations of the null. I therefore propose the inference-constrained MSOD, which is minimax-optimal among all designs subject to such uniformity constraints. Along the way, I discuss Johansson et al. (2020), who recently compared the rerandomization of Morgan and Rubin (2012) and the pure-strategy optimal design (PSOD) of Kallus (2018). I point out some errors therein and set straight that randomization is minimax-optimal and that the "no free lunch" theorem and example in Kallus (2018) are correct. Full Article
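The design-based inference the abstract refers to can be illustrated with a tiny randomization test under complete randomization: the observed difference in means is compared with its distribution over re-drawn assignments under the sharp null of no effect. The toy outcomes and effect size are assumptions, and nothing about the MSOD itself is implemented here.

```python
# Randomization (design-based) test under complete randomization of a two-arm trial.
import numpy as np

rng = np.random.default_rng(0)
n = 50
y0 = rng.normal(0, 1, n)                  # potential outcomes under control
y1 = y0 + 0.8                             # constant treatment effect
treat = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n // 2)]).astype(bool)
y_obs = np.where(treat, y1, y0)

obs_diff = y_obs[treat].mean() - y_obs[~treat].mean()
null_diffs = []
for _ in range(5000):                     # re-draw the completely randomized assignment
    t = rng.permutation(treat)
    null_diffs.append(y_obs[t].mean() - y_obs[~t].mean())

p_value = np.mean(np.abs(null_diffs) >= abs(obs_diff))
print("estimate:", round(obs_diff, 3), "randomization p-value:", p_value)
```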
ia Adaptive Invariance for Molecule Property Prediction. (arXiv:2005.03004v1 [q-bio.QM]) By arxiv.org Published On :: Effective property prediction methods can help accelerate the search for COVID-19 antivirals, either through accurate in-silico screens or by effectively guiding ongoing at-scale experimental efforts. However, existing prediction tools have limited ability to accommodate the scarce or fragmented training data currently available. In this paper, we introduce a novel approach to learn predictors that can generalize or extrapolate beyond heterogeneous data. Our method builds on and extends the recently proposed invariant risk minimization, adaptively forcing the predictor to avoid nuisance variation. We achieve this by continually exercising and manipulating latent representations of molecules to highlight undesirable variation to the predictor. To test the method we use a combination of three data sources: SARS-CoV-2 antiviral screening data, molecular fragments that bind to the SARS-CoV-2 main protease, and large screening data for SARS-CoV-1. Our predictor outperforms state-of-the-art transfer learning methods by a significant margin. We also report the top 20 predictions of our model on the Broad drug repurposing hub. Full Article
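For readers unfamiliar with the invariant risk minimization baseline the paper extends, here is a minimal PyTorch sketch of an IRMv1-style penalty: per-environment risks are computed through a frozen scalar multiplier and the squared gradient of each risk with respect to that multiplier is penalized. The random placeholder data, penalty weight and model are assumptions; the paper's adaptive extension and molecular representations are not reproduced.

```python
# IRMv1-style penalty: sum over environments of || d risk / d (dummy scale) ||^2.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 1)
envs = [(torch.randn(64, 16), torch.randint(0, 2, (64, 1)).float()) for _ in range(3)]
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(200):
    risk, penalty = 0.0, 0.0
    for x, y in envs:
        scale = torch.tensor(1.0, requires_grad=True)       # dummy classifier scale
        loss = F.binary_cross_entropy_with_logits(model(x) * scale, y)
        grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
        risk, penalty = risk + loss, penalty + grad ** 2
    opt.zero_grad()
    (risk + 10.0 * penalty).backward()                       # penalty weight is illustrative
    opt.step()

print("final penalty:", float(penalty))
```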
ia Add your entry to the great pandemic diary of 2020 By feedproxy.google.com Published On :: Mon, 04 May 2020 22:54:44 +0000 Monday 4 May 2020 The State Library wants to capture the thoughts and feelings of the State via a new diary sharing platform launched TODAY. Full Article
ia lslx: Semi-Confirmatory Structural Equation Modeling via Penalized Likelihood By www.jstatsoft.org Published On :: Mon, 27 Apr 2020 00:00:00 +0000 Sparse estimation via penalized likelihood (PL) is now a popular approach to learn the associations among a large set of variables. This paper describes an R package called lslx that implements PL methods for semi-confirmatory structural equation modeling (SEM). In this semi-confirmatory approach, each model parameter can be specified as free/fixed for theory testing, or penalized for exploration. By incorporating either an L1 or a minimax concave penalty, the sparsity pattern of the parameter matrix can be efficiently explored. Package lslx minimizes the PL criterion through a quasi-Newton method. The algorithm conducts a line search and checks the first-order condition in each iteration to ensure the optimality of the obtained solution. A numerical comparison with competing packages shows that lslx can reliably find PL estimates in the least time. The current package also supports other advanced functionalities, including a two-stage method with auxiliary variables for missing data handling and a reparameterized multi-group SEM to explore population heterogeneity. Full Article
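The semi-confirmatory idea of penalizing only the "exploratory" parameters can be illustrated, far from the full SEM setting, by an L1-penalized least-squares fit in which some coefficients stay unpenalized. The toy sketch below uses proximal gradient (soft-thresholding) rather than the quasi-Newton algorithm of lslx, and everything in it (data, penalty level, which coefficients are free) is an assumption for illustration only.

```python
# Penalized-likelihood sketch: L1 penalty on a subset of coefficients, minimized
# by proximal gradient with soft-thresholding. Not lslx's algorithm or model.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.r_[1.5, -1.0, np.zeros(p - 2)]
y = X @ beta_true + rng.normal(size=n)

lam = 0.1
step = n / np.linalg.norm(X, 2) ** 2           # 1 / Lipschitz constant of the gradient
penalized = np.arange(p) >= 2                  # first two coefficients are "free"
beta = np.zeros(p)
for _ in range(2000):
    grad = X.T @ (X @ beta - y) / n
    beta = beta - step * grad
    shrink = np.maximum(np.abs(beta) - step * lam, 0.0) * np.sign(beta)
    beta = np.where(penalized, shrink, beta)   # soft-threshold only penalized coefficients

print(np.round(beta, 3))
```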
ia Bayesian Random-Effects Meta-Analysis Using the bayesmeta R Package By www.jstatsoft.org Published On :: Mon, 27 Apr 2020 00:00:00 +0000 The random-effects or normal-normal hierarchical model is commonly utilized in a wide range of meta-analysis applications. A Bayesian approach to inference is very attractive in this context, especially when a meta-analysis is based only on few studies. The bayesmeta R package provides readily accessible tools to perform Bayesian meta-analyses and generate plots and summaries, without having to worry about computational details. It allows for flexible prior specification and instant access to the resulting posterior distributions, including prediction and shrinkage estimation, and facilitating for example quick sensitivity checks. The present paper introduces the underlying theory and showcases its usage. Full Article
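To show the underlying normal-normal hierarchical model numerically, the sketch below places a half-normal prior on the heterogeneity parameter tau and a flat prior on the overall effect mu, evaluates the marginal posterior of tau on a grid, and mixes the conditional posterior of mu over that grid. The toy data and prior scale are assumptions, and this Python sketch is not the bayesmeta interface.

```python
# Grid-based posterior for the random-effects meta-analysis model
# y_i ~ N(mu, s_i^2 + tau^2), tau ~ half-normal, mu flat.
import numpy as np
from scipy import stats

y = np.array([0.10, 0.35, -0.05, 0.42, 0.20])     # study effect estimates (toy values)
s = np.array([0.12, 0.20, 0.15, 0.25, 0.10])      # their standard errors

tau_grid = np.linspace(0, 1.0, 400)
log_post = np.empty_like(tau_grid)
mu_hat = np.empty_like(tau_grid)                  # conditional posterior mean of mu | tau
V = np.empty_like(tau_grid)                       # conditional posterior variance of mu | tau
for k, tau in enumerate(tau_grid):
    v = s**2 + tau**2
    w = 1.0 / v
    mu_hat[k] = np.sum(w * y) / np.sum(w)
    V[k] = 1.0 / np.sum(w)
    # log prior + log marginal likelihood of tau (mu integrated out under a flat prior)
    log_post[k] = (stats.halfnorm(scale=0.5).logpdf(tau)
                   - 0.5 * np.sum(np.log(v))
                   + 0.5 * np.log(V[k])
                   - 0.5 * np.sum(w * (y - mu_hat[k])**2))

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean of tau:", np.sum(post * tau_grid))
print("posterior mean of mu :", np.sum(post * mu_hat))
print("posterior var  of mu :", np.sum(post * (V + mu_hat**2)) - np.sum(post * mu_hat)**2)
```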
ia mvord: An R Package for Fitting Multivariate Ordinal Regression Models By www.jstatsoft.org Published On :: Sat, 18 Apr 2020 03:35:08 +0000 The R package mvord implements composite likelihood estimation in the class of multivariate ordinal regression models with a multivariate probit and a multivariate logit link. A flexible modeling framework for multiple ordinal measurements on the same subject is set up, which takes into consideration the dependence among the multiple observations by employing different error structures. Heterogeneity in the error structure across the subjects can be accounted for by the package, which allows for covariate dependent error structures. In addition, different regression coefficients and threshold parameters for each response are supported. If a reduction of the parameter space is desired, constraints on the threshold as well as on the regression coefficients can be specified by the user. The proposed multivariate framework is illustrated by means of a credit risk application. Full Article
ia lmSubsets: Exact Variable-Subset Selection in Linear Regression for R By www.jstatsoft.org Published On :: Tue, 28 Apr 2020 00:00:00 +0000 An R package for computing the all-subsets regression problem is presented. The proposed algorithms are based on computational strategies recently developed. A novel algorithm for the best-subset regression problem selects subset models based on a predetermined criterion. The package user can choose from exact and from approximation algorithms. The core of the package is written in C++ and provides an efficient implementation of all the underlying numerical computations. A case study and benchmark results illustrate the usage and the computational efficiency of the package. Full Article
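A brute-force rendering of the all-subsets problem is shown below: every subset of predictors is fitted by least squares and the subset with the lowest BIC is kept. The toy data and the choice of BIC as the criterion are assumptions; lmSubsets itself uses far more efficient exact strategies, so this enumeration is only for illustration.

```python
# Exhaustive best-subset selection by BIC on a small simulated problem.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 120, 8
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

def bic(cols):
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return n * np.log(rss / n) + (len(cols) + 1) * np.log(n)

best = min(
    (subset for k in range(p + 1) for subset in itertools.combinations(range(p), k)),
    key=bic,
)
print("selected predictors:", best)
```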
ia ManifoldOptim: An R Interface to the ROPTLIB Library for Riemannian Manifold Optimization By www.jstatsoft.org Published On :: Sat, 18 Apr 2020 03:35:08 +0000 Manifold optimization appears in a wide variety of computational problems in the applied sciences. In recent statistical methodologies such as sufficient dimension reduction and regression envelopes, estimation relies on the optimization of likelihood functions over spaces of matrices such as the Stiefel or Grassmann manifolds. Recently, Huang, Absil, Gallivan, and Hand (2016) have introduced the library ROPTLIB, which provides a framework and state of the art algorithms to optimize real-valued objective functions over commonly used matrix-valued Riemannian manifolds. This article presents ManifoldOptim, an R package that wraps the C++ library ROPTLIB. ManifoldOptim enables users to access functionality in ROPTLIB through R so that optimization problems can easily be constructed, solved, and integrated into larger R codes. Computationally intensive problems can be programmed with Rcpp and RcppArmadillo, and otherwise accessed through R. We illustrate the practical use of ManifoldOptim through several motivating examples involving dimension reduction and envelope methods in regression. Full Article
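A bare-bones example of Riemannian optimization on the Stiefel manifold is sketched below: gradient ascent on trace(X'AX), whose optimum is the span of A's top-k eigenvectors, using a tangent-space projection and a QR retraction. The matrix, step size and iteration count are assumptions for illustration, and this NumPy sketch stands in for, but does not reproduce, the trust-region and quasi-Newton solvers that ManifoldOptim exposes from ROPTLIB.

```python
# Riemannian gradient ascent on St(n, k): maximize trace(X' A X) with QR retraction.
import numpy as np

rng = np.random.default_rng(0)
n, k = 30, 3
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
vals = np.concatenate([[10.0, 8.0, 6.0], np.linspace(1.0, 0.1, n - 3)])
A = Q @ np.diag(vals) @ Q.T                   # symmetric matrix with a clear top-3 gap

X, _ = np.linalg.qr(rng.normal(size=(n, k)))  # random starting point on the manifold
step = 0.02
for _ in range(500):
    G = 2 * A @ X                             # Euclidean gradient of trace(X'AX)
    sym = (X.T @ G + G.T @ X) / 2
    xi = G - X @ sym                          # projection onto the tangent space at X
    X, _ = np.linalg.qr(X + step * xi)        # QR retraction back onto the manifold

print("objective:", round(float(np.trace(X.T @ A @ X)), 3), "optimum:", vals[:3].sum())
```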
ia Arabo-Persian physiological theories in late Imperial China By blog.wellcomelibrary.org Published On :: Thu, 22 Feb 2018 11:20:20 +0000 The last seminar in the 2017–18 History of Pre-Modern Medicine seminar series takes place on Tuesday 27 February. Speaker: Dr Dror Weil (Max Planck Institute for the History of Science, Berlin) Bodies translated: the circulation of Arabo-Persian physiological theories in late… Continue reading Full Article
ia The Diary Files go live By feedproxy.google.com Published On :: Mon, 04 May 2020 22:47:40 +0000 The Library has a long tradition of collecting diaries — from First World War soldiers to famous literary figures like Miles Franklin. Now we want your diary entries to become part of the collection. Full Article
ia Water hyacinth : a potential lignocellulosic biomass for bioethanol By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Author: Sharma, Anuja. Call number: Online. ISBN: 9783030356323 (electronic bk.) Full Article
ia Textbook of palliative care By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Call number: Online. ISBN: 9783319317380 (electronic bk.) Full Article
ia Terrestrial hermit crab populations in the Maldives : ecology, distribution and anthropogenic impact By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Author: Steibl, Sebastian. Call number: Online. ISBN: 9783658295417 (electronic bk.) Full Article
ia Staying out of trouble in pediatric orthopaedics By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Author: Skaggs, David L. Call number: Online. ISBN: 9781975103958 (hardback) Full Article
ia Racing for the surface : pathogenesis of implant infection and advanced antimicrobial strategies By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Call number: Online. ISBN: 9783030344757 (electronic bk.) Full Article
ia Priming-mediated stress and cross-stress tolerance in crop plants By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Call number: Online. ISBN: 9780128178935 (electronic bk.) Full Article
ia Phytoremediation potential of perennial grasses By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Author: Pandey, Vimal Chandra. Call number: Online. ISBN: 9780128177334 (electronic bk.) Full Article
ia Phytoremediation : in-situ applications By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Call number: Online. ISBN: 9783030000998 (electronic bk.) Full Article
ia Pediatric surgery : a quick guide to decision-making By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Author: Roy Choudhury, Subhasis. Call number: Online. ISBN: 9789811063046 (electronic bk.) Full Article
ia Pediatric restorative dentistry By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Call number: Online. ISBN: 9783319934266 (electronic bk.) Full Article
ia Pediatric radiation oncology By dal.novanet.ca Published On :: Fri, 1 May 2020 19:44:43 -0300 Call number: Online. ISBN: 9783319435459 (electronic bk.) Full Article