models

Discussion: Models as Approximations

Dalia Ghanem, Todd A. Kuffner.

Source: Statistical Science, Volume 34, Number 4, 604--605.




models

Comment: Models as (Deliberate) Approximations

David Whitney, Ali Shojaie, Marco Carone.

Source: Statistical Science, Volume 34, Number 4, 591--598.




models

Comment: Models Are Approximations!

Anthony C. Davison, Erwan Koch, Jonathan Koh.

Source: Statistical Science, Volume 34, Number 4, 584--590.

Abstract:
This discussion focuses on areas of disagreement with the papers, particularly the target of inference and the case for using the robust ‘sandwich’ variance estimator in the presence of moderate mis-specification. We also suggest that existing procedures may be appreciably more powerful for detecting mis-specification than the authors’ RAV statistic, and comment on the use of the pairs bootstrap in balanced situations.




models

Comment: “Models as Approximations I: Consequences Illustrated with Linear Regression” by A. Buja, R. Berk, L. Brown, E. George, E. Pitkin, L. Zhao and K. Zhang

Roderick J. Little.

Source: Statistical Science, Volume 34, Number 4, 580--583.




models

Discussion of Models as Approximations I & II

Dag Tjøstheim.

Source: Statistical Science, Volume 34, Number 4, 575--579.




models

Comment: Models as Approximations

Nikki L. B. Freeman, Xiaotong Jiang, Owen E. Leete, Daniel J. Luckett, Teeranan Pokaprakarn, Michael R. Kosorok.

Source: Statistical Science, Volume 34, Number 4, 572--574.




models

Comment on Models as Approximations, Parts I and II, by Buja et al.

Jerald F. Lawless.

Source: Statistical Science, Volume 34, Number 4, 569--571.

Abstract:
I comment on the papers Models as Approximations I and II, by A. Buja, R. Berk, L. Brown, E. George, E. Pitkin, M. Traskin, L. Zhao and K. Zhang.




models

Discussion of Models as Approximations I & II

Sara van de Geer.

Source: Statistical Science, Volume 34, Number 4, 566--568.

Abstract:
We discuss the papers “Models as Approximations” I & II, by A. Buja, R. Berk, L. Brown, E. George, E. Pitkin, M. Traskin, L. Zhao and K. Zhang (Part I) and A. Buja, L. Brown, A. K. Kuchibhotla, R. Berk, E. George and L. Zhao (Part II). We present a summary with some details for the generalized linear model.




models

Models as Approximations II: A Model-Free Theory of Parametric Regression

Andreas Buja, Lawrence Brown, Arun Kumar Kuchibhotla, Richard Berk, Edward George, Linda Zhao.

Source: Statistical Science, Volume 34, Number 4, 545--565.

Abstract:
We develop a model-free theory of general types of parametric regression for i.i.d. observations. The theory replaces the parameters of parametric models with statistical functionals, to be called “regression functionals,” defined on large nonparametric classes of joint ${x\textrm{-}y}$ distributions, without assuming a correct model. Parametric models are reduced to heuristics to suggest plausible objective functions. An example of a regression functional is the vector of slopes of linear equations fitted by OLS to largely arbitrary ${x\textrm{-}y}$ distributions, without assuming a linear model (see Part I). More generally, regression functionals can be defined by minimizing objective functions, solving estimating equations, or with ad hoc constructions. In this framework, it is possible to achieve the following: (1) define a notion of “well-specification” for regression functionals that replaces the notion of correct specification of models, (2) propose a well-specification diagnostic for regression functionals based on reweighting distributions and data, (3) decompose sampling variability of regression functionals into two sources, one due to the conditional response distribution and another due to the regressor distribution interacting with misspecification, both of order $N^{-1/2}$, (4) exhibit plug-in/sandwich estimators of standard error as limit cases of ${x\textrm{-}y}$ bootstrap estimators, and (5) provide theoretical heuristics to indicate that ${x\textrm{-}y}$ bootstrap standard errors may generally be preferred over sandwich estimators.
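To make the contrast in points (4) and (5) concrete, here is a small numerical sketch (my own, not from the paper; the data-generating process and all sizes are illustrative) comparing the plug-in sandwich standard error with the pairs x-y bootstrap for an OLS fit to a deliberately misspecified (quadratic) response:

```python
import numpy as np

rng = np.random.default_rng(0)

# Misspecified linear fit: the true conditional mean is quadratic,
# but we fit a straight line by OLS (illustrative values only).
n = 2000
x = rng.uniform(-1, 2, n)
y = x**2 + rng.normal(0, 0.3, n)
X = np.column_stack([np.ones(n), x])

beta = np.linalg.lstsq(X, y, rcond=None)[0]   # the OLS "regression functional"
resid = y - X @ beta

# Sandwich (model-robust) SE: (X'X)^-1 X' diag(r^2) X (X'X)^-1
bread = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None]**2)
sandwich_se = np.sqrt(np.diag(bread @ meat @ bread))

# Pairs (x-y) bootstrap: resample whole (x_i, y_i) pairs, refit, take SD
B = 500
boot = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
boot_se = boot.std(axis=0, ddof=1)

print("sandwich SE:", sandwich_se)
print("bootstrap SE:", boot_se)
```

As the abstract's point (4) suggests, the two estimates agree closely at this sample size; differences between them are the subject of point (5).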




models

Models as Approximations I: Consequences Illustrated with Linear Regression

Andreas Buja, Lawrence Brown, Richard Berk, Edward George, Emil Pitkin, Mikhail Traskin, Kai Zhang, Linda Zhao.

Source: Statistical Science, Volume 34, Number 4, 523--544.

Abstract:
In the early 1980s, Halbert White inaugurated a “model-robust” form of statistical inference based on the “sandwich estimator” of standard error. This estimator is known to be “heteroskedasticity-consistent,” but it is less well known to be “nonlinearity-consistent” as well. Nonlinearity, however, raises fundamental issues because in its presence regressors are not ancillary, hence cannot be treated as fixed. The consequences are deep: (1) population slopes need to be reinterpreted as statistical functionals obtained from OLS fits to largely arbitrary joint ${x\textrm{-}y}$ distributions; (2) the meaning of slope parameters needs to be rethought; (3) the regressor distribution affects the slope parameters; (4) randomness of the regressors becomes a source of sampling variability in slope estimates of order $1/\sqrt{N}$; (5) inference needs to be based on model-robust standard errors, including sandwich estimators or the ${x\textrm{-}y}$ bootstrap. In theory, model-robust and model-trusting standard errors can deviate by arbitrary magnitudes either way. In practice, significant deviations between them can be detected with a diagnostic test.
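Point (3), that the regressor distribution affects the slope parameters under nonlinearity, can be seen in a toy simulation (my own sketch, with illustrative numbers): the same quadratic conditional mean yields different OLS population slopes under two different regressor distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ols_slope(x, y):
    # Slope of the OLS line (with intercept) through the (x, y) cloud
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

n = 100_000
f = lambda x: x**2          # nonlinear conditional mean; no linear model holds

# Same conditional mean f, two different regressor distributions
x1 = rng.uniform(0, 1, n)   # regressors concentrated on [0, 1]
x2 = rng.uniform(0, 2, n)   # regressors spread over [0, 2]
s1 = ols_slope(x1, f(x1) + rng.normal(0, 0.1, n))
s2 = ols_slope(x2, f(x2) + rng.normal(0, 0.1, n))

print(s1, s2)   # population slopes differ: ~1.0 vs ~2.0
```

Under a correctly specified linear model the two slopes would agree; here the "slope" is a functional of the joint distribution, exactly as the abstract describes.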




models

Conditionally Conjugate Mean-Field Variational Bayes for Logistic Models

Daniele Durante, Tommaso Rigon.

Source: Statistical Science, Volume 34, Number 3, 472--485.

Abstract:
Variational Bayes (VB) is a common strategy for approximate Bayesian inference, but simple methods are only available for specific classes of models including, in particular, representations having conditionally conjugate constructions within an exponential family. Models with logit components are an apparently notable exception to this class, due to the absence of conjugacy between the logistic likelihood and the Gaussian priors for the coefficients in the linear predictor. To facilitate approximate inference within this widely used class of models, Jaakkola and Jordan (Stat. Comput. 10 (2000) 25–37) proposed a simple variational approach which relies on a family of tangent quadratic lower bounds of the logistic log-likelihood, thus restoring conjugacy between these approximate bounds and the Gaussian priors. This strategy is still implemented successfully, but few attempts have been made to formally understand the reasons underlying its excellent performance. Following a review of VB for logistic models, we fill this gap by providing a formal connection between the above bound and a recent Pólya-gamma data augmentation for logistic regression. Such a result places the computational methods associated with the aforementioned bounds within the framework of variational inference for conditionally conjugate exponential family models, thereby allowing recent advances for this class to be inherited also by the methods relying on Jaakkola and Jordan (Stat. Comput. 10 (2000) 25–37).
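A rough sketch of the Jaakkola–Jordan scheme the abstract describes, in the standard CAVI-style form (the data, prior variance, and iteration count here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy logistic-regression data (hypothetical sizes and coefficients)
n, p = 500, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

# Jaakkola-Jordan variational updates with prior beta ~ N(0, tau2 * I):
# the tangent quadratic bound makes the Gaussian prior conjugate again.
tau2 = 10.0
xi = np.ones(n)                       # variational tuning parameters
for _ in range(50):
    lam = np.tanh(xi / 2) / (4 * xi)  # lambda(xi) from the quadratic bound
    S = np.linalg.inv(np.eye(p) / tau2 + 2 * (X.T * lam) @ X)  # posterior cov
    m = S @ (X.T @ (y - 0.5))                                  # posterior mean
    # Update each xi_i to the tangency point x_i' (S + m m') x_i
    xi = np.sqrt(np.einsum("ij,jk,ik->i", X, S + np.outer(m, m), X))

print("variational posterior mean:", m)
```

With well-separated data the fixed point is reached in a few dozen sweeps, and the variational mean tracks the true coefficients closely.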




models

The Geometry of Continuous Latent Space Models for Network Data

Anna L. Smith, Dena M. Asta, Catherine A. Calder.

Source: Statistical Science, Volume 34, Number 3, 428--453.

Abstract:
We review the class of continuous latent space (statistical) models for network data, paying particular attention to the role of the geometry of the latent space. In these models, the presence/absence of network dyadic ties are assumed to be conditionally independent given the dyads’ unobserved positions in a latent space. In this way, these models provide a probabilistic framework for embedding network nodes in a continuous space equipped with a geometry that facilitates the description of dependence between random dyadic ties. Specifically, these models naturally capture homophilous tendencies and triadic clustering, among other common properties of observed networks. In addition to reviewing the literature on continuous latent space models from a geometric perspective, we highlight the important role the geometry of the latent space plays in the properties of networks arising from these models, via intuition and simulation. Finally, we discuss results from spectral graph theory that allow us to explore the role of the geometry of the latent space, independent of network size. We conclude with conjectures about how these results might be used to infer the appropriate latent space geometry from observed networks.
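A minimal simulation of a Euclidean latent space model of the kind reviewed here (all parameter values are illustrative) shows how the latent geometry induces homophily: tied pairs sit closer together in the latent space than untied pairs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Euclidean latent space model: P(A_ij = 1) = sigmoid(alpha - ||z_i - z_j||)
n, d, alpha = 100, 2, 1.0
z = rng.normal(size=(n, d))                      # latent positions
dist = np.linalg.norm(z[:, None] - z[None, :], axis=-1)
prob = 1 / (1 + np.exp(-(alpha - dist)))
A = (rng.uniform(size=(n, n)) < prob).astype(int)
A = np.triu(A, 1); A = A + A.T                   # undirected, no self-loops

# Homophily check: tied pairs are closer in latent space, on average
iu = np.triu_indices(n, 1)
d_ties = dist[iu][A[iu] == 1].mean()
d_non = dist[iu][A[iu] == 0].mean()
print(d_ties, d_non)   # tied pairs should be closer
```

Conditional independence of dyads given the positions is exactly the sampling step above; swapping the Euclidean norm for a hyperbolic or spherical distance changes the geometry, which is the review's central theme.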




models

An Overview of Semiparametric Extensions of Finite Mixture Models

Sijia Xiang, Weixin Yao, Guangren Yang.

Source: Statistical Science, Volume 34, Number 3, 391--404.

Abstract:
Finite mixture models have offered a very important tool for exploring complex data structures in many scientific areas, such as economics, epidemiology and finance. Semiparametric mixture models, which were introduced into traditional finite mixture models in the past decade, have brought forth exciting developments in their methodologies, theories, and applications. In this article, we not only provide a selective overview of the newly developed semiparametric mixture models, but also discuss their estimation methodologies, theoretical properties where applicable, and some open questions. Recent developments are also discussed.
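For orientation, the parametric baseline that the semiparametric extensions generalize is the finite mixture fitted by EM; a minimal two-component Gaussian sketch (synthetic data and starting values of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-component univariate Gaussian mixture: the classic parametric case
# whose component densities the semiparametric variants leave unspecified.
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

pi, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability that each point came from component 2
    d1 = (1 - pi) * np.exp(-0.5 * ((x - mu[0]) / sd[0])**2) / sd[0]
    d2 = pi * np.exp(-0.5 * ((x - mu[1]) / sd[1])**2) / sd[1]
    w = d2 / (d1 + d2)
    # M-step: weighted updates of mixing proportion, means, and sds
    pi = w.mean()
    mu = np.array([np.average(x, weights=1 - w), np.average(x, weights=w)])
    sd = np.sqrt(np.array([np.average((x - mu[0])**2, weights=1 - w),
                           np.average((x - mu[1])**2, weights=w)]))

print(pi, mu, sd)
```

Semiparametric versions replace the Gaussian kernel in the E-step with a nonparametrically estimated component density, which is where the identifiability and estimation subtleties surveyed in the article arise.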




models

Dural Calcitonin Gene-Related Peptide Produces Female-Specific Responses in Rodent Migraine Models

Amanda Avona
May 29, 2019; 39:4323-4331
Systems/Circuits




models

Red Hat's Virtual Summit Crowds Hint at Future Conference Models

In what could be a trial run for more of the same, Red Hat last week held a first-ever virtual technical summit to spread the word about its latest cloud tech offerings. CEO Paul Cormier welcomed online viewers to the conference, which attracted more than 80,000 virtual attendees. The company made several key announcements during the online gathering and highlighted customer innovations.




models

Explore 3-D Models of Historic Yukon Structures Threatened by Erosion

"We thought it was a good idea to get a comprehensive record of the site while we could in case the water levels rise," says one official




models

Prognostic Models for Stillbirth and Neonatal Death in Very Preterm Birth: A Validation Study

Two UK models predict the risk of mortality in very preterm Western infants (1) alive at onset of labor and (2) admitted for neonatal intensive care. Prognostic models need temporal and geographic validation to evaluate their performance.

The 2 models showed very good performance in a recent large cohort of very preterm infants born in another Western country. The accurate performance of both models suggests application in clinical practice.




models

Nonclinical Pharmacokinetics, Protein Binding, and Elimination of KBP-7072, An Aminomethylcycline Antibiotic in Animal Models [Pharmacology]

KBP-7072 is a semi-synthetic aminomethylcycline with broad-spectrum activity against Gram-positive and Gram-negative pathogens including multidrug-resistant bacterial strains. The pharmacokinetics (PK) of KBP-7072 after oral and intravenous (IV) administration of single and multiple doses were investigated in animal models, including fed and fasted states; protein binding and excretion characteristics were also evaluated. In Sprague-Dawley (SD) rats, Beagle dogs, and CD-1 mice, KBP-7072 demonstrated a linear PK profile after administration of single oral and IV and multiple oral doses. Oral bioavailability ranged from 12% to 32%. Mean Tmax ranged from 0.5 to 4 hours, and mean half-life ranged from approximately 6 to 11 hours. Administration of oral doses in the fed state resulted in a marked reduction in Cmax and AUC compared with dosing in fasted animals. The mean bound fractions of KBP-7072 were 77.5%, 69.8%, 64.5%, 69.3%, and 69.2% in mouse, rat, dog, monkey, and human plasma, respectively. Following a single 22.5 mg/kg oral dose of KBP-7072 in SD rats, cumulative excretion in feces was 64% and in urine was 2.5% of the administered dose. The PK results in animal models are consistent with single and multiple ascending dose studies in healthy volunteers and confirm the suitability of KBP-7072 for once-daily oral and IV administration in clinical studies.
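The PK quantities quoted above (Tmax, half-life) fall out of the standard one-compartment model with first-order absorption; here is a sketch with hypothetical rate constants (not the fitted KBP-7072 parameters), chosen only so that the half-life lands in the reported 6–11 hour range:

```python
import numpy as np

# One-compartment model with first-order absorption (Bateman function).
# All parameter values are hypothetical and for illustration only.
F, dose, V = 0.2, 22.5, 1.0       # bioavailability, dose (mg/kg), volume (L/kg)
ka, ke = 1.5, np.log(2) / 8.0     # absorption and elimination rates (1/h)

t = np.linspace(0.01, 48, 4800)
C = F * dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

tmax = np.log(ka / ke) / (ka - ke)      # time of peak concentration
t_half = np.log(2) / ke                 # terminal half-life
print(tmax, t_half)                     # ~2 h and 8 h with these constants
```

Fed-state absorption delays and Cmax reductions, as reported above, would show up in this framework as a smaller ka and/or F.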




models

Which COVID-19 models should we use to make policy decisions?

A new process to harness multiple disease models for outbreak management has been developed by an international team of researchers. The team will immediately implement the process to help inform policy decisions for the COVID-19 outbreak.




models

How Sonam Kapoor, Anand Ahuja Are Living Up To Their "Best Role Models"

"Thank you, parents, for being the best kind of role models. We are because of you," posted Sonam




models

Metamodels for Evaluating, Calibrating and Applying Agent-Based Models: A Review

Bruno Pietzsch, Sebastian Fiedler, Kai G. Mertens, Markus Richter, Cédric Scherer, Kirana Widyastuti, Marie-Christin Wimmler, Liubov Zakharova and Uta Berger: The recent advancement of agent-based modeling is characterized by higher demands on the parameterization, evaluation and documentation of these computationally expensive models. Accordingly, there is also a growing request for "easy to go" applications just mimicking the input-output behavior of such models. Metamodels are being increasingly used for these tasks. In this paper, we provide an overview of common metamodel types and the purposes of their usage in an agent-based modeling context. To guide modelers in the selection and application of metamodels for their own needs, we further assessed their implementation effort and performance. We performed a literature search in January 2019 using four different databases. Five different terms paraphrasing metamodels (approximation, emulator, meta-model, metamodel and surrogate) were used to capture the whole range of relevant literature in all disciplines. All metamodel applications found were then categorized into specific metamodel types and rated by different junior and senior researchers from varying disciplines (including forest sciences, landscape ecology, and economics) regarding implementation effort and performance. Specifically, we captured the metamodel performance according to (i) the consideration of uncertainties, (ii) the suitability assessment provided by the authors for the particular purpose, and (iii) the number of valuation criteria provided for suitability assessment. We selected 40 distinct metamodel applications from studies published in peer-reviewed journals from 2005 to 2019. These were used for the sensitivity analysis, calibration and upscaling of agent-based models, as well as to mimic their predictions for different scenarios.
This review provides information about the most applicable metamodel types for each purpose and offers first guidance for the implementation and validation of metamodels for agent-based models.
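The core metamodel idea, replacing an expensive simulator with a cheap fitted approximation of its input-output behaviour, can be sketched in a few lines (toy simulator and polynomial surrogate; everything here is illustrative and not from the review):

```python
import numpy as np

rng = np.random.default_rng(5)

def simulator(theta):
    # Stand-in for an expensive agent-based model run: maps one
    # parameter to a noisy summary output (entirely hypothetical).
    return np.sin(theta) + 0.3 * theta + rng.normal(0, 0.05)

# Run the "expensive" model on a small design of parameter values,
# then fit a cheap polynomial metamodel to its input-output pairs.
design = np.linspace(0, 3, 15)
runs = np.array([simulator(th) for th in design])
coeffs = np.polyfit(design, runs, deg=3)
metamodel = np.poly1d(coeffs)

# The metamodel can now be evaluated thousands of times at negligible
# cost, e.g. for sensitivity analysis or calibration.
grid = np.linspace(0, 3, 1000)
approx = metamodel(grid)
truth = np.sin(grid) + 0.3 * grid
print(np.max(np.abs(approx - truth)))
```

The review's taxonomy (emulators, surrogates, etc.) swaps out the polynomial for Gaussian processes, neural networks, and other function approximators, but the workflow is the same: design, run, fit, validate.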




models

The ODD Protocol for Describing Agent-Based and Other Simulation Models: A Second Update to Improve Clarity, Replication, and Structural Realism

Volker Grimm, Steven F. Railsback, Christian E. Vincenot, Uta Berger, Cara Gallagher, Donald L. DeAngelis, Bruce Edmonds, Jiaqi Ge, Jarl Giske, Jürgen Groeneveld, Alice S.A. Johnston, Alexander Milles, Jacob Nabe-Nielsen, J. Gareth Polhill, Viktoriia Radchuk, Marie-Sophie Rohwäder, Richard A. Stillman, Jan C. Thiele and Daniel Ayllón: The Overview, Design concepts and Details (ODD) protocol for describing Individual- and Agent-Based Models (ABMs) is now widely accepted and used to document such models in journal articles. As a standardized document for providing a consistent, logical and readable account of the structure and dynamics of ABMs, some research groups also find it useful as a workflow for model design. Even so, there are still limitations to ODD that obstruct its more widespread adoption. Such limitations are discussed and addressed in this paper: the limited availability of guidance on how to use ODD; the length of ODD documents; limitations of ODD for highly complex models; the lack of sufficient detail in many ODDs to enable reimplementation without access to the model code; and the lack of provision for sections in the document structure covering model design rationale, the model’s underlying narrative, and the means by which the model’s fitness for purpose is evaluated. We document the steps we have taken to provide better guidance on: structuring complex ODDs and an ODD summary for inclusion in a journal article (with full details in supplementary material; Table 1); using ODD to point readers to relevant sections of the model code; and updating the document structure to include sections on model rationale and evaluation. We also further advocate the need for standard descriptions of simulation experiments and argue that ODD can in principle be used for any type of simulation model. Thereby ODD would provide a lingua franca for simulation modelling.




models

Computational Models That Matter During a Global Pandemic Outbreak: A Call to Action

Flaminio Squazzoni, J. Gareth Polhill, Bruce Edmonds, Petra Ahrweiler, Patrycja Antosz, Geeske Scholz, Émile Chappin, Melania Borit, Harko Verhagen, Francesca Giardini and Nigel Gilbert: The COVID-19 pandemic is causing a dramatic loss of lives worldwide, challenging the sustainability of our health care systems, threatening economic meltdown, and putting pressure on the mental health of individuals (due to social distancing and lock-down measures). The pandemic is also posing severe challenges to the scientific community, with scholars under pressure to respond to policymakers’ demands for advice despite the absence of adequate, trusted data. Understanding the pandemic requires fine-grained data representing specific local conditions and the social reactions of individuals. While experts have built simulation models to estimate disease trajectories that may be enough to guide decision-makers to formulate policy measures to limit the epidemic, they do not cover the full behavioural and social complexity of societies under pandemic crisis. Modelling that has such a large potential impact upon people’s lives is a great responsibility. This paper calls on the scientific community to improve the transparency, access, and rigour of their models. It also calls on stakeholders to improve the rapidity with which data from trusted sources are released to the community (in a fully responsible manner). Responding to the pandemic is a stress test of our collaborative capacity and the social/economic value of research.




models

Emerging models: Small firms giving up office space to save rent

Given that the Indian economy could well contract in 2020-21 following the disruption caused by the pandemic, even bigger companies might encourage some employees to work from home. For small businesses, there might be no other option.




models

Mini Cooper BS6 models listed on website: Why 5-door and Clubman are missing

Only the Countryman is now available with a diesel engine in India, while all other models come with the familiar 2.0-litre petrol motor.




models

Honda Cars launches online booking portal for all models amid lockdown with Honda from Home initiative

As the nation continues to be under lockdown due to the coronavirus pandemic, Honda Cars India has launched an online portal for prospective customers, who can book the Honda model of their choosing from home, safe from COVID-19.




models

Generating IBIS models in cadence virtuoso

I'm trying to generate IBIS models for the parts that I'm designing. I'm working in Cadence Virtuoso.

I'm wondering if there is a tutorial for generating IBIS models in Cadence Virtuoso. Please pardon me if my question is broad.




models

The Elephant in the Room: Mixed-Signal Models

Key Findings: Nearly 100% of SoCs are mixed-signal to some extent. Every one of these could benefit from the use of a metrics-driven unified verification methodology for mixed-signal (MD-UVM-MS), but the modeling step is the biggest hurdle to overcome. Without the magical models, the process either breaks down for lack of performance or leaves holes in the chip verification.

In the last installment of The Low Road, we were at the mixed-signal verification party. While no one talked about it, we all saw it: The party was raging and everyone was having a great time, but they were all dancing around that big elephant right in the middle of the room. For mixed-signal verification, that elephant is named Modeling.

To get to a fully verified SoC, the analog portions of the design have to run orders of magnitude faster than the speediest SPICE engine available. That means an abstraction of the behavior must be created. It puts a lot of people off when you tell them they have to do something extra to get done with something sooner. Guess what, it couldn’t be more true. If you want to keep dancing around like the elephant isn’t there, then enjoy your day. If you want to see about clearing the pachyderm from the dance floor, you’ll want to read on a little more….

Figure 1: The elephant in the room: who’s going to create the model?

 Whose job is it?

Modeling analog/mixed-signal behavior for use in SoC verification seems like the ultimate hot potato. The analog team that creates the IP blocks says it doesn't have the expertise in digital verification to create a high-performance model. The digital designers say they don’t understand anything but ones and zeroes. The verification team, usually digitally-centric by background, is stuck in the middle (and has historically said “I just use the collateral from the design teams to do my job; I don’t create it”).

If there is an SoC verification team, then ensuring that the entire chip is verified ultimately rests upon their shoulders, whether or not they get all of the models they need from the various design teams for the project. That means that if a chip does not work because of a modeling error, it ought to point back to the verification team. If not, is it just a “systemic error” not accounted for in the methodology? That seems like a bad answer.

That all makes the most valuable guy in the room the engineer, whose knowledge spans the three worlds of analog, digital, and verification. There are a growing number of “mixed-signal verification engineers” found on SoC verification teams. Having a specialist appears to be the best approach to getting the job done, and done right.

So, my vote is for the verification team to step up and incorporate the expertise required to do a complete job of SoC verification, analog included. (I know my popularity probably did not soar with the attendees of DVCON with that statement, but the job has to get done).

It’s a game of trade-offs

The difference in computations required for continuous time versus discrete time behavior is orders of magnitude (as seen in Figure 2 below). The essential detail versus runtime tradeoff is a key enabler of verification techniques like software-driven testbenches. Abstraction is a lossy process, so care must be taken to fully understand the loss and test those elements in the appropriate domain (continuous time, frequency, etc.).

Figure 2: Modeling is required for performance

 

AFE for instance

The traditional separation of baseband and analog front-end (AFE) chips has been shifting over the past several years. Advances in process technology, analog-to-digital converters, and the desire for cost reduction have driven both a re-architecting and re-partitioning of the long-standing baseband/AFE solution. By moving more digital processing to the AFE, lower-cost architectures can be created, and those 130 or so PCB traces between the chips can be reduced.

There is lots of good scholarly work from a few years back on this subject, such as Digital Compensation of Dynamic Acquisition Errors at the Front-End of ADCs and Digital Compensation for Analog Front-Ends: A New Approach to Wireless Transceiver Design.


Figure 3: AFE evolution from first reference (Parastoo)

The digital calibration and compensation can be achieved by the introduction of a programmable solution. This is in fact the most popular approach amongst the mobile crowd today. By using a microcontroller, the software algorithms become adaptable to process-related issues and modifications to protocol standards.

However, for the SoC verification team, their job just got a whole lot harder. To determine if the interplay of the digital control and the analog function is working correctly, the software algorithms must be simulated on the combination of the two. That is, here is a classic case of inseparable mixed-signal verification.

So, what needs to be in the model is the big question. And the answer is, a lot. For this example, the main sources of dynamic error at the front-end of the ADC must be captured, because the nonlinear digital filtering that corrects them is highly frequency dependent. The correction scheme must be verified to show that the nonlinearities are cancelled across the entire bandwidth of the ADC.
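As a toy illustration of the kind of behavioural model and digital correction being discussed (a static third-order nonlinearity only; real front-ends add frequency-dependent dynamic errors, and the coefficient here is an assumed value):

```python
import numpy as np

# Discrete-time behavioural sketch of an ADC front-end with a static
# third-order nonlinearity, plus a digital post-correction stage.
a3 = 0.05                      # third-order error coefficient (assumed known)

def adc(v):
    return v + a3 * v**3       # analog transfer with nonlinearity

def digital_correction(d):
    return d - a3 * d**3       # first-order inverse of the nonlinearity

t = np.linspace(0, 1e-3, 10_000)
v = 0.9 * np.sin(2 * np.pi * 10e3 * t)       # 10 kHz test tone
raw = adc(v)
fixed = digital_correction(raw)

# Residual error after correction should be far smaller than before
print(np.max(np.abs(raw - v)), np.max(np.abs(fixed - v)))
```

Verifying this kind of loop across the full bandwidth, with the correction running as software on the digital side, is exactly the inseparable mixed-signal problem described above; a static sine test like this one is only the first, easiest check.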

This all means lots of simulation. It means that the right level of detail must be retained to ensure the integrity of the verification process. This means that domain experience must be added to the list of expertise of that mixed-signal verification engineer.

Back to the pachyderm

There is a lot more to say on this subject, and lots will be said in future posts. The important starting point is the recognition that the potential flaw in the system needs to be examined. It needs to be examined by a specialist.  Maybe a second opinion from the application domain is needed too.

So, put that cute little elephant on your desk as a reminder that the beast can be tamed.

 

 

Steve Carlson

Related stories

It’s Late, But the Party is Just Getting Started





models

10 Options and 5 Case Studies Show How to Reform Utility Business Models

Experts from Rocky Mountain Institute, the Advanced Energy Economy Institute and America’s Power Plan have released a new report that shows why new utility business models are key to the energy transition.




models

Engineering Possibilities Versus Practical Implementation: Utility Portfolios and Business Models

Europe’s utilities are re-evaluating their business models due to the energy transition. Members of POWER-GEN Europe’s Advisory Board consider how a reliance on fossil fuels is no longer politically desirable, forcing utilities to transform their portfolios to adapt to radical change.




models

FCA consultation paper on discretionary commission models and commission disclosure

1. FCA Final Report on Motor Finance: In March 2019, the FCA published its Final Report on motor finance. A briefing note on the Final Report is also available.



models

Ghana’s Abedi, Nigeria’s Okocha and Liberia’s Weah picked as ultimate role models

The African football legends provided inspiration for many during their playing days.




models

Pharmacologic Inhibitor of DNA-PK, M3814, Potentiates Radiotherapy and Regresses Human Tumors in Mouse Models

Physical and chemical DNA-damaging agents are used widely in the treatment of cancer. Double-strand break (DSB) lesions in DNA are the most deleterious form of damage and, if left unrepaired, can effectively kill cancer cells. DNA-dependent protein kinase (DNA-PK) is a critical component of nonhomologous end joining (NHEJ), one of the two major pathways for DSB repair. Although DNA-PK has been considered an attractive target for cancer therapy, the development of pharmacologic DNA-PK inhibitors for clinical use has been lagging. Here, we report the discovery and characterization of a potent, selective, and orally bioavailable DNA-PK inhibitor, M3814 (peposertib), and provide in vivo proof of principle for DNA-PK inhibition as a novel approach to combination radiotherapy. M3814 potently inhibits DNA-PK catalytic activity and sensitizes multiple cancer cell lines to ionizing radiation (IR) and DSB-inducing agents. Inhibition of DNA-PK autophosphorylation in cancer cells or xenograft tumors led to an increased number of persistent DSBs. Oral administration of M3814 to two xenograft models of human cancer, using a clinically established 6-week fractionated radiation schedule, strongly potentiated the antitumor activity of IR and led to complete tumor regression at nontoxic doses. Our results strongly support DNA-PK inhibition as a novel approach for the combination radiotherapy of cancer. M3814 is currently under investigation in combination with radiotherapy in clinical trials.




models

Fast Algorithms for Conducting Large-Scale GWAS of Age-at-Onset Traits Using Cox Mixed-Effects Models [Statistical Genetics and Genomics]

Age-at-onset is one of the critical traits in cohort studies of age-related diseases. Large-scale genome-wide association studies (GWAS) of age-at-onset traits can provide more insights into genetic effects on disease progression and transitions between stages. Moreover, proportional hazards (or Cox) regression models can achieve higher statistical power in a cohort study than a case-control trait using logistic regression. Although mixed-effects models are widely used in GWAS to correct for sample dependence, application of Cox mixed-effects models (CMEMs) to large-scale GWAS is so far hindered by intractable computational cost. In this work, we propose COXMEG, an efficient R package for conducting GWAS of age-at-onset traits using CMEMs. COXMEG introduces fast estimation algorithms for general sparse relatedness matrices including, but not limited to, block-diagonal pedigree-based matrices. COXMEG also introduces a fast and powerful score test for dense relatedness matrices, accounting for both population stratification and family structure. In addition, COXMEG generalizes existing algorithms to support positive semidefinite relatedness matrices, which are common in twin and family studies. Our simulation studies suggest that COXMEG, depending on the structure of the relatedness matrix, is orders of magnitude computationally more efficient than coxme and coxph with frailty for GWAS. We found that using sparse approximation of relatedness matrices yielded highly comparable results in controlling false-positive rate and retaining statistical power for an ethnically homogeneous family-based sample. 
By applying COXMEG to a study of Alzheimer’s disease (AD) with a Late-Onset Alzheimer’s Disease Family Study from the National Institute on Aging sample comprising 3456 non-Hispanic whites and 287 African Americans, we identified the APOE ε4 variant with strong statistical power (P = 1e–101), far more significant than that reported in a previous study using a transformed variable and a marginal Cox model. Furthermore, we identified novel SNP rs36051450 (P = 2e–9) near GRAMD1B, the minor allele of which significantly reduced the hazards of AD in both genders. These results demonstrated that COXMEG greatly facilitates the application of CMEMs in GWAS of age-at-onset traits.
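For intuition, the marginal building block of such analyses is the Cox score (log-rank-type) test for a single SNP at beta = 0; the sketch below uses synthetic data and a plain Cox model with no random effect, so the relatedness-matrix machinery that COXMEG actually contributes is deliberately omitted:

```python
import numpy as np

rng = np.random.default_rng(6)

# Score test for one SNP in a plain Cox model, evaluated at beta = 0.
n = 400
g = rng.binomial(2, 0.3, n).astype(float)        # genotype dosage 0/1/2
time = rng.exponential(1 / np.exp(0.5 * g))      # earlier onset for carriers
event = np.ones(n, dtype=bool)                   # no censoring, for simplicity

order = np.argsort(time)
g_ord = g[order]
U, V = 0.0, 0.0
for i in range(n):                               # walk through event times
    risk = g_ord[i:]                             # risk set at the i-th event
    U += g_ord[i] - risk.mean()                  # observed minus expected
    V += risk.var()                              # risk-set variance
chi2 = U**2 / V                                  # ~ chi-square(1) under H0
print(chi2)                                      # large value => association
```

Doing this genome-wide while adding a Gaussian random effect whose covariance is the (possibly sparse or semidefinite) relatedness matrix is what makes the problem computationally hard, and what COXMEG's algorithms address.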




models

Complement Deficiencies Result in Surrogate Pathways of Complement Activation in Novel Polygenic Lupus-like Models of Kidney Injury [AUTOIMMUNITY]

Key Points

  • Novel TM lupus mouse strains develop spontaneous nephritis.

  • In C1q deficiency, kidney complement activation likely occurred via the LP.

  • In C3 deficiency, the coagulation cascade contributed to kidney complement activation.




    models

    Transitioning from Basic toward Systems Pharmacodynamic Models: Lessons from Corticosteroids [Review Articles]

    Technologies in bioanalysis, -omics, and computation have evolved over the past half century to allow for comprehensive assessments of the molecular to whole body pharmacology of diverse corticosteroids. Such studies have advanced pharmacokinetic and pharmacodynamic (PK/PD) concepts and models that often generalize across various classes of drugs. These models encompass the "pillars" of pharmacology, namely PK and target drug exposure, the mass-law interactions of drugs with receptors/targets, and the consequent turnover and homeostatic control of genes, biomarkers, physiologic responses, and disease symptoms. Pharmacokinetic methodology utilizes noncompartmental, compartmental, reversible, physiologic [full physiologically based pharmacokinetic (PBPK) and minimal PBPK], and target-mediated drug disposition models using a growing array of pharmacometric considerations and software. Basic PK/PD models have emerged (simple direct, biophase, slow receptor binding, indirect response, irreversible, turnover with inactivation, and transduction models) that place emphasis on parsimony, are mechanistic in nature, and serve as highly useful "top-down" methods of quantitating the actions of diverse drugs. These are often components of more complex quantitative systems pharmacology (QSP) models that explain the array of responses to various drugs, including corticosteroids. Progressively deeper mechanistic appreciation of PBPK, drug-target interactions, and systems physiology from the molecular (genomic, proteomic, metabolomic) to cellular to whole body levels provides the foundation for enhanced PK/PD to comprehensive QSP models. Our research, based on cell, animal, clinical, and theoretical studies with corticosteroids, has provided ideas and quantitative methods that have broadly advanced the fields of PK/PD and QSP modeling and illustrates the transition toward a global, systems understanding of the actions of diverse drugs.

    Significance Statement

    Over the past half century, pharmacokinetics (PK) and pharmacokinetics/pharmacodynamics (PK/PD) have evolved to provide an array of mechanism-based models that help quantitate the disposition and actions of most drugs. We describe how many basic PK and PK/PD model components were identified and often applied to the diverse properties of corticosteroids (CS). The CS have complications in disposition and a wide array of simple receptor- to complex gene-mediated actions in multiple organs. Continued assessments of such complexities have offered opportunities to develop models ranging from simple PK to enhanced PK/PD to quantitative systems pharmacology (QSP) that help explain therapeutic and adverse CS effects. Concurrent development of state-of-the-art PK, PK/PD, and QSP models is described alongside experimental studies that revealed diverse CS actions.




    models

    Correction: EGFR Exon 20 Insertion Mutations Display Sensitivity to Hsp90 Inhibition in Preclinical Models and Lung Adenocarcinomas




    models

    Tissue Distribution of Doxycycline in Animal Models of Tuberculosis [Pharmacology]

    Doxycycline, an FDA-approved tetracycline, is used in tuberculosis in vivo models for the temporal control of mycobacterial gene expression. In these models, animals are infected with recombinant Mycobacterium tuberculosis carrying genes of interest under transcriptional control of the doxycycline-responsive TetR-tetO unit. To minimize fluctuations of plasma levels, doxycycline is usually administered in the diet. However, tissue penetration studies to identify the minimum doxycycline content in food achieving complete repression of TetR-controlled genes in tuberculosis (TB)-infected organs and lesions have not been conducted. Here, we first determined the tetracycline concentrations required to achieve silencing of M. tuberculosis target genes in vitro. Next, we measured doxycycline concentrations in plasma, major organs, and lung lesions in TB-infected mice and rabbits and compared these values to silencing concentrations measured in vitro. We found that 2,000 ppm doxycycline supplemented in mouse and rabbit feed is sufficient to reach target concentrations in TB lesions. In rabbit chow, the calcium content had to be reduced 5-fold to minimize chelation of doxycycline and deliver adequate oral bioavailability. Clearance kinetics from major organs and lung lesions revealed that doxycycline levels fall below concentrations that repress tet promoters within 7 to 14 days after doxycycline is removed from the diet. In summary, we have shown that 2,000 ppm doxycycline supplemented in standard mouse diet and in low-calcium rabbit diet delivers concentrations adequate to achieve full repression of tet promoters in infected tissues of mice and rabbits.




    models

    Assessing Animal Models of Bacterial Pneumonia Used in Investigational New Drug Applications for the Treatment of Bacterial Pneumonia [Experimental Therapeutics]

    Animal models of bacterial infection have been widely used to explore the in vivo activity of antibacterial drugs. These data are often submitted to the U.S. Food and Drug Administration to support human use in an investigational new drug application (IND). To better understand the range and scientific use of animal models in regulatory submissions, a database was created surveying recent pneumonia models submitted as part of IND application packages. The IND studies were compared to animal models of bacterial pneumonia published in the scientific literature over the same period. In this review, we analyze key experimental design elements such as animal species, immune status, pathogen selection, route of administration, and study endpoints.




    models

    [TECHNIQUE] Animal Models of Hepatitis C Virus Infection

    Hepatitis C virus (HCV) is an important and underreported infectious disease, causing chronic infection in ~71 million people worldwide. The limited host range of HCV, which robustly infects only humans and chimpanzees, has made studying this virus in vivo challenging and hampered the development of a desperately needed vaccine. The restrictions and ethical concerns surrounding biomedical research in chimpanzees have made the search for an animal model all the more important. In this review, we discuss different approaches that are being pursued toward creating small animal models for HCV infection. Although efforts to use a nonhuman primate species besides chimpanzees have proven challenging, important advances have been achieved in a variety of humanized mouse models. However, such models still fall short of the overarching goal of an immunocompetent, heritably susceptible in vivo platform in which the immunopathology of HCV could be studied and putative vaccines developed. Alternatives to overcome this include virus adaptation, such as murine-tropic HCV strains, or the use of related hepaciviruses, of which many have been recently identified. Of the latter, the rodent/rat hepacivirus from Rattus norvegicus species-1 (RHV-rn1) holds promise as a surrogate virus in fully immunocompetent rats that can inform our understanding of the interaction between the immune response and viral outcomes (i.e., clearance vs. persistence). However, further characterization of these animal models is necessary before their use for gaining new insights into the immunopathogenesis of HCV and for conceptualizing HCV vaccines.




    models

    Andrey Markov & Claude Shannon Counted Letters to Build the First Language-Generation Models

    Shannon’s model said: “OCRO HLI RGWR NMIELWIS”
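    The letter-counting idea behind Shannon’s experiment can be sketched in a few lines: tally how often each character follows each other character in a corpus, then sample successive letters in proportion to those counts. The toy corpus below is an invented stand-in, not Shannon’s data.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each character, how often each next character follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, n, rng):
    """Sample up to n characters, each drawn in proportion to bigram counts."""
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:           # dead end: character never seen mid-text
            break
        chars, weights = zip(*followers.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

rng = random.Random(0)
model = train_bigrams("the theory of the thing then there")
print(generate(model, "t", 20, rng))
```

    The output is gibberish, but gibberish with English-like letter statistics, which was exactly Shannon’s point.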




    models

    Which COVID-19 models should we use to make policy decisions?

    A new process to harness multiple disease models for outbreak management has been developed by an international team of researchers. The team will immediately implement the process to help inform policy decisions for the COVID-19 outbreak.




    models

    Adult live-streaming site CAM4 exposes millions of models' personal information

    First and last names, email addresses, gender and sexual orientation, and credit card information of models and users were left on an insecure server




    models

    IMD to use dynamic models for forecasts

    The IMD will continue to use statistical models for the monsoon forecast, but the ministry of earth sciences is placing more emphasis on dynamic models.




    models

    What’s Missing in Pandemic Models - Issue 84: Outbreak


    In the COVID-19 pandemic, numerous models are being used to predict the future. But as helpful as they are, they cannot make sense of themselves. They rely on epidemiologists and other modelers to interpret them. Trouble is, making predictions in a pandemic is also a philosophical exercise. We need to think about hypothetical worlds, causation, evidence, and the relationship between models and reality.1,2

    The value of philosophy in this crisis is that although the pandemic is unique, many of the challenges of prediction, evidence, and modeling are general problems. Philosophers like myself are trained to see the most general contours of problems—the view from the clouds. They can help interpret scientific results and claims and offer clarity in times of uncertainty, bringing their insights down to Earth. When it comes to predicting in an outbreak, building a model is only half the battle. The other half is making sense of what it shows, what it leaves out, and what else we need to know to predict the future of COVID-19.

    Prediction is about forecasting the future, or, when comparing scenarios, projecting several hypothetical futures. Because epidemiology informs public health directives, predicting is central to the field. Epidemiologists compare hypothetical worlds to help governments decide whether to implement lockdowns and social distancing measures—and when to lift them. To make this comparison, they use models to predict the evolution of the outbreak under various simulated scenarios. However, some of these simulated worlds may turn out to misrepresent the real world, and then our prediction might be off.

    In his book Philosophy of Epidemiology, Alex Broadbent, a philosopher at the University of Johannesburg, argues that good epidemiological prediction requires asking, “What could possibly go wrong?” He elaborated in an interview with Nautilus, “To predict well is to be able to explain why what you predict will happen rather than the most likely hypothetical alternatives. You consider the way the world would have to be for your prediction to be true, then consider worlds in which the prediction is false.” By ruling out hypothetical worlds in which they are wrong, epidemiologists can increase their confidence that they are right. For instance, by using antibody tests to estimate previous infections in the population, public health authorities could rule out the hypothetical possibility (modeled by a team at Oxford) that the coronavirus has circulated much more widely than we think.3


    Broadbent is concerned that governments across Africa are not thinking carefully enough about what could possibly go wrong, having for the most part implemented coronavirus policies in line with the rest of the world. He believes a one-size-fits-all approach to the pandemic could prove fatal.4 The same interventions that might have worked elsewhere could have very different effects in the African context. For instance, the economic impacts of social distancing policies on all-cause mortality might be worse because so many people on the continent suffer increased food insecurity and malnutrition in an economic downturn.5 Epidemic models only represent the spread of the infection. They leave out important elements of the social world.

    Another limitation of epidemic models is that they model the effect of behaviors on the spread of infection, but not the effect of a public health policy on behaviors. The latter requires understanding how a policy works. Nancy Cartwright, a philosopher at Durham University and the University of California, San Diego, suggests that “the road from ‘It works somewhere’ to ‘It will work for us’ is often long and tortuous.”6 The kinds of causal principles that make policies effective, she says, “are both local and fragile.” Principles can break in transit from one place to the other. Take the principle, “Stay-at-home policies reduce the number of social interactions.” This might be true in Wuhan, China, but might not be true in a South African township in which the policies are infeasible or in which homes are crowded. Simple extrapolation from one context to another is risky. A pandemic is global, but prediction should be local.

    Predictions require assumptions that in turn require evidence. Cartwright and Jeremy Hardie, an economist and research associate at the Center for Philosophy of Natural and Social Science at the London School of Economics, represent evidence-based policy predictions using a pyramid, where each assumption is a building block.7 If evidence for any assumption is missing, the pyramid might topple. I have represented evidence-based medicine predictions using a chain of inferences, where each link in the chain is made of an alloy containing assumptions.8 If any assumption comes apart, the chain might break.

    An assumption can involve, for example, the various factors supporting an intervention. Cartwright writes that “policy variables are rarely sufficient to produce a contribution [to some outcome]; they need an appropriate support team if they are to act at all.” A policy is only one slice of a complete causal pie.9 Take age, an important support factor in causal principles of social distancing. If social distancing prevents deaths primarily by preventing infections among older individuals, wherever there are fewer older individuals there may be fewer deaths to prevent—and social distancing will be less effective. This matters because South Africa and other African countries have younger populations than do Italy or China.10

    The lesson that assumptions need evidence can sound obvious, but it is especially important to bear in mind when modeling. Most epidemic modeling makes assumptions about the reproductive number, the size of the susceptible population, and the infection-fatality ratio, among other parameters. The evidence for these assumptions comes from data that, in a pandemic, is often rough, especially in early days. It has been argued that nonrepresentative diagnostic testing early in the COVID-19 pandemic led to unreliable estimates of important inputs in our epidemic modeling.11
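    To see how much rides on a single such parameter, consider the textbook herd-immunity threshold implied by an assumed basic reproductive number R0: once a fraction 1 − 1/R0 of the population is immune, each infection causes on average less than one new case. This is a standard epidemiology relation, included here as illustration rather than anything from the article.

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune before each
    infection causes, on average, fewer than one new infection."""
    if r0 <= 1:
        return 0.0  # an outbreak with R0 <= 1 dies out on its own
    return 1.0 - 1.0 / r0

# Modest changes in the assumed R0 move the threshold substantially:
for r0 in (1.5, 2.5, 3.5):
    print(r0, round(herd_immunity_threshold(r0), 3))
```

    An estimate of R0 of 1.5 versus 3.5 implies a threshold of roughly a third of the population versus nearly three quarters, which is why unreliable early inputs matter so much.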

    Epidemic models also don’t model all the influences of the pathogen and of our policy interventions on health and survival. For example, what matters most when comparing deaths among hypothetical worlds is how different the death toll is overall, not just the difference in deaths due to the direct physiological effects of a virus. The new coronavirus can overwhelm health systems and consume health resources needed to save non-COVID-19 patients if left unchecked. On the other hand, our policies have independent effects on financial welfare and access to regular healthcare that might in turn influence survival.

    A surprising difficulty with predicting in a pandemic is that the same pathogen can behave differently in different settings. Infection fatality ratios and outbreak dynamics are not intrinsic properties of a pathogen; these things emerge from the three-way interaction among pathogen, population, and place. Understanding more about each point in this triangle can help in predicting the local trajectory of an outbreak.

    In April, an influential data-driven model, developed by the Institute for Health Metrics and Evaluation (IHME) at the University of Washington, which uses a curve-fitting approach, came under criticism for its volatile projections and questionable assumption that the trajectory of COVID-19 deaths in American states can be extrapolated from curves in other countries.12,13 In a curve-fitting approach, the infection curve representing a local outbreak is extrapolated from data collected locally along with data regarding the trajectory of the outbreak elsewhere. The curve is drawn to fit the data. However, the true trajectory of the local outbreak, including the number of infections and deaths, depends upon characteristics of the local population as well as policies and behaviors adopted locally, not just upon the virus.


    Many of the other epidemic models in the coronavirus pandemic are SIR-type models, a more traditional modeling approach for infectious-disease epidemiology. SIR-type models represent the dynamics of an outbreak, the transition of individuals in the population from a state of being susceptible to infection (S) to one of being infectious to others (I) and, finally, recovered from infection (R). These models simulate the real world. In contrast to the data-driven approach, SIR models are more theory-driven. The theory that underwrites them includes the mathematical theory of outbreaks developed in the 1920s and 1930s, and the qualitative germ theory pioneered in the 1800s. Epidemiologic theories impart SIR-type models with the know-how to make good predictions in different contexts.

    For instance, they represent transmission of the virus as a function of patterns of social contact as well as viral transmissibility, which depend on local behaviors and local infection control measures, respectively. The drawback of these more theoretical models is that, without good data to support their assumptions, they might misrepresent reality and make unreliable projections for the future.
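    The S-to-I-to-R dynamics described above can be sketched with a short discrete-time simulation. The parameter values below are illustrative assumptions, not estimates for any real outbreak.

```python
def simulate_sir(s, i, r, beta, gamma, days, dt=0.1):
    """Euler integration of the classic SIR equations:
       dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I
       with S, I, R as population fractions (S + I + R = 1)."""
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # susceptible -> infectious
        new_rec = gamma * i * dt      # infectious -> recovered
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# Illustrative parameters: transmission rate 0.3/day, 10-day infectious period
s, i, r = simulate_sir(0.999, 0.001, 0.0, beta=0.3, gamma=0.1, days=200)
print(round(s, 3), round(i, 3), round(r, 3))
```

    Even this toy version shows the theory-driven character of SIR models: every trajectory it produces is only as good as the assumed beta and gamma.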

    One reason why the dynamics of an outbreak are often more complicated than a traditional model can predict, or an infectious-disease epidemiology theory can explain, is that the dynamics of an outbreak result from human behavior and not just human biology. Yet more sophisticated disease-behavior models can represent the behavioral dynamics of an outbreak by modeling the spread of opinions or the choices individuals make.14,15 Individual behaviors are influenced by the trajectory of the epidemic, which is in turn influenced by individual behaviors.

    “There are important feedback loops that are readily represented by disease-behavior models,” Bert Baumgartner, a philosopher who has helped develop some of these models, explains. “As a very simple example, people may start to socially distance as disease spreads, then as disease consequently declines people may stop social distancing, which leads to the disease increasing again.” These looping effects of disease-behavior models are yet another challenge to predicting.
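    One way to sketch the feedback loop Baumgartner describes is to let the effective contact rate fall as prevalence rises, so that distancing intensifies during a wave and relaxes as it subsides. All parameters here are illustrative assumptions, not any published disease-behavior model.

```python
def simulate_sir_behavior(s, i, r, beta0, gamma, k, days, dt=0.1):
    """SIR dynamics with a behavioral feedback: the effective
    transmission rate beta(t) = beta0 / (1 + k * I) drops as
    prevalence I rises, and rebounds as infections decline."""
    for _ in range(int(days / dt)):
        beta = beta0 / (1.0 + k * i)   # behavioral response to prevalence
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# A stronger behavioral response (larger k) leaves more people uninfected.
for k in (0.0, 50.0):
    s, i, r = simulate_sir_behavior(0.999, 0.001, 0.0,
                                    beta0=0.3, gamma=0.1, k=k, days=400)
    print(k, round(r, 3))
```

    The feedback stretches the outbreak out in time and reduces the cumulative toll, which is precisely the looping behavior that makes prediction harder.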

    It is a highly complex and daunting challenge we face. That’s nothing unusual for doctors and public health experts, who are used to grappling with uncertainty. I remember what that uncertainty felt like when I was training in medicine. It can be discomforting, especially when confronted with a deadly disease. However, uncertainty need not be paralyzing. By spotting the gaps in our models and understanding, we can often narrow those gaps or at least navigate around them. Doing so requires clarifying and questioning our ideas and assumptions. In other words, we must think like a philosopher.

    Jonathan Fuller is an assistant professor in the Department of History and Philosophy of Science at the University of Pittsburgh. He draws on his dual training in philosophy and in medicine to answer fundamental questions about the nature of contemporary disease, evidence, and reasoning in healthcare, and theory and methods in epidemiology and medical science.

    References

    1. Walker, P., et al. The global impact of COVID-19 and strategies for mitigation and suppression. Imperial College London (2020).

    2. Flaxman, S., et al. Estimating the number of infections and the impact of non-pharmaceutical interventions on COVID-19 in 11 European countries. Imperial College London (2020).

    3. Lourenco, J., et al. Fundamental principles of epidemic spread highlight the immediate need for large-scale serological surveys to assess the stage of the SARS-CoV-2 epidemic. medRxiv:10.1101/2020.03.24.20042291 (2020).

    4. Broadbent, A., & Smart, B. Why a one-size-fits-all approach to COVID-19 could have lethal consequences. TheConversation.com (2020).

    5. United Nations. Global recession increases malnutrition for the most vulnerable people in developing countries. United Nations Standing Committee on Nutrition (2009).

    6. Cartwright, N. Will this policy work for you? Predicting effectiveness better: How philosophy helps. Philosophy of Science 79, 973-989 (2012).

    7. Cartwright, N. & Hardie, J. Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford University Press, New York, New York (2012).

    8. Fuller, J., & Flores, L. The Risk GP Model: The standard model of prediction in medicine. Studies in History and Philosophy of Biological and Biomedical Sciences 54, 49-61 (2015).

    9. Rothman, K., & Greenland, S. Causation and causal inference in epidemiology. American Journal of Public Health 95, S144-S150 (2005).

    10. Dowd, J. et al. Demographic science aids in understanding the spread and fatality rates of COVID-19. Proceedings of the National Academy of Sciences 117, 9696-9698 (2020).

    11. Ioannidis, J. Coronavirus disease 2019: The harms of exaggerated information and non‐evidence‐based measures. European Journal of Clinical Investigation 50, e13222 (2020).

    12. COVID-19 Projections. Healthdata.org. https://covid19.healthdata.org/united-states-of-america.

    13. Jewell, N., et al. Caution warranted: Using the Institute for Health Metrics and Evaluation model for predicting the course of the COVID-19 pandemic. Annals of Internal Medicine (2020).

    14. Nardin, L., et al. Planning horizon affects prophylactic decision-making and epidemic dynamics. PeerJ 4:e2678 (2016).

    15. Tyson, R., et al. The timing and nature of behavioural responses affect the course of an epidemic. Bulletin of Mathematical Biology 82, 14 (2020).







    models

    Google Stadia will support “a variety of business models”

    But the streaming gaming revolution "is not going to happen overnight."




    models

    Fee-free models could help women's football flourish in Australia

    With many families getting priced out of junior football, one club in Perth is waiving fees, and it could signal a new direction for grassroots sport in Australia, writes Samantha Lewis.




    models

    Over 60,000 lives claimed by COVID-19 in U.S. — a tally some models predicted for late summer

    New York sees a dip in deaths, and Louisiana governor meets Trump, as each state in the union thinks about how to move forward amid coronavirus.