predict 64Cu-DOTATATE PET/CT and prediction of overall and progression-free survival in patients with neuroendocrine neoplasms By jnm.snmjournals.org Published On :: 2020-02-28T13:52:17-08:00 Overexpression of somatostatin receptors in patients with neuroendocrine neoplasms (NEN) is utilized for both diagnosis and treatment. Receptor density may reflect tumor differentiation and thus be associated with prognosis. Non-invasive visualization and quantification of somatostatin receptor density are possible with somatostatin receptor imaging (SRI) using positron emission tomography (PET). Recently, we introduced 64Cu-DOTATATE for SRI, and we hypothesized that uptake of this tracer could be associated with overall survival (OS) and progression-free survival (PFS). Methods: We evaluated patients with NEN who had 64Cu-DOTATATE PET/CT SRI performed in two prospective studies. Tracer uptake was determined as the maximal standardized uptake value (SUVmax) for each patient. Kaplan-Meier analysis with log-rank testing was used to determine the predictive value of 64Cu-DOTATATE SUVmax for OS and PFS. Specificity, sensitivity, and accuracy were calculated for prediction of outcome at 24 months after 64Cu-DOTATATE PET/CT. Results: A total of 128 patients with NEN were included and followed for a median of 73 (range, 1-112) months. During follow-up, 112 patients experienced disease progression and 69 patients died. The optimal cutoff for 64Cu-DOTATATE SUVmax was 43.3 for prediction of PFS, with a hazard ratio of 0.56 (95% CI: 0.38-0.84) for patients with SUVmax > 43.3. However, no significant cutoff was found for prediction of OS. In multiple Cox regression adjusted for age, sex, primary tumor site, and tumor grade, the SUVmax cutoff hazard ratio was 0.50 (0.32-0.77) for PFS. The accuracy for predicting PFS at 24 months after 64Cu-DOTATATE PET/CT was moderate (57%). 
Conclusion: In this first study to report the association of 64Cu-DOTATATE PET/CT with outcome in patients with NEN, tumor somatostatin receptor density visualized with 64Cu-DOTATATE PET/CT was prognostic for PFS but not OS. However, the accuracy of PFS prediction at 24 months after 64Cu-DOTATATE PET/CT SRI was moderate, limiting its value on an individual-patient basis. Full Article
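The Kaplan-Meier comparison described above can be sketched with a hand-rolled estimator. The follow-up times below are invented for illustration, not the study's data; the two arrays merely mimic patients above and below the SUVmax cutoff.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate: S(t) steps down by (1 - 1/n_at_risk) at each event."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_at_risk = len(times)
    surv, s = [], 1.0
    for e in events:
        if e:                      # progression/death observed
            s *= 1.0 - 1.0 / n_at_risk
        n_at_risk -= 1             # censored cases only shrink the risk set
        surv.append(s)
    return times, np.array(surv)

def median_time(times, surv):
    """First time at which the survival curve drops to 0.5 or below."""
    return times[np.argmax(surv <= 0.5 + 1e-9)]

# Invented PFS times (months), all events observed, split by the SUVmax cutoff
high = np.array([12.0, 30.0, 44.0, 58.0, 71.0, 85.0, 96.0, 104.0])  # SUVmax > 43.3
low = np.array([3.0, 9.0, 15.0, 22.0, 28.0, 35.0, 47.0, 60.0])      # SUVmax <= 43.3
t_hi, s_hi = kaplan_meier(high, np.ones(8, dtype=bool))
t_lo, s_lo = kaplan_meier(low, np.ones(8, dtype=bool))
print(median_time(t_hi, s_hi), median_time(t_lo, s_lo))  # 58.0 22.0
```

A log-rank test and Cox adjustment, as used in the study, would then quantify whether the separation between the two curves is significant.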
predict Factors predicting metastatic disease in 68Ga-PSMA-11 PET positive osseous lesions in prostate cancer By jnm.snmjournals.org Published On :: 2020-04-17T08:32:41-07:00 Bone is the most common site of distant metastatic spread in prostate adenocarcinoma. Prostate-specific membrane antigen uptake has been described in both benign and malignant bone lesions, which can lead to false-positive findings on 68Ga-prostate-specific membrane antigen-11 positron emission tomography (68Ga-PSMA-11 PET). The purpose of this study was to evaluate the diagnostic accuracy of 68Ga-PSMA-11 PET for osseous prostate cancer metastases and improve bone uptake interpretation using semi-quantitative metrics. METHODS: Fifty-six prostate cancer patients (18 pre-prostatectomy, 38 biochemical recurrence) who underwent 68Ga-PSMA-11 PET/MRI or PET/CT examinations with osseous PSMA-ligand uptake were included in the study. Medical records were reviewed retrospectively by board-certified nuclear radiologists to determine true or false positivity based on a composite endpoint. For each avid osseous lesion, biological volume, size, PSMA-RADS rating, maximum standardized uptake value (SUVmax), and ratio of lesion SUVmax to liver, blood pool, and background bone SUVmax were measured. Differences between benign and malignant lesions were evaluated for statistical significance, and cutoff values for these parameters were determined to maximize diagnostic accuracy. RESULTS: Among 56 participants, 13 patients (23.2%) had false-positive osseous 68Ga-PSMA-11 findings and 43 patients (76.8%) had true-positive osseous 68Ga-PSMA-11 findings. Twenty-two patients (39%) had 1 osseous lesion, 18 (32%) had 2-4 lesions, and 16 (29%) had 5 or more lesions. Cutoff values resulting in statistically significant (p<0.005) differences between benign and malignant lesions were: PSMA-RADS ≥4, SUVmax ≥4.1, SUVmax ratio of lesion to blood pool ≥2.11, to liver ≥0.55, and to bone ≥4.4. 
These measurements corresponded to lesion-based 68Ga-PSMA-11 PET detection rates for malignancy of 80%, 93%, 89%, 21%, and 89%, and specificities of 73%, 73%, 73%, 93%, and 60%, respectively. CONCLUSION: PSMA-RADS rating, SUVmax, and the SUVmax ratio of lesion to blood pool can help differentiate benign from malignant lesions on 68Ga-PSMA-11 PET. An SUVmax ratio to blood pool above 2.2 is a reasonable parameter to support image interpretation; it showed a superior lesion detection rate and specificity compared with visual interpretation by PSMA-RADS. These parameters hold clinical value by improving diagnostic accuracy for metastatic prostate cancer on 68Ga-PSMA-11 PET/MRI and PET/CT. Full Article
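The cutoff selection described above (choosing thresholds that maximize diagnostic accuracy, then reporting sensitivity and specificity) can be sketched as a simple threshold scan. The SUVmax-ratio values and malignancy labels below are hypothetical.

```python
import numpy as np

def best_cutoff(scores, labels):
    """Scan candidate thresholds; keep the one maximizing overall accuracy."""
    best = (0.0, None, None, None)        # (accuracy, threshold, sens, spec)
    for thr in np.unique(scores):
        pred = scores >= thr
        tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
        acc = (tp + tn) / len(labels)
        if acc > best[0]:
            best = (acc, thr, tp / (tp + fn), tn / (tn + fp))
    return best

# Hypothetical lesion-to-blood-pool SUVmax ratios and malignancy labels
scores = np.array([0.8, 1.2, 1.5, 1.9, 2.3, 2.6, 3.1, 4.0, 4.4, 5.2])
labels = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])
acc, thr, sens, spec = best_cutoff(scores, labels)
print(acc, thr, sens, spec)  # 0.9 1.9 1.0 0.75
```

In practice one would also inspect the full ROC curve, since a cutoff maximizing accuracy may trade sensitivity against specificity differently than the clinical question requires.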
predict Mass Spectrometry Based Immunopeptidomics Leads to Robust Predictions of Phosphorylated HLA Class I Ligands [Technological Innovation and Resources] By feedproxy.google.com Published On :: 2020-02-01T00:05:30-08:00 The presentation of peptides on class I human leukocyte antigen (HLA-I) molecules plays a central role in immune recognition of infected or malignant cells. In cancer, non-self HLA-I ligands can arise from many different alterations, including non-synonymous mutations, gene fusion, cancer-specific alternative mRNA splicing or aberrant post-translational modifications. Identifying HLA-I ligands remains a challenging task that requires either heavy experimental work for in vivo identification or optimized bioinformatics tools for accurate predictions. To date, no HLA-I ligand predictor includes post-translational modifications. To fill this gap, we curated phosphorylated HLA-I ligands from several immunopeptidomics studies (including six newly measured samples) covering 72 HLA-I alleles and retrieved a total of 2,066 unique phosphorylated peptides. We then expanded our motif deconvolution tool to identify precise binding motifs of phosphorylated HLA-I ligands. Our results reveal a clear enrichment of phosphorylated peptides among HLA-C ligands and demonstrate a prevalent role of both HLA-I motifs and kinase motifs on the presentation of phosphorylated peptides. These data further enabled us to develop and validate the first predictor of interactions between HLA-I molecules and phosphorylated peptides. Full Article
predict Phosphotyrosine-based Phosphoproteomics for Target Identification and Drug Response Prediction in AML Cell Lines [Research] By feedproxy.google.com Published On :: 2020-05-01T00:05:26-07:00 Acute myeloid leukemia (AML) is a clonal disorder arising from hematopoietic myeloid progenitors. Aberrantly activated tyrosine kinases (TK) are involved in leukemogenesis and are associated with poor treatment outcome. Kinase inhibitor (KI) treatment has shown promise in improving patient outcome in AML. However, inhibitor selection for patients is suboptimal. In a preclinical effort to address KI selection, we analyzed a panel of 16 AML cell lines using phosphotyrosine (pY) enrichment-based, label-free phosphoproteomics. The Integrative Inferred Kinase Activity (INKA) algorithm was used to identify hyperphosphorylated, active kinases as candidates for KI treatment, and efficacy of selected KIs was tested. Heterogeneous signaling was observed with between 241 and 2764 phosphopeptides detected per cell line. Of 4853 identified phosphopeptides with 4229 phosphosites, 4459 phosphopeptides (4430 pY) were linked to 3605 class I sites (3525 pY). INKA analysis in single cell lines successfully pinpointed driver kinases (PDGFRA, JAK2, KIT and FLT3) corresponding with activating mutations present in these cell lines. Furthermore, potential receptor tyrosine kinase (RTK) drivers, undetected by standard molecular analyses, were identified in four cell lines (FGFR1 in KG-1 and KG-1a, PDGFRA in Kasumi-3, and FLT3 in MM6). These cell lines proved highly sensitive to specific KIs. Six AML cell lines without a clear RTK driver showed evidence of MAPK1/3 activation, indicative of the presence of activating upstream RAS mutations. Importantly, FLT3 phosphorylation was demonstrated in two clinical AML samples with a FLT3 internal tandem duplication (ITD) mutation. 
Our data show the potential of pY-phosphoproteomics and INKA analysis to provide insight into AML TK signaling and to identify hyperactive kinases as potential treatment targets in AML cell lines. These results warrant future investigation of clinical samples to further our understanding of TK phosphorylation in relation to clinical response in individual patients. Full Article
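The kinase-ranking idea behind INKA (aggregating phosphopeptide evidence per kinase to nominate a driver) can be sketched as follows. The intensities and kinase assignments are invented, and the real algorithm combines several kinome- and substrate-centric scores rather than a plain sum.

```python
# Hypothetical (kinase, phosphopeptide intensity) observations for one cell line
phosphopeptides = [
    ("FLT3", 8.2e6), ("FLT3", 5.1e6), ("JAK2", 2.3e6),
    ("KIT", 1.1e6), ("JAK2", 9.0e5), ("PDGFRA", 4.0e5),
]

# Aggregate evidence per kinase and rank; the top-ranked kinase is the
# candidate driver to pair with a kinase inhibitor
scores = {}
for kinase, intensity in phosphopeptides:
    scores[kinase] = scores.get(kinase, 0.0) + intensity
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['FLT3', 'JAK2', 'KIT', 'PDGFRA']
```

The value of such a ranking in the study was that it surfaced drivers (e.g. FGFR1 in KG-1) that standard molecular analyses had missed.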
predict Predictions and Policymaking: Complex Modelling Beyond COVID-19 By feedproxy.google.com Published On :: Wed, 01 Apr 2020 09:11:23 +0000 1 April 2020 Yasmin Afina, Research Assistant, International Security Programme; Calum Inverarity, Research Analyst and Coordinator, International Security Programme. The COVID-19 pandemic has highlighted the potential of complex systems modelling for policymaking, but it is crucial to also understand its limitations. Photo: A member of the media wearing a protective face mask works in Downing Street, where Britain's Prime Minister Boris Johnson is self-isolating, central London, 27 March 2020. Photo by TOLGA AKMEN/AFP via Getty Images.

Complex systems models have played a significant role in informing and shaping the public health measures adopted by governments in the context of the COVID-19 pandemic. For instance, modelling carried out by a team at Imperial College London is widely reported to have driven the approach in the UK from a strategy of mitigation to one of suppression. Complex systems modelling will increasingly feed into policymaking by predicting a range of potential correlations, results and outcomes based on a set of parameters, assumptions, data and pre-defined interactions. It is already instrumental in developing risk mitigation and resilience measures to address and prepare for existential crises such as pandemics, prospects of a nuclear war, as well as climate change.

The human factor

In the end, model-driven approaches must stand up to the test of real-life data. Modelling for policymaking must take into account a number of caveats and limitations. Models are developed to help answer specific questions, and their predictions will depend on the hypotheses and definitions set by the modellers, which are subject to their individual and collective biases and assumptions. 
For instance, the models developed by Imperial College came with the caveated assumption that a policy of social distancing for people over 70 will have a 75 per cent compliance rate. This assumption is based on the modellers' own perceptions of demographics and society, and may not reflect all societal factors that could impact this compliance rate in real life, such as gender, age, ethnicity, genetic diversity, economic stability, as well as access to food, supplies and healthcare. This is why modelling benefits from a cognitively diverse team who bring a wide range of knowledge and understanding to the early creation of a model.

The potential of artificial intelligence

Machine learning, or artificial intelligence (AI), has the potential to advance the capacity and accuracy of modelling techniques by identifying new patterns and interactions, and overcoming some of the limitations resulting from human assumptions and bias. Yet, increasing reliance on these techniques raises the issue of explainability. Policymakers need to be fully aware of and understand the model, assumptions and input data behind any predictions, and must be able to communicate this aspect of modelling in order to uphold democratic accountability and transparency in public decision-making.

In addition, models using machine learning techniques require extensive amounts of data, which must also be of high quality and as free from bias as possible to ensure accuracy and address the issues at stake. Although technology may be used in the process (i.e. automated extraction and processing of information with big data), data is ultimately created, collected, aggregated and analysed by and for human users. Datasets will reflect the individual and collective biases and assumptions of those creating, collecting, processing and analysing this data. 
Algorithmic bias is inevitable, and it is essential that policy- and decision-makers are fully aware of how reliable the systems are, as well as their potential social implications.

The age of distrust

Increasing use of emerging technologies for data- and evidence-based policymaking is taking place, paradoxically, in an era of growing mistrust towards expertise and experts, as infamously surmised by Michael Gove. Policymakers and subject-matter experts have faced increased public scrutiny of their findings and the resultant policies that they have been used to justify.

This distrust and scepticism within public discourse has only been fuelled by an ever-increasing availability of diffuse sources of information, not all of which are verifiable and robust. This has caused tension between experts, policymakers and the public, which has led to conflicts and uncertainty over what data and predictions can be trusted, and to what degree. This dynamic is exacerbated when considering that certain individuals may purposefully misappropriate, or simply misinterpret, data to support their argument or policies. Politicians are presently considered the least trusted professionals by the UK public, highlighting the importance of better and more effective communication between the scientific community, policymakers and the populations affected by policy decisions.

Acknowledging limitations

While measures can and should be built in to improve the transparency and robustness of scientific models in order to counteract these common criticisms, it is important to acknowledge that there are limitations to the steps that can be taken. This is particularly the case when dealing with predictions of future events, which inherently involve degrees of uncertainty that cannot be fully accounted for by human or machine. 
As a result, if not carefully considered and communicated, the increased use of complex modelling in policymaking holds the potential to undermine and obfuscate the policymaking process, contributing to significant mistakes, increased uncertainty, lack of trust in the models and in the political process, and further disaffection of citizens.

The potential contribution of complexity modelling to the work of policymakers is undeniable. However, it is imperative to appreciate the inner workings and limitations of these models, such as the biases that underpin their functioning and the uncertainties that they will not be fully capable of accounting for, in spite of their immense power. They must be tested against the data, again and again, as new information becomes available, or there is a risk of scientific models becoming embroiled in partisan politicization and potentially weaponized for political purposes. It is therefore important not to consider these models as oracles, but instead as one of many contributions to the process of policymaking. Full Article
predict Predicting Rays' Opening Day roster By mlb.mlb.com Published On :: Sun, 10 Feb 2019 13:19:32 EDT Here's an early look at how the Rays' 25-man roster could shape up on Opening Day. Full Article
predict Predictors of Postpartum Diabetes in Women With Gestational Diabetes Mellitus By diabetes.diabetesjournals.org Published On :: 2006-03-01 Kristian LöbnerMar 1, 2006; 55:792-797Pathophysiology Full Article
predict Predicting the Giants' Opening Day roster By mlb.mlb.com Published On :: Mon, 11 Feb 2019 11:08:17 EDT With Spring Training set to kick off Tuesday, it feels like an opportune time to put together a way-too-early look at who might be with the Giants when they begin their regular season against the Padres on March 28. Full Article
predict Predicting the Braves' Opening Day roster By mlb.mlb.com Published On :: Sun, 10 Feb 2019 15:16:51 EDT At some point over the next six weeks, injuries, improvement or regression will inevitably alter the plan nearly every Major League club brings into Spring Training. The Braves must decide exactly how to round out their rotation and bullpen, but for now it looks like they won't have any position-player battles. Here's the first prediction of how Atlanta's Opening Day roster might look. Full Article
predict Predicting O's Opening Day roster By mlb.mlb.com Published On :: Sun, 10 Feb 2019 15:56:25 EDT Here's an early look at how the Orioles' 25-man roster could shape up on Opening Day. Full Article
predict Impaired Metabolic Flexibility to High-Fat Overfeeding Predicts Future Weight Gain in Healthy Adults By diabetes.diabetesjournals.org Published On :: 2020-01-20T12:00:26-08:00 The ability to switch fuels for oxidation in response to changes in macronutrient composition of diet (metabolic flexibility) may be informative of individuals’ susceptibility to weight gain. Seventy-nine healthy, weight-stable participants underwent 24-h assessments of energy expenditure and respiratory quotient (RQ) in a whole-room calorimeter during energy balance (EBL) (50% carbohydrate, 30% fat) and then during 24-h fasting and three 200% overfeeding diets in a crossover design. Metabolic flexibility was defined as the change in 24-h RQ from EBL during fasting and standard overfeeding (STOF) (50% carbohydrate, 30% fat), high-fat overfeeding (HFOF) (60% fat, 20% carbohydrate), and high-carbohydrate overfeeding (HCOF) (75% carbohydrate, 5% fat) diets. Free-living weight change was assessed after 6 and 12 months. Compared with EBL, RQ decreased on average by 9% during fasting and by 4% during HFOF but increased by 4% during STOF and by 8% during HCOF. A smaller decrease in RQ, reflecting a smaller increase in lipid oxidation rate, during HFOF but not during the other diets predicted greater weight gain at both 6 and 12 months. An impaired metabolic flexibility to acute HFOF can identify individuals prone to weight gain, indicating that an individual’s capacity to oxidize dietary fat is a metabolic determinant of weight change. Full Article
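The definition of metabolic flexibility used in the study (the change in 24-h RQ from energy balance under each diet) amounts to a simple relative difference. The RQ values below are illustrative, chosen only to mirror the average shifts reported in the abstract (-9% fasting, -4% HFOF, +4% STOF, +8% HCOF).

```python
# 24-h RQ at energy balance (EBL) and under each test diet; values are
# illustrative, not measured data
rq_ebl = 0.86
rq_diet = {"fasting": 0.78, "HFOF": 0.83, "STOF": 0.89, "HCOF": 0.93}

# Metabolic flexibility: percent change in 24-h RQ from energy balance
flexibility = {d: 100.0 * (rq - rq_ebl) / rq_ebl for d, rq in rq_diet.items()}
print({d: round(v, 1) for d, v in flexibility.items()})
# {'fasting': -9.3, 'HFOF': -3.5, 'STOF': 3.5, 'HCOF': 8.1}
```

A smaller-than-expected drop under HFOF (i.e. a less negative value) is the signature the study links to future weight gain, since it reflects a blunted rise in fat oxidation.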
predict Predicting the Astros' Opening Day roster By mlb.mlb.com Published On :: Mon, 11 Feb 2019 11:14:54 EDT There won't be many roster battles when the Astros open camp later this week, and they won't be decided until the final days of the team's stay in West Palm Beach, Fla. Houston has an open competition for its fifth starter spot heading into Spring Training, and the club also has to sort out a crowded outfield and the final spots in the bullpen. Full Article
predict David Williams - everyday discrimination is an independent predictor of mortality By feeds.bmj.com Published On :: Thu, 13 Feb 2020 12:44:12 +0000 There comes a tipping point in all campaigns when the evidence is overwhelming and the only way to proceed is with action. According to David Williams, it’s time to tackle the disproportionate effects of race on patients in the UK. David Williams, from Harvard University, developed the Everyday Discrimination Scale that, in 1997, launched a new... Full Article
predict C-Reactive Protein Is an Independent Predictor of Risk for the Development of Diabetes in the West of Scotland Coronary Prevention Study By diabetes.diabetesjournals.org Published On :: 2002-05-01 Dilys J. FreemanMay 1, 2002; 51:1596-1600Complications Full Article
predict Elevated Levels of Acute-Phase Proteins and Plasminogen Activator Inhibitor-1 Predict the Development of Type 2 Diabetes: The Insulin Resistance Atherosclerosis Study By diabetes.diabetesjournals.org Published On :: 2002-04-01 Andreas FestaApr 1, 2002; 51:1131-1137Complications Full Article
predict Predictive Modeling of Type 1 Diabetes Stages Using Disparate Data Sources By diabetes.diabetesjournals.org Published On :: 2020-01-20T12:00:26-08:00 This study aims to model genetic, immunologic, metabolomics, and proteomic biomarkers for development of islet autoimmunity (IA) and progression to type 1 diabetes in a prospective high-risk cohort. We studied 67 children: 42 who developed IA (20 of 42 progressed to diabetes) and 25 control subjects matched for sex and age. Biomarkers were assessed at four time points: earliest available sample, just prior to IA, just after IA, and just prior to diabetes onset. Predictors of IA and progression to diabetes were identified across disparate sources using an integrative machine learning algorithm and optimization-based feature selection. Our integrative approach was predictive of IA (area under the receiver operating characteristic curve [AUC] 0.91) and progression to diabetes (AUC 0.92) based on standard cross-validation (CV). Among the strongest predictors of IA were change in serum ascorbate, 3-methyl-oxobutyrate, and the PTPN22 (rs2476601) polymorphism. Serum glucose, ADP fibrinogen, and mannose were among the strongest predictors of progression to diabetes. This proof-of-principle analysis is the first study to integrate large, diverse biomarker data sets into a limited number of features, highlighting differences in pathways leading to IA from those predicting progression to diabetes. Integrated models, if validated in independent populations, could provide novel clues concerning the pathways leading to IA and type 1 diabetes. Full Article
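The integrative workflow above (reduce disparate biomarker features to a panel, then judge it by cross-validated AUC) can be sketched with scikit-learn. The synthetic matrix and plain logistic model below stand in for the study's optimization-based feature selection and its actual data.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the integrated biomarker matrix: 67 children with a
# few informative features (metabolites, SNPs, proteins) among noisy ones
X, y = make_classification(n_samples=67, n_features=20, n_informative=5,
                           random_state=42)

# Cross-validated AUC, the metric the study reports for IA (0.91) and
# progression to diabetes (0.92)
model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```

With only 67 subjects, cross-validation of the entire pipeline (including feature selection inside each fold) is essential; selecting features on the full dataset first would inflate the AUC.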
predict World Bank predicts sharpest decline of remittances to Caribbean By jamaica-gleaner.com Published On :: Thu, 23 Apr 2020 15:26:53 -0500 WASHINGTON, CMC – The World Bank has predicted the sharpest decline of remittances to Latin America and the Caribbean, saying that global remittances as a whole are projected to fall by about 20 percent in 2020 due to the economic crisis... Full Article
predict Rotating night shift work and adherence to unhealthy lifestyle in predicting risk of type 2 diabetes: results from two large US cohorts of female nurses By feeds.bmj.com Published On :: Wednesday, November 21, 2018 - 23:31 Full Article
predict Predicting the Reds' Opening Day roster By mlb.mlb.com Published On :: Mon, 11 Feb 2019 10:57:54 EDT Here's an early look at how the Reds' 25-man roster could shape up on Opening Day. Full Article
predict Predicting the D-backs' Opening Day roster By mlb.mlb.com Published On :: Mon, 11 Feb 2019 10:59:54 EDT Here's an early look at how the D-backs' 25-man roster could shape up on Opening Day. Full Article
predict Predictive Value of 18F-Florbetapir and 18F-FDG PET for Conversion from Mild Cognitive Impairment to Alzheimer Dementia By jnm.snmjournals.org Published On :: 2020-04-01T06:00:28-07:00 The present study examined the predictive values of amyloid PET, 18F-FDG PET, and nonimaging predictors (alone and in combination) for development of Alzheimer dementia (AD) in a large population of patients with mild cognitive impairment (MCI). Methods: The study included 319 patients with MCI from the Alzheimer Disease Neuroimaging Initiative database. In a derivation dataset (n = 159), the following Cox proportional-hazards models were constructed, each adjusted for age and sex: amyloid PET using 18F-florbetapir (pattern expression score of an amyloid-β AD conversion–related pattern, constructed by principal-components analysis); 18F-FDG PET (pattern expression score of a previously defined 18F-FDG–based AD conversion–related pattern, constructed by principal-components analysis); nonimaging (functional activities questionnaire, apolipoprotein E, and Mini-Mental State Examination score); 18F-FDG PET + amyloid PET; amyloid PET + nonimaging; 18F-FDG PET + nonimaging; and amyloid PET + 18F-FDG PET + nonimaging. In a second step, the results of the Cox regressions were applied to a validation dataset (n = 160) to stratify subjects according to the predicted conversion risk. Results: On the basis of the independent validation dataset, the 18F-FDG PET model yielded a significantly higher predictive value than the amyloid PET model. However, both were inferior to the nonimaging model and were significantly improved by the addition of nonimaging variables. The best prediction accuracy was reached by combining 18F-FDG PET, amyloid PET, and nonimaging variables. The combined model yielded 5-y free-of-conversion rates of 100%, 64%, and 24% for the low-, medium-, and high-risk groups, respectively. 
Conclusion: 18F-FDG PET, amyloid PET, and nonimaging variables represent complementary predictors of conversion from MCI to AD. Especially in combination, they enable an accurate stratification of patients according to their conversion risks, which is of great interest for patient care and clinical trials. Full Article
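The stratification step described above (combining modality scores in a Cox-style linear predictor, then splitting the validation set into low-, medium-, and high-risk groups) can be sketched as follows. The scores and weights are invented, not the coefficients fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 160  # size of the study's validation dataset

# Invented standardized predictor scores per patient
fdg = rng.normal(size=n)      # 18F-FDG conversion-pattern expression score
amyloid = rng.normal(size=n)  # amyloid-PET pattern expression score
nonimg = rng.normal(size=n)   # composite of FAQ, ApoE, and MMSE

# Linear predictor of a Cox model; the weights are illustrative only
risk = 0.5 * fdg + 0.3 * amyloid + 0.6 * nonimg

# Tertile split into low / medium / high conversion risk, as used for the
# reported 5-y free-of-conversion rates (100% / 64% / 24%)
cuts = np.quantile(risk, [1 / 3, 2 / 3])
group = np.digitize(risk, cuts)  # 0 = low, 1 = medium, 2 = high
print(np.bincount(group))
```

In the study, each group's conversion-free rate would then be estimated with Kaplan-Meier curves over the follow-up period.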
predict Prediction models for diagnosis and prognosis of covid-19 infection: systematic review and critical appraisal By feeds.bmj.com Published On :: Tuesday, April 7, 2020 - 09:01 Full Article
predict Use of electronic medical records in development and validation of risk prediction models of hospital readmission: systematic review By feeds.bmj.com Published On :: Wednesday, April 8, 2020 - 09:41 Full Article
predict Webinar: The UK's Unpredictable General Election? By feedproxy.google.com Published On :: Wed, 13 Nov 2019 17:00:01 +0000 Members Event Webinar 19 November 2019 - 11:30am to 12:00pm Chatham House | 10 St James's Square | London | SW1Y 4LE Event participants Professor Matthew Goodwin, Visiting Senior Fellow, Europe Programme, Chatham HouseProfessor David Cutts, Associate Fellow, Europe Programme, Chatham House On 12 December 2019, the United Kingdom will go to the polls in a fifth nationwide vote in only four years. This is expected to be one of the most unpredictable general elections in the nation’s post-war history with the anti-Brexit Liberal Democrats and Nigel Farage’s Eurosceptic Brexit Party both presenting a serious challenge to the UK’s established two-party system.This webinar will discuss the UK general election and will unpack some of the reasons behind its supposed unpredictability. To what extent will this be a Brexit election and what are the other issues at the forefront of voters’ minds? And will the outcome of the election give us a clear indication of the UK’s domestic, European and wider international political trajectory?Please note, this event is online only. Members can watch webinars from a computer or another internet-ready device and do not need to come to Chatham House to attend. Full Article
predict Plasma Lipidome and Prediction of Type 2 Diabetes in the Population-Based Malmö Diet and Cancer Cohort By care.diabetesjournals.org Published On :: 2020-01-20T12:00:30-08:00 OBJECTIVE Type 2 diabetes mellitus (T2DM) is associated with dyslipidemia, but the detailed alterations in lipid species preceding the disease are largely unknown. We aimed to identify plasma lipids associated with development of T2DM and investigate their associations with lifestyle. RESEARCH DESIGN AND METHODS At baseline, 178 lipids were measured by mass spectrometry in 3,668 participants without diabetes from the Malmö Diet and Cancer Study. The population was randomly split into discovery (n = 1,868, including 257 incident cases) and replication (n = 1,800, including 249 incident cases) sets. We used orthogonal projections to latent structures discriminant analysis (OPLS-DA), extracted a predictive component for T2DM incidence (lipid-PCDM), and assessed its association with T2DM incidence using Cox regression and lifestyle factors using general linear models. RESULTS A T2DM-predictive lipid-PCDM derived from the discovery set was independently associated with T2DM incidence in the replication set, with hazard ratio (HR) among subjects in the fifth versus first quintile of lipid-PCDM of 3.7 (95% CI 2.2–6.5). In comparison, the HR of T2DM among obese versus normal weight subjects was 1.8 (95% CI 1.2–2.6). Clinical lipids did not improve T2DM risk prediction, but adding the lipid-PCDM to all conventional T2DM risk factors increased the area under the receiver operating characteristics curve by 3%. The lipid-PCDM was also associated with a dietary risk score for T2DM incidence and lower level of physical activity. CONCLUSIONS A lifestyle-related lipidomic profile strongly predicts T2DM development beyond current risk factors. Further studies are warranted to test if lifestyle interventions modifying this lipidomic profile can prevent T2DM. Full Article
predict Respective Contributions of Glycemic Variability and Mean Daily Glucose as Predictors of Hypoglycemia in Type 1 Diabetes: Are They Equivalent? By care.diabetesjournals.org Published On :: 2020-03-20T11:50:34-07:00 OBJECTIVE To evaluate the respective contributions of short-term glycemic variability and mean daily glucose (MDG) concentration to the risk of hypoglycemia in type 1 diabetes. RESEARCH DESIGN AND METHODS People with type 1 diabetes (n = 100) investigated at the University Hospital of Montpellier (France) underwent continuous glucose monitoring (CGM) on two consecutive days, providing a total of 200 24-h glycemic profiles. The following parameters were computed: MDG concentration, within-day glycemic variability (coefficient of variation for glucose [%CV]), and risk of hypoglycemia (presented as the percentage of time spent below three glycemic thresholds: 3.9, 3.45, and 3.0 mmol/L). RESULTS MDG was significantly higher, and %CV significantly lower (both P < 0.001), when comparing the 24-h glycemic profiles according to whether no time or a certain duration of time was spent below the thresholds. Univariate regression analyses showed that MDG and %CV were the two explanatory variables that entered the model with the outcome variable (time spent below the thresholds). The classification and regression tree procedure indicated that the predominant predictor for hypoglycemia was %CV when the threshold was 3.0 mmol/L. In people with mean glucose ≤7.8 mmol/L, the time spent below 3.0 mmol/L was shortest (P < 0.001) when %CV was below 34%. CONCLUSIONS In type 1 diabetes, short-term glycemic variability relative to mean glucose (i.e., %CV) explains more hypoglycemia than does mean glucose alone when the glucose threshold is 3.0 mmol/L. Minimizing the risk of hypoglycemia requires a %CV below 34%. Full Article
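The key metric in the study above, %CV, is simply the standard deviation of CGM glucose divided by its mean, expressed as a percentage. A minimal sketch with a hypothetical 24-h trace:

```python
import numpy as np

def glucose_cv(readings):
    """Within-day glycemic variability: %CV = 100 * SD / mean of CGM glucose."""
    readings = np.asarray(readings, dtype=float)
    return 100.0 * readings.std(ddof=1) / readings.mean()

# Hypothetical 24-h CGM trace (mmol/L), coarsely sampled for brevity
day = [5.2, 7.8, 6.1, 9.4, 4.0, 6.6, 8.3, 5.5]
cv = glucose_cv(day)
below_target = cv < 34.0  # threshold below which time in hypoglycemia was shortest
print(f"%CV = {cv:.1f}, below 34%: {below_target}")
```

Because %CV is normalized by the mean, it captures variability independently of average glycemia, which is why it adds information beyond mean glucose for the 3.0 mmol/L threshold.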
predict Erratum. Predicting 10-Year Risk of End-Organ Complications of Type 2 Diabetes With and Without Metabolic Surgery: A Machine Learning Approach. Diabetes Care 2020;43:852-859 By care.diabetesjournals.org Published On :: 2020-04-15T14:26:52-07:00 Full Article
predict Using the BRAVO Risk Engine to Predict Cardiovascular Outcomes in Clinical Trials With Sodium-Glucose Transporter 2 Inhibitors By care.diabetesjournals.org Published On :: 2020-04-28T12:58:49-07:00 OBJECTIVE This study evaluated the ability of the Building, Relating, Assessing, and Validating Outcomes (BRAVO) risk engine to accurately project cardiovascular outcomes in three major clinical trials of sodium–glucose cotransporter 2 inhibitors (SGLT2is) in patients with type 2 diabetes: the BI 10773 (Empagliflozin) Cardiovascular Outcome Event Trial in Type 2 Diabetes Mellitus Patients (EMPA-REG OUTCOME), the Canagliflozin Cardiovascular Assessment Study (CANVAS), and the Dapagliflozin Effect on Cardiovascular Events–Thrombolysis in Myocardial Infarction (DECLARE-TIMI 58) trial. RESEARCH DESIGN AND METHODS Baseline data from the publications of the three trials were obtained and entered into the BRAVO model to predict cardiovascular outcomes. Projected benefits of reducing risk factors of interest (A1C, systolic blood pressure [SBP], LDL, or BMI) on cardiovascular events were evaluated, and simulated outcomes were compared with those observed in each trial. RESULTS BRAVO achieved the best prediction accuracy when simulating outcomes of the CANVAS and DECLARE-TIMI 58 trials. For the EMPA-REG OUTCOME trial, a mild bias was observed (~20%) in the prediction of mortality and angina. The effect of risk reduction on outcomes in treatment versus placebo groups predicted by the BRAVO model strongly correlated with the observed effect of risk reduction on the trial outcomes as published. 
Finally, the BRAVO engine revealed that most of the clinical benefits associated with SGLT2i treatment are through A1C control, although reductions in SBP and BMI explain a proportion of the observed decline in cardiovascular events. CONCLUSIONS The BRAVO risk engine was effective in predicting the benefits of SGLT2is on cardiovascular health through improvements in commonly measured risk factors, including A1C, SBP, and BMI. Since these benefits are individually small, the use of the complex, dynamic BRAVO model is ideal to explain the cardiovascular outcome trial results. Full Article
predict Predicting the Risk of Inpatient Hypoglycemia With Machine Learning Using Electronic Health Records By care.diabetesjournals.org Published On :: 2020-04-29T13:46:01-07:00 OBJECTIVE We analyzed data from inpatients with diabetes admitted to a large university hospital to predict the risk of hypoglycemia through the use of machine learning algorithms. RESEARCH DESIGN AND METHODS Four years of data were extracted from a hospital electronic health record system. This included laboratory and point-of-care blood glucose (BG) values to identify biochemical and clinically significant hypoglycemic episodes (BG ≤3.9 and ≤2.9 mmol/L, respectively). We used patient demographics, administered medications, vital signs, laboratory results, and procedures performed during the hospital stays to inform the model. Two iterations of the data set included the doses of insulin administered and the past history of inpatient hypoglycemia. Eighteen different prediction models were compared using the area under the receiver operating characteristic curve (AUROC) through 10-fold cross-validation. RESULTS We analyzed data obtained from 17,658 inpatients with diabetes who underwent 32,758 admissions between July 2014 and August 2018. The predictive factors from the logistic regression model included people undergoing procedures, weight, type of diabetes, oxygen saturation level, use of medications (insulin, sulfonylurea, and metformin), and albumin levels. The machine learning model with the best performance was the XGBoost model (AUROC 0.96). This outperformed the logistic regression model, which had an AUROC of 0.75 for the estimation of the risk of clinically significant hypoglycemia. CONCLUSIONS Advanced machine learning models are superior to logistic regression models in predicting the risk of hypoglycemia in inpatients with diabetes. Trials of such models should be conducted in real time to evaluate their utility to reduce inpatient hypoglycemia. Full Article
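AUROC, the comparison metric used here, has a simple rank interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (ties count half). A self-contained sketch with hypothetical scores, unrelated to the study's data:

```python
def auroc(scores, labels):
    """Area under the ROC curve: P(score of a positive > score of a negative),
    counting ties as 1/2. Equivalent to a normalized Mann-Whitney U statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for hypoglycemia (1 = event occurred)
labels  = [1, 0, 1, 0, 0, 1, 0]
model_a = [0.9, 0.2, 0.8, 0.4, 0.1, 0.7, 0.3]  # ranks every positive first -> AUROC 1.0
model_b = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]  # uninformative -> AUROC 0.5
```

Because AUROC depends only on the ranking of scores, it lets models with very different calibration (such as XGBoost and logistic regression) be compared on a common scale.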
predict AccuWeather increases number of hurricanes predicted for 'very active' 2020 Atlantic season By www.upi.com Published On :: Fri, 08 May 2020 20:39:05 -0400 Based on the newest forecasting models, AccuWeather forecasters have extended the upper range of hurricanes predicted for the Atlantic hurricane season. Full Article
predict Predicting 10-Year Risk of End-Organ Complications of Type 2 Diabetes With and Without Metabolic Surgery: A Machine Learning Approach By care.diabetesjournals.org Published On :: 2020-03-20T11:50:34-07:00 OBJECTIVE To construct and internally validate prediction models to estimate the risk of long-term end-organ complications and mortality in patients with type 2 diabetes and obesity that can be used to inform treatment decisions for patients and practitioners who are considering metabolic surgery. RESEARCH DESIGN AND METHODS A total of 2,287 patients with type 2 diabetes who underwent metabolic surgery between 1998 and 2017 in the Cleveland Clinic Health System were propensity-matched 1:5 to 11,435 nonsurgical patients with BMI ≥30 kg/m2 and type 2 diabetes who received usual care with follow-up through December 2018. Multivariable time-to-event regression and random forest machine learning models were built and internally validated using fivefold cross-validation to predict the 10-year risk for four outcomes of interest. The prediction models were programmed to construct user-friendly web-based and smartphone applications of Individualized Diabetes Complications (IDC) Risk Scores for clinical use. RESULTS The prediction tools demonstrated the following discrimination ability based on the area under the receiver operating characteristic curve (1 = perfect discrimination and 0.5 = chance) at 10 years in the surgical and nonsurgical groups, respectively: all-cause mortality (0.79 and 0.81), coronary artery events (0.66 and 0.67), heart failure (0.73 and 0.75), and nephropathy (0.73 and 0.76). When a patient’s data are entered into the IDC application, it estimates the individualized 10-year morbidity and mortality risks with and without undergoing metabolic surgery. 
CONCLUSIONS The IDC Risk Scores can provide personalized evidence-based risk information for patients with type 2 diabetes and obesity about future cardiovascular outcomes and mortality with and without metabolic surgery based on their current status of obesity, diabetes, and related cardiometabolic conditions. Full Article
predict The science of fate : why your future is more predictable than you think / Hannah Critchlow. By www.catalog.slsa.sa.gov.au Published On :: Neurosciences. Full Article
predict [Accounts of medical and magical character, fortune tellings and predictions] By search.wellcomelibrary.org Published On :: 19th century. Full Article
predict What Predicts Early College Success for Indiana Students? By feedproxy.google.com Published On :: Mon, 02 Jul 2018 00:00:00 +0000 Research from REL Midwest examines the student characteristics associated with early college success in Indiana, with a focus on financial aid. Full Article Indiana
predict Gaussian field on the symmetric group: Prediction and learning By projecteuclid.org Published On :: Tue, 05 May 2020 22:00 EDT François Bachoc, Baptiste Broto, Fabrice Gamboa, Jean-Michel Loubes. Source: Electronic Journal of Statistics, Volume 14, Number 1, 503--546. Abstract: In the framework of the supervised learning of a real function defined on an abstract space $\mathcal{X}$, Gaussian processes are widely used. The Euclidean case for $\mathcal{X}$ is well known and has been widely studied. In this paper, we explore the less classical case where $\mathcal{X}$ is the noncommutative finite group of permutations (namely the so-called symmetric group $S_{N}$). We provide an application to Gaussian-process-based optimization of Latin Hypercube Designs. We also extend our results to the case of partial rankings. Full Article
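For readers unfamiliar with kernels on $S_{N}$: one standard positive-definite choice is the Mallows (Kendall) kernel $k(\sigma,\pi)=\exp(-\lambda\,d_{K}(\sigma,\pi))$, where $d_{K}$ counts discordant pairs. The sketch below illustrates that construction; it is a plausible building block for such Gaussian processes, not necessarily the covariance the paper uses:

```python
import math
from itertools import combinations

def kendall_distance(sigma, pi):
    """Number of pairwise discordances between two permutations of {0..N-1}."""
    n = len(sigma)
    pos_s = {v: i for i, v in enumerate(sigma)}  # position of each item under sigma
    pos_p = {v: i for i, v in enumerate(pi)}     # position of each item under pi
    return sum(
        (pos_s[a] < pos_s[b]) != (pos_p[a] < pos_p[b])
        for a, b in combinations(range(n), 2)
    )

def mallows_kernel(sigma, pi, lam=0.5):
    """Mallows (Kendall) kernel: exp(-lambda * Kendall distance)."""
    return math.exp(-lam * kendall_distance(sigma, pi))

identity = (0, 1, 2, 3)
swapped  = (1, 0, 2, 3)  # one adjacent transposition -> distance 1
```

The kernel is maximal (equal to 1) for identical rankings and decays as the rankings disagree on more pairs, which is exactly the similarity notion a Gaussian process on permutations needs.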
predict Assessing prediction error at interpolation and extrapolation points By projecteuclid.org Published On :: Mon, 27 Apr 2020 22:02 EDT Assaf Rabinowicz, Saharon Rosset. Source: Electronic Journal of Statistics, Volume 14, Number 1, 272--301. Abstract: Common model selection criteria, such as $AIC$ and its variants, are based on in-sample prediction error estimators. However, in many applications involving prediction at interpolation and extrapolation points, in-sample error does not represent the relevant prediction error. In this paper new prediction error estimators, $tAI$ and $Loss(w_{t})$, are introduced. These estimators generalize previous error estimators but are also applicable for assessing prediction error in cases involving interpolation and extrapolation. Based on these prediction error estimators, two model selection criteria in the same spirit as $AIC$ and Mallows’ $C_{p}$ are suggested. The advantages of our suggested methods are demonstrated in a simulation and a real-data analysis of studies involving interpolation and extrapolation in linear mixed model and Gaussian process regression. Full Article
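For contrast with the proposed estimators, the classical in-sample criterion the abstract starts from is $AIC = 2k - 2\ln\hat{L}$. A toy Gaussian illustration; the residuals and parameter counts are invented:

```python
import math

def gaussian_aic(residuals, n_params):
    """AIC = 2k - 2 ln L-hat for a Gaussian model with the MLE variance
    sigma^2 = RSS/n plugged in; the variance counts as one extra parameter."""
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    log_lik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)
    return 2 * (n_params + 1) - 2 * log_lik

# Hypothetical residuals from two fitted mean models
resid_simple  = [1.2, -0.8, 0.9, -1.1, 1.0, -0.9]   # 1 mean parameter
resid_complex = [1.1, -0.7, 0.8, -1.0, 0.9, -0.8]   # 3 mean parameters
best = min([(gaussian_aic(resid_simple, 1), "simple"),
            (gaussian_aic(resid_complex, 3), "complex")])
```

Here the complex model fits only slightly better, so the penalty term favors the simple model; the paper's point is that this in-sample trade-off can be misleading when predictions are needed at interpolation or extrapolation points.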
predict On the predictive potential of kernel principal components By projecteuclid.org Published On :: Wed, 15 Apr 2020 04:02 EDT Ben Jones, Andreas Artemiou, Bing Li. Source: Electronic Journal of Statistics, Volume 14, Number 1, 1--23. Abstract: We give a probabilistic analysis of a phenomenon in statistics which, until recently, has not received a convincing explanation. This phenomenon is that the leading principal components tend to possess more predictive power for a response variable than lower-ranking ones despite the procedure being unsupervised. Our result, in its most general form, shows that the phenomenon goes far beyond the context of linear regression and classical principal components — if an arbitrary distribution for the predictor $X$ and an arbitrary conditional distribution for $Y\vert X$ are chosen, then any measurable function $g(Y)$, subject to a mild condition, tends to be more correlated with the higher-ranking kernel principal components than with the lower-ranking ones. The “arbitrariness” is formulated in terms of unitary invariance, and the tendency is explicitly quantified by exploring how unitary invariance relates to the Cauchy distribution. The most general results, for technical reasons, are shown for the case where the kernel space is finite dimensional. The occurrence of this tendency in real-world databases is also investigated to show that our results are consistent with observation. Full Article
predict Sparsely observed functional time series: estimation and prediction By projecteuclid.org Published On :: Thu, 27 Feb 2020 22:04 EST Tomáš Rubín, Victor M. Panaretos. Source: Electronic Journal of Statistics, Volume 14, Number 1, 1137--1210.Abstract: Functional time series analysis, whether based on time or frequency domain methodology, has traditionally been carried out under the assumption of complete observation of the constituent series of curves, assumed stationary. Nevertheless, as is often the case with independent functional data, it may well happen that the data available to the analyst are not the actual sequence of curves, but relatively few and noisy measurements per curve, potentially at different locations in each curve’s domain. Under this sparse sampling regime, neither the established estimators of the time series’ dynamics nor their corresponding theoretical analysis will apply. The subject of this paper is to tackle the problem of estimating the dynamics and of recovering the latent process of smooth curves in the sparse regime. Assuming smoothness of the latent curves, we construct a consistent nonparametric estimator of the series’ spectral density operator and use it to develop a frequency-domain recovery approach, that predicts the latent curve at a given time by borrowing strength from the (estimated) dynamic correlations in the series across time. This new methodology is seen to comprehensively outperform a naive recovery approach that would ignore temporal dependence and use only methodology employed in the i.i.d. setting and hinging on the lag zero covariance. Further to predicting the latent curves from their noisy point samples, the method fills in gaps in the sequence (curves nowhere sampled), denoises the data, and serves as a basis for forecasting. Means of providing corresponding confidence bands are also investigated. 
A simulation study interestingly suggests that sparse observation for a longer time period may provide better performance than dense observation for a shorter period, in the presence of smoothness. The methodology is further illustrated by application to an environmental data set on fair-weather atmospheric electricity, which naturally leads to a sparse functional time series. Full Article
predict Scalar-on-function regression for predicting distal outcomes from intensively gathered longitudinal data: Interpretability for applied scientists By projecteuclid.org Published On :: Tue, 05 Nov 2019 22:03 EST John J. Dziak, Donna L. Coffman, Matthew Reimherr, Justin Petrovich, Runze Li, Saul Shiffman, Mariya P. Shiyko. Source: Statistics Surveys, Volume 13, 150--180. Abstract: Researchers are sometimes interested in predicting a distal or external outcome (such as smoking cessation at follow-up) from the trajectory of an intensively recorded longitudinal variable (such as urge to smoke). This can be done in a semiparametric way via scalar-on-function regression. However, the resulting fitted coefficient regression function requires special care for correct interpretation, as it represents the joint relationship of time points to the outcome, rather than a marginal or cross-sectional relationship. We provide practical guidelines, based on experience with scientific applications, for helping practitioners interpret their results and illustrate these ideas using data from a smoking cessation study. Full Article
predict A comparison of spatial predictors when datasets could be very large By projecteuclid.org Published On :: Tue, 19 Jul 2016 14:13 EDT Jonathan R. Bradley, Noel Cressie, Tao Shi. Source: Statistics Surveys, Volume 10, 100--131. Abstract: In this article, we review and compare a number of methods of spatial prediction, where each method is viewed as an algorithm that processes spatial data. To demonstrate the breadth of available choices, we consider both traditional and more-recently-introduced spatial predictors. Specifically, in our exposition we review: traditional stationary kriging, smoothing splines, negative-exponential distance-weighting, fixed rank kriging, modified predictive processes, a stochastic partial differential equation approach, and lattice kriging. This comparison is meant to provide a service to practitioners wishing to decide between spatial predictors. Hence, we provide technical material for the unfamiliar, which includes the definition and motivation for each (deterministic and stochastic) spatial predictor. We use a benchmark dataset of $\mathrm{CO}_{2}$ data from NASA’s AIRS instrument to address computational efficiencies that include CPU time and memory usage. Furthermore, the predictive performance of each spatial predictor is assessed empirically using a hold-out subset of the AIRS data. Full Article
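Of the predictors surveyed, negative-exponential distance-weighting is the simplest to state: the prediction at a new site is a weighted average of the observations, with weight $\exp(-\theta d_{i})$ on the observation at distance $d_{i}$. A toy 2-D sketch in which the coordinates, values, and $\theta$ are all made up:

```python
import math

def ned_weighting(sites, values, s0, theta=1.0):
    """Negative-exponential distance-weighted prediction at location s0:
    a convex combination of observed values with weights exp(-theta * d_i)."""
    dists = [math.dist(s, s0) for s in sites]
    weights = [math.exp(-theta * d) for d in dists]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Hypothetical CO2 readings (ppm) at four monitoring sites
sites  = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
values = [400.0, 402.0, 401.0, 405.0]
pred = ned_weighting(sites, values, s0=(0.5, 0.5))  # equidistant -> plain average
```

Larger $\theta$ localizes the predictor: as $\theta$ grows, the prediction near an observed site converges to that site's value, which is why this method serves as a cheap deterministic baseline against the kriging variants.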
predict Errata: A survey of Bayesian predictive methods for model assessment, selection and comparison By projecteuclid.org Published On :: Wed, 26 Feb 2014 09:10 EST Aki Vehtari, Janne Ojanen. Source: Statistics Surveys, Volume 8, 1--1. Abstract: Errata for “A survey of Bayesian predictive methods for model assessment, selection and comparison” by A. Vehtari and J. Ojanen, Statistics Surveys, 6 (2012), 142–228. doi:10.1214/12-SS102. Full Article
predict A survey of Bayesian predictive methods for model assessment, selection and comparison By projecteuclid.org Published On :: Thu, 27 Dec 2012 12:22 EST Aki Vehtari, Janne Ojanen. Source: Statist. Surv., Volume 6, 142--228. Abstract: To date, several methods exist in the statistical literature for model assessment, which purport themselves specifically as Bayesian predictive methods. The decision theoretic assumptions on which these methods are based are not always clearly stated in the original articles, however. The aim of this survey is to provide a unified review of Bayesian predictive model assessment and selection methods, and of methods closely related to them. We review the various assumptions that are made in this context and discuss the connections between different approaches, with an emphasis on how each method approximates the expected utility of using a Bayesian model for the purpose of predicting future data. Full Article
predict Prediction in several conventional contexts By projecteuclid.org Published On :: Tue, 08 May 2012 08:50 EDT Bertrand Clarke, Jennifer Clarke. Source: Statist. Surv., Volume 6, 1--73. Abstract: We review predictive techniques from several traditional branches of statistics. Starting with prediction based on the normal model and on the empirical distribution function, we proceed to techniques for various forms of regression and classification. Then, we turn to time series, longitudinal data, and survival analysis. Our focus throughout is on the mechanics of prediction more than on the properties of predictors. Full Article
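The survey's two starting points are easy to make concrete: under a plug-in normal model the point prediction for a future draw is the sample mean, while the empirical distribution function gives distribution-free quantile predictions. A small sketch with invented data:

```python
import statistics

def normal_prediction(sample):
    """Plug-in normal model: predict a future observation by the sample mean,
    with the sample standard deviation as the predictive spread."""
    return statistics.mean(sample), statistics.stdev(sample)

def ecdf_quantile(sample, q):
    """Empirical-distribution-function prediction: the q-th sample quantile."""
    xs = sorted(sample)
    idx = min(int(q * len(xs)), len(xs) - 1)
    return xs[idx]

data = [2.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mu, sd = normal_prediction(data)        # model-based point prediction
median_pred = ecdf_quantile(data, 0.5)  # distribution-free alternative
```

The contrast is the mechanical one the survey emphasizes: the normal model borrows strength from a parametric form, while the ECDF predicts using the observed sample alone.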
predict Predictive Modeling of ICU Healthcare-Associated Infections from Imbalanced Data. Using Ensembles and a Clustering-Based Undersampling Approach. (arXiv:2005.03582v1 [cs.LG]) By arxiv.org Published On :: Early detection of patients vulnerable to infections acquired in the hospital environment is a challenge in current health systems given the impact that such infections have on patient mortality and healthcare costs. This work is focused on both the identification of risk factors and the prediction of healthcare-associated infections in intensive-care units by means of machine-learning methods. The aim is to support decision making aimed at reducing the incidence rate of infections. In this field, it is necessary to deal with the problem of building reliable classifiers from imbalanced datasets. We propose a clustering-based undersampling strategy to be used in combination with ensemble classifiers. A comparative study with data from 4616 patients was conducted in order to validate our proposal. We applied several single and ensemble classifiers both to the original dataset and to data preprocessed by means of different resampling methods. The results were analyzed by means of classic and recent metrics specifically designed for imbalanced data classification. They revealed that the proposal is more efficient in comparison with other approaches. Full Article
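The resampling idea can be sketched without any ML library: cluster the majority class, then draw evenly across clusters until the classes balance, so that no region of the majority class is discarded wholesale. This is a simplified stand-in for the paper's strategy; the tiny k-means, the data, and the choice of k are all invented:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny Euclidean k-means -- a stand-in for any clustering routine."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[j].append(p)
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

def cluster_undersample(majority, minority, k=3, seed=0):
    """Shrink the majority class to the minority size, drawing round-robin
    from k clusters so every region of the majority class stays represented."""
    clusters = [cl for cl in kmeans(majority, k, seed=seed) if cl]
    rng = random.Random(seed)
    kept, i = [], 0
    while len(kept) < len(minority):
        cl = clusters[i % len(clusters)]
        if cl:
            kept.append(cl.pop(rng.randrange(len(cl))))
        i += 1
    return kept

# Hypothetical imbalanced data: 12 non-infected vs 4 infected patients
majority = [(x, y) for x in (0, 1, 5, 6) for y in (0.0, 0.5, 1.0)]
minority = [(3.0, 3.0), (3.1, 2.9), (2.9, 3.1), (3.0, 2.8)]
balanced = cluster_undersample(majority, minority, k=3)
```

Compared with purely random undersampling, sampling within clusters keeps the retained majority examples spread over the feature space, which is the motivation the abstract gives for the approach.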
predict Relevance Vector Machine with Weakly Informative Hyperprior and Extended Predictive Information Criterion. (arXiv:2005.03419v1 [stat.ML]) By arxiv.org Published On :: In the variational relevance vector machine, the gamma distribution is representative as a hyperprior over the noise precision of automatic relevance determination prior. Instead of the gamma hyperprior, we propose to use the inverse gamma hyperprior with a shape parameter close to zero and a scale parameter not necessary close to zero. This hyperprior is associated with the concept of a weakly informative prior. The effect of this hyperprior is investigated through regression to non-homogeneous data. Because it is difficult to capture the structure of such data with a single kernel function, we apply the multiple kernel method, in which multiple kernel functions with different widths are arranged for input data. We confirm that the degrees of freedom in a model is controlled by adjusting the scale parameter and keeping the shape parameter close to zero. A candidate for selecting the scale parameter is the predictive information criterion. However the estimated model using this criterion seems to cause over-fitting. This is because the multiple kernel method makes the model a situation where the dimension of the model is larger than the data size. To select an appropriate scale parameter even in such a situation, we also propose an extended prediction information criterion. It is confirmed that a multiple kernel relevance vector regression model with good predictive accuracy can be obtained by selecting the scale parameter minimizing extended prediction information criterion. Full Article
predict Adaptive Invariance for Molecule Property Prediction. (arXiv:2005.03004v1 [q-bio.QM]) By arxiv.org Published On :: Effective property prediction methods can help accelerate the search for COVID-19 antivirals either through accurate in-silico screens or by effectively guiding on-going at-scale experimental efforts. However, existing prediction tools have limited ability to accommodate scarce or fragmented training data currently available. In this paper, we introduce a novel approach to learn predictors that can generalize or extrapolate beyond the heterogeneous data. Our method builds on and extends recently proposed invariant risk minimization, adaptively forcing the predictor to avoid nuisance variation. We achieve this by continually exercising and manipulating latent representations of molecules to highlight undesirable variation to the predictor. To test the method we use a combination of three data sources: SARS-CoV-2 antiviral screening data, molecular fragments that bind to SARS-CoV-2 main protease and large screening data for SARS-CoV-1. Our predictor outperforms state-of-the-art transfer learning methods by significant margin. We also report the top 20 predictions of our model on Broad drug repurposing hub. Full Article
predict Optimal prediction in the linearly transformed spiked model By projecteuclid.org Published On :: Mon, 17 Feb 2020 04:02 EST Edgar Dobriban, William Leeb, Amit Singer. Source: The Annals of Statistics, Volume 48, Number 1, 491--513. Abstract: We consider the linearly transformed spiked model, where the observations $Y_{i}$ are noisy linear transforms of unobserved signals of interest $X_{i}$: \begin{equation*}Y_{i}=A_{i}X_{i}+\varepsilon_{i},\end{equation*} for $i=1,\ldots,n$. The transform matrices $A_{i}$ are also observed. We model the unobserved signals (or regression coefficients) $X_{i}$ as vectors lying on an unknown low-dimensional space. Given only $Y_{i}$ and $A_{i}$, how should we predict or recover their values? The naive approach of performing regression for each observation separately is inaccurate due to the large noise level. Instead, we develop optimal methods for predicting $X_{i}$ by “borrowing strength” across the different samples. Our linear empirical Bayes methods scale to large datasets and rely on weak moment assumptions. We show that this model has wide-ranging applications in signal processing, deconvolution, cryo-electron microscopy, and missing data with noise. For missing data, we show in simulations that our methods are more robust to noise and to unequal sampling than well-known matrix completion methods. Full Article
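In one dimension the "borrowing strength" idea reduces to a familiar Gaussian posterior-mean (shrinkage) formula: with $X\sim N(\mu,\tau^{2})$ estimated from all samples and $Y=aX+\varepsilon$, the predictor blends the naive per-observation estimate $y/a$ with the prior mean, weighted by their precisions. A scalar illustration only, not the paper's matrix-valued estimator; the numbers are invented:

```python
def posterior_mean(y, a, sigma2, mu, tau2):
    """E[X | Y=y] when Y = a*X + eps, eps ~ N(0, sigma2), X ~ N(mu, tau2):
    a precision-weighted average of the naive estimate y/a and the prior mean."""
    precision_data = a * a / sigma2
    precision_prior = 1.0 / tau2
    return (precision_data * (y / a) + precision_prior * mu) / (precision_data + precision_prior)

# Hypothetical: a halved, noisy signal; prior (learned across samples) centered at 10
x_hat = posterior_mean(y=4.0, a=0.5, sigma2=1.0, mu=10.0, tau2=4.0)
```

The naive estimate here is 8.0; the posterior mean pulls it toward the shared prior mean of 10.0, and the pull weakens as the prior variance grows, which is the scalar shadow of the empirical Bayes shrinkage the paper develops.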
predict A hierarchical Bayesian model for predicting ecological interactions using scaled evolutionary relationships By projecteuclid.org Published On :: Wed, 15 Apr 2020 22:05 EDT Mohamad Elmasri, Maxwell J. Farrell, T. Jonathan Davies, David A. Stephens. Source: The Annals of Applied Statistics, Volume 14, Number 1, 221--240. Abstract: Identifying undocumented or potential future interactions among species is a challenge facing modern ecologists. Recent link prediction methods rely on trait data; however, large species interaction databases are typically sparse and covariates are limited to only a fraction of species. On the other hand, evolutionary relationships, encoded as phylogenetic trees, can act as proxies for underlying traits and historical patterns of parasite sharing among hosts. We show that, using a network-based conditional model, phylogenetic information provides strong predictive power in a recently published global database of host-parasite interactions. By scaling the phylogeny using an evolutionary model, our method allows for biological interpretation often missing from latent variable models. To further improve on the phylogeny-only model, we combine a hierarchical Bayesian latent score framework for bipartite graphs that accounts for the number of interactions per species with host dependence informed by phylogeny. Combining the two information sources yields significant improvement in predictive accuracy over each of the submodels alone. As many interaction networks are constructed from presence-only data, we extend the model by integrating a correction mechanism for missing interactions, which proves valuable in reducing uncertainty in unobserved interactions. Full Article
predict Hierarchical infinite factor models for improving the prediction of surgical complications for geriatric patients By projecteuclid.org Published On :: Wed, 27 Nov 2019 22:01 EST Elizabeth Lorenzi, Ricardo Henao, Katherine Heller. Source: The Annals of Applied Statistics, Volume 13, Number 4, 2637--2661.Abstract: Nearly a third of all surgeries performed in the United States occur for patients over the age of 65; these older adults experience a higher rate of postoperative morbidity and mortality. To improve the care for these patients, we aim to identify and characterize high risk geriatric patients to send to a specialized perioperative clinic while leveraging the overall surgical population to improve learning. To this end, we develop a hierarchical infinite latent factor model (HIFM) to appropriately account for the covariance structure across subpopulations in data. We propose a novel Hierarchical Dirichlet Process shrinkage prior on the loadings matrix that flexibly captures the underlying structure of our data while sharing information across subpopulations to improve inference and prediction. The stick-breaking construction of the prior assumes an infinite number of factors and allows for each subpopulation to utilize different subsets of the factor space and select the number of factors needed to best explain the variation. We develop the model into a latent factor regression method that excels at prediction and inference of regression coefficients. Simulations validate this strong performance compared to baseline methods. We apply this work to the problem of predicting surgical complications using electronic health record data for geriatric patients and all surgical patients at Duke University Health System (DUHS). 
The motivating application demonstrates the improved predictive performance when using HIFM in both area under the ROC curve and area under the PR curve while providing interpretable coefficients that may lead to actionable interventions. Full Article
predict On Bayesian new edge prediction and anomaly detection in computer networks By projecteuclid.org Published On :: Wed, 27 Nov 2019 22:01 EST Silvia Metelli, Nicholas Heard. Source: The Annals of Applied Statistics, Volume 13, Number 4, 2586--2610. Abstract: Monitoring computer network traffic for anomalous behaviour presents an important security challenge. Arrivals of new edges in a network graph represent connections between a client and server pair not previously observed, and in rare cases these might suggest the presence of intruders or malicious implants. We propose a Bayesian model and anomaly detection method for simultaneously characterising existing network structure and modelling likely new edge formation. The method is demonstrated on real computer network authentication data and successfully identifies some machines which are known to be compromised. Full Article