predict

Scientists may now be able to predict forest die-off up to 19 months in advance

Even forests that look green from space can show symptoms of impending decline.




predict

To predict the next infectious disease outbreak, ask a computer

Mathematical modeling and AI can pick out patterns preceding epidemics that human brains can’t readily discern.




predict

Clippers newcomer Reggie Jackson predicted he'd one day play alongside Paul George

Veteran guard Reggie Jackson always wanted to play alongside Paul George. He gets his chance now with a Clippers team aiming for an NBA title.




predict

Op-Ed: Predictions about where the coronavirus pandemic is going vary widely. Can models be trusted?

A model predicts COVID-19 deaths in the U.S. will drop to zero by June. Another suggests that, without a vaccine, the coronavirus will be with us for years.




predict

Elliott: NHL season 'pause' because of the coronavirus has an unpredictable aura

The NHL hopes to complete its season after suspending play because of the coronavirus, but playing into the middle of the summer creates complications.




predict

When can we travel again? Experts share their predictions

As the U.S. faces its most trying days of the coronavirus pandemic, industry leaders imagine the future of travel.




predict

2020 Jets schedule: game-by-game predictions

With Tom Brady out of the picture, how will the Jets fare this season?





predict

'1917' dominates our 2020 Oscar predictions, but 'Parasite' could surprise

Predicting the four acting races for the 2020 Oscars is easy this year, but there's still drama in the best picture race and others.




predict

Coronavirus: Heathrow suffers worst month since 1980s and predicts 90% traffic fall in April

Boss of Britain's busiest air hub calls for 'a common international standard for healthcare screening in airports'




predict

Coronavirus: Air travel industry predicted to lose £250bn this year

'There is a large amount of pent-up demand, should health and travel restrictions allow it to return to the market,' said IATA's chief economist, Brian Pearce




predict

Rangers team news: Predicted 4-3-3 line-up vs Kilmarnock - Jack decision, Barisic doubt



Rangers take on Kilmarnock in the Scottish Premiership on Wednesday night, with Steven Gerrard looking to keep within touching distance of rivals Celtic. Express Sport delivers you all of the team news for the Gers, as well as the predicted line up for the clash, as Ryan Jack returns from injury.




predict

Leeds team news: Predicted 4-1-4-1 line up vs Huddersfield - Bamford decision for Bielsa



Leeds face Huddersfield in the Championship on Saturday, as Marcelo Bielsa aims to continue the Whites' superb form at Elland Road. We bring you the latest team news and predicted line up, as the manager makes a key decision on whether to drop striker Patrick Bamford, while Kiko Casilla remains banned.




predict

Nostradamus 2020: Three predictions that came true - is coronavirus the fourth?



NOSTRADAMUS is hailed as the world's greatest prophetic mind, with three predictions people think he got right. Did he predict the coronavirus outbreak?




predict

Rangers team news: Predicted 4-3-3 line-up to face Hamilton - Katic to return, two changes



Rangers take on Hamilton as boss Steven Gerrard will be looking to keep the pressure on Celtic in the Scottish Premiership.




predict

Lewis Hamilton's Mercedes bosses tipped to sell team as Eddie Jordan predicts future of F1



Lewis Hamilton has enjoyed six years of success with Mercedes but Eddie Jordan thinks it's time for them to sell their F1 team.




predict

Leeds United team news: Predicted 4-1-4-1 line up vs Hull City - Casilla ban decision



Leeds United take on Hull City on Saturday and Whites boss Marcelo Bielsa faces an important decision after Kiko Casilla was issued an eight-match ban.




predict

Celtic team news: Predicted line-up vs Hearts - Neil Lennon to make Ryan Christie decision



Celtic host Hearts as top takes on bottom in the Scottish Premiership, with Neil Lennon hoping to maintain his seven-point lead over title rivals Rangers. Express Sport is on hand to provide all of the Hoops' team news, as well as the predicted line-up, as two stars return from injury.




predict

Man City team news: Predicted 4-3-3 line up vs Sheffield Wed - Aguero knock, Bravo choice



Manchester City face Sheffield Wednesday in the FA Cup fifth round at Hillsborough on Wednesday night, and Pep Guardiola has a number of injury concerns for his starting XI. This is the Citizens' predicted line up, as Sergio Aguero could feature, while Claudio Bravo starts in goal.




predict

Masters leaderboard predicted: Would Tiger Woods have defended his title this weekend?



The Masters was scheduled to take place this weekend until it was called off due to the coronavirus outbreak - but would Tiger Woods have defended his title?




predict

Rangers boss Steven Gerrard makes Ryan Kent prediction after impressive return from injury



Steven Gerrard believes Ryan Kent will get even better for Rangers as he’s not even fully fit yet.




predict

Unemployment fears mount in UK holiday hotspots with mass job cuts predicted



BRITAIN'S summer holiday destinations will face some of the biggest economic hits of the coronavirus pandemic with fears of massive job losses in coastal communities, a study has claimed.




predict

Leeds team news: Predicted line-up vs Middlesbrough - Phillips and Casilla injury decision



Leeds face Middlesbrough in the Championship on Wednesday evening, with Marcelo Bielsa hoping to continue the Whites' good form after back-to-back wins. Express Sport brings you the latest team news and injury updates, with concerns over the inclusion of Kalvin Phillips and Kiko Casilla in the manager's starting XI.




predict

SPFL table PREDICTED: Super computer predicts who will win title - Celtic or Rangers?



THE SPFL title race is hotting up and a super computer has given us an idea of how the season may pan out.




predict

Horoscope: Astrology for this week - what do horoscopes for your star sign predict?



HOROSCOPE: Horoscopes for this week for all 12 star signs have been shared by Russell Grant. What does the astrology forecast for your sign predict? Check the reading here.




predict

Leeds SECOND in Championship and promoted as final table predicted amid coronavirus crisis



Leeds United's promotion hopes have been thrown into chaos after the coronavirus caused the EFL to suspend all leagues, but the final Championship table has been predicted by a supercomputer. Will Marcelo Bielsa's side finally return to the Premier League, and who will be relegated?




predict

Coronavirus: Sewage study could predict second Covid-19 peak

Scientists are tracing infections by analysing sewage samples from water treatment works.




predict

30 Big Tech Predictions for 2020

Digital transformation has just begun.

Not a single industry is safe from the unstoppable wave of digitization that is sweeping through finance, retail, healthcare, and more.

In 2020, we expect to see even more transformative developments that will change our businesses, careers, and lives.

To help you stay ahead of the curve, Business Insider Intelligence has put together a list of 30 Big Tech Predictions for 2020 across Banking, Connectivity & Tech, Digital Media, Payments & Commerce, Fintech, and Digital Health.

This exclusive report can be yours for FREE today.





predict

Predictions Review: Trump, Zuck Crush My Optimism In 2019

This past year, I predicted the fall of both Zuck and Trump, not to mention the triumph of cannabis and rational markets. But in 2019, the sociopaths won – bigly. Damn, was I wrong. One year ago this week, I sat down to write my annual list of ten or so predictions for the coming …




predict

Predictions 2020: Facebook Caves, Google Zags, Netflix Sells Out, and Data Policy Gets Sexy

A new year brings another run at my annual predictions: For 17 years now, I’ve taken a few hours to imagine what might happen over the course of the coming twelve months. And my goodness did I swing for the fences last year — and I pretty much whiffed. Batting .300 is great in the majors, but it …




predict

Rush Limbaugh Predicts Joe Biden Won’t Be The Dem Nominee: “Something’s Gonna Happen”

This article was first published on 100PercentFedUp.com.

Is Joe Biden going to become the Democrat nominee and run against Trump in the fall?





predict

The Future of NATO: A Strong Alliance in an Unpredictable World

Members Event

19 June 2014 - 11:00am to 12:00pm

Chatham House, London

Event participants

Anders Fogh Rasmussen, Secretary-General, North Atlantic Treaty Organization (NATO)
Chair: Robin Niblett, Director, Chatham House 

In September, the UK will host a summit on the future of NATO. The Wales Summit will chart the course of the alliance as it deals with the long-term implications of Russia’s policy towards Ukraine and prepares to complete its longest combat mission in Afghanistan. The secretary-general will outline the decisions that need to be taken to ensure that the alliance remains fit to face the future. He will set out NATO’s readiness action plan, address the debate on declining defence budgets, and explain how NATO intends to turn a new page in Afghanistan. 





predict

GPS 2.0, a Tool to Predict Kinase-specific Phosphorylation Sites in Hierarchy

Yu Xue
Sep 1, 2008; 7:1598-1608
Research




predict

Why Britain’s 2019 Election Is Its Most Unpredictable in Recent History

7 November 2019

Professor Matthew Goodwin

Visiting Senior Fellow, Europe Programme
Leadership concerns and a collapse of traditional party loyalties make the December vote uncommonly volatile.

On 12 December, Britain will hold the most consequential election in its postwar history. The outcome of the election will influence not only the fate of Brexit but also the likelihood of a second referendum on EU membership, a second independence referendum in Scotland, the most economically radical Labour Party for a generation, Britain’s foreign and security policy and, ultimately, its position in the wider international order.

If you look only at the latest polls, then the outcome looks fairly certain. Ever since a majority of MPs voted to hold the election, the incumbent Conservative Party has averaged 38%, the opposition Labour Party 27%, the Liberal Democrats 16%, Brexit Party 10%, Greens 4% and Scottish National Party 3%. Prime Minister Boris Johnson and his party continue to average an 11-point lead which, if this holds until the election, would likely deliver a comfortable majority.

Johnson can also point to other favourable metrics. When voters are asked who would make the ‘best prime minister’, a clear plurality (43%) say Johnson while only a small minority (20%) choose the Labour Party leader, Jeremy Corbyn. Polls also suggest that, on the whole, Johnson is more trusted by voters than Corbyn to deal with Brexit, the economy and crime, while Jeremy Corbyn only tends to enjoy leads on health. All of this lends credence to the claim that Britain could be set for a Conservative majority and, in turn, the passing of a withdrawal agreement bill in early 2020.

But these polls also hide a lot of other shifts that are taking place and which, combined, make the 2019 general election unpredictable. One concerns leadership. While Boris Johnson enjoys stronger leadership ratings than Jeremy Corbyn, it should be remembered that what unites Britain’s current generation of party leaders is that they are all unpopular. Data compiled by Ipsos-MORI reveals that while Johnson has the lowest ratings of any new prime minister, Labour’s Jeremy Corbyn has the lowest ratings of any opposition leader since records began.

Another deeper shift is fragmentation. One irony of Britain’s Brexit moment is that ever since the country voted to leave the European Union its politics have looked more ‘European’. Over the past year, one of the world’s most stable two-party systems has imploded into a four-party race, with the anti-Brexit Liberal Democrats and Nigel Farage’s strongly Eurosceptic Brexit Party both presenting a serious challenge to the two mainstream parties.

In the latest polls, for example, Labour and the Conservatives are attracting only 61 per cent of the overall vote, well down on the 80 per cent they polled in 2017. Labour is weakened by the fact that it is only currently attracting 53 per cent of people who voted Labour at the last election, in 2017. A large number of these 2017 Labour voters, nearly one in four, have left for the Liberal Democrats, who are promising to revoke Article 50 and ‘cancel Brexit’. This divide in the Remain vote will produce unpredictable outcomes at the constituency level.

At the other end of the spectrum, the Conservatives are grappling with a similar but less severe threat. Nigel Farage and the Brexit Party are attracting around one in ten people who voted Conservative in 2017, which will make Boris Johnson’s task of capturing the crucial ‘Labour Leave’ seats harder. There is clear evidence that Johnson has been curbing Farage’s appeal, but it remains unclear how this rivalry on the right will play out from one seat to the next.

One clue as to what happens next can be found in those leadership ratings. While 80 per cent of Brexit Party voters back Johnson over Corbyn, only 25 per cent of Liberal Democrat voters back Corbyn over Johnson. Johnson may find it easier to consolidate the Leave vote than Corbyn will find the task of consolidating the Remain vote.

All of this reflects another reason why the election is unpredictable: volatility. This election is already Britain’s fifth nationwide election in only four years. After the 2015 general election, 2016 EU referendum, 2017 general election and 2019 European parliament elections, Britain’s political system and electorate have been in a state of almost continual flux. Along the way, a large number of voters have reassessed their loyalties.

As the British Election Study makes clear, the current rate of ‘vote-switching’ in British politics, where people switch their vote from one election to the next, is largely unprecedented in the post-war era. Across the three elections held in 2010, 2015 and 2017, a striking 49 per cent of people switched their vote.

This is not all about Brexit. Attachment to the main parties has been weakening since the 1960s. But Brexit is now accelerating this process as tribal identities as ‘Remainers’ or ‘Leavers’ cut across traditional party loyalties. All this volatility not only gives good reason to expect further shifts in support during the campaign but to also meet any confident predictions about the election result with a healthy dose of scepticism.




predict

Predictions and Policymaking: Complex Modelling Beyond COVID-19

1 April 2020

Yasmin Afina

Research Assistant, International Security Programme

Calum Inverarity

Research Analyst and Coordinator, International Security Programme
The COVID-19 pandemic has highlighted the potential of complex systems modelling for policymaking but it is crucial to also understand its limitations.


A member of the media wearing a protective face mask works in Downing Street where Britain's Prime Minister Boris Johnson is self-isolating in central London, 27 March 2020. Photo by TOLGA AKMEN/AFP via Getty Images.

Complex systems models have played a significant role in informing and shaping the public health measures adopted by governments in the context of the COVID-19 pandemic. For instance, modelling carried out by a team at Imperial College London is widely reported to have driven the approach in the UK from a strategy of mitigation to one of suppression.

Complex systems modelling will increasingly feed into policymaking by predicting a range of potential correlations, results and outcomes based on a set of parameters, assumptions, data and pre-defined interactions. It is already instrumental in developing risk mitigation and resilience measures to address and prepare for existential crises such as pandemics, prospects of a nuclear war, as well as climate change.

The human factor

In the end, model-driven approaches must stand up to the test of real-life data. Modelling for policymaking must take into account a number of caveats and limitations. Models are developed to help answer specific questions, and their predictions will depend on the hypotheses and definitions set by the modellers, which are subject to their individual and collective biases and assumptions. For instance, the models developed by Imperial College came with the caveated assumption that a policy of social distancing for people over 70 will have a 75 per cent compliance rate. This assumption is based on the modellers’ own perceptions of demographics and society, and may not reflect all societal factors that could impact this compliance rate in real life, such as gender, age, ethnicity, genetic diversity, economic stability, as well as access to food, supplies and healthcare. This is why modelling benefits from a cognitively diverse team who bring a wide range of knowledge and understanding to the early creation of a model.
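The point about assumptions can be illustrated with a toy compartmental model. This is a minimal sketch with made-up parameters, not the Imperial College model; it shows only how a single assumed compliance rate propagates into the headline prediction.

```python
# Toy SIR model (hypothetical parameters) illustrating how one assumed
# compliance rate shifts a model's headline output: the epidemic peak.

def sir_peak_infected(compliance, beta=0.3, gamma=0.1, days=300, dt=0.1):
    """Euler-integrate an SIR model in which social distancing cuts the
    effective contact rate in proportion to the assumed compliance."""
    s, i, r = 0.999, 0.001, 0.0                  # fractions of the population
    eff_beta = beta * (1.0 - 0.5 * compliance)   # assume distancing halves contacts for compliers
    peak = i
    for _ in range(int(days / dt)):
        new_inf = eff_beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

for c in (0.5, 0.75, 1.0):
    print(f"compliance {c:.0%}: peak infected fraction {sir_peak_infected(c):.3f}")
```

Varying the compliance assumption alone moves the predicted peak substantially, which is why the modellers' caveats matter as much as the model itself.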

The potential of artificial intelligence

Machine learning, or artificial intelligence (AI), has the potential to advance the capacity and accuracy of modelling techniques by identifying new patterns and interactions, and overcoming some of the limitations resulting from human assumptions and bias. Yet, increasing reliance on these techniques raises the issue of explainability. Policymakers need to be fully aware and understand the model, assumptions and input data behind any predictions and must be able to communicate this aspect of modelling in order to uphold democratic accountability and transparency in public decision-making.

In addition, models using machine learning techniques require extensive amounts of data, which must also be of high quality and as free from bias as possible to ensure accuracy and address the issues at stake. Although technology may be used in the process (i.e. automated extraction and processing of information with big data), data is ultimately created, collected, aggregated and analysed by and for human users. Datasets will reflect the individual and collective biases and assumptions of those creating, collecting, processing and analysing this data. Algorithmic bias is inevitable, and it is essential that policy- and decision-makers are fully aware of how reliable the systems are, as well as their potential social implications.

The age of distrust

Increasing use of emerging technologies for data- and evidence-based policymaking is taking place, paradoxically, in an era of growing mistrust towards expertise and experts, infamously voiced by Michael Gove. Policymakers and subject-matter experts have faced increased public scrutiny of their findings and of the policies those findings have been used to justify.

This distrust and scepticism within public discourse has only been fuelled by an ever-increasing availability of diffuse sources of information, not all of which is verifiable and robust. This has caused tension between experts, policymakers and the public, leading to conflict and uncertainty over which data and predictions can be trusted, and to what degree. The dynamic is exacerbated by the fact that certain individuals may purposefully misappropriate, or simply misinterpret, data to support their arguments or policies. Politicians are presently considered the least trusted professionals by the UK public, highlighting the importance of better and more effective communication between the scientific community, policymakers and the populations affected by policy decisions.

Acknowledging limitations

While measures can and should be built in to improve the transparency and robustness of scientific models in order to counteract these common criticisms, it is important to acknowledge that there are limitations to the steps that can be taken. This is particularly the case when dealing with predictions of future events, which inherently involve degrees of uncertainty that cannot be fully accounted for by human or machine. As a result, if not carefully considered and communicated, the increased use of complex modelling in policymaking holds the potential to undermine and obfuscate the policymaking process, which may contribute towards significant mistakes being made, increased uncertainty, lack of trust in the models and in the political process and further disaffection of citizens.

The potential contribution of complexity modelling to the work of policymakers is undeniable. However, it is imperative to appreciate the inner workings and limitations of these models, such as the biases that underpin their functioning and the uncertainties that they will not be fully capable of accounting for, in spite of their immense power. They must be tested against the data, again and again, as new information becomes available or there is a risk of scientific models becoming embroiled in partisan politicization and potentially weaponized for political purposes. It is therefore important not to consider these models as oracles, but instead as one of many contributions to the process of policymaking.




predict

Ukraine's Unpredictable Presidential Elections




predict

Phosphotyrosine-based Phosphoproteomics for Target Identification and Drug Response Prediction in AML Cell Lines [Research]

Acute myeloid leukemia (AML) is a clonal disorder arising from hematopoietic myeloid progenitors. Aberrantly activated tyrosine kinases (TK) are involved in leukemogenesis and are associated with poor treatment outcome. Kinase inhibitor (KI) treatment has shown promise in improving patient outcome in AML. However, inhibitor selection for patients is suboptimal.

In a preclinical effort to address KI selection, we analyzed a panel of 16 AML cell lines using phosphotyrosine (pY) enrichment-based, label-free phosphoproteomics. The Integrative Inferred Kinase Activity (INKA) algorithm was used to identify hyperphosphorylated, active kinases as candidates for KI treatment, and efficacy of selected KIs was tested.

Heterogeneous signaling was observed with between 241 and 2764 phosphopeptides detected per cell line. Of 4853 identified phosphopeptides with 4229 phosphosites, 4459 phosphopeptides (4430 pY) were linked to 3605 class I sites (3525 pY). INKA analysis in single cell lines successfully pinpointed driver kinases (PDGFRA, JAK2, KIT and FLT3) corresponding with activating mutations present in these cell lines. Furthermore, potential receptor tyrosine kinase (RTK) drivers, undetected by standard molecular analyses, were identified in four cell lines (FGFR1 in KG-1 and KG-1a, PDGFRA in Kasumi-3, and FLT3 in MM6). These cell lines proved highly sensitive to specific KIs. Six AML cell lines without a clear RTK driver showed evidence of MAPK1/3 activation, indicative of the presence of activating upstream RAS mutations. Importantly, FLT3 phosphorylation was demonstrated in two clinical AML samples with a FLT3 internal tandem duplication (ITD) mutation.

Our data show the potential of pY-phosphoproteomics and INKA analysis to provide insight in AML TK signaling and identify hyperactive kinases as potential targets for treatment in AML cell lines. These results warrant future investigation of clinical samples to further our understanding of TK phosphorylation in relation to clinical response in the individual patient.




predict

Predicting Storm Surge

Storm surge is often the most devastating part of a hurricane. Mathematical models used to predict surge must incorporate the effects of winds, atmospheric pressure, tides, waves and river flows, as well as the geometry and topography of the coastal ocean and the adjacent floodplain. Equations from fluid dynamics describe the movement of water, but such huge systems of equations most often need to be solved by numerical analysis in order to better forecast where potential flooding will occur. Much of the detailed geometry and topography on or near a coast requires very fine precision to model, while other regions, such as large open expanses of deep water, can typically be solved with much coarser resolution. Using a single scale throughout therefore either produces too much data to be feasible or is not very predictive in the area of greatest concern, the coastal floodplain. Researchers solve this problem with an unstructured grid whose element size adapts to the relevant regions and allows information to be coupled from the ocean to the coast and inland. The model was very accurate in tests of historical storms in southern Louisiana and is being used to design better and safer levees in the region and to evaluate the safety of all coastal regions. For More Information: A New Generation Hurricane Storm Surge Model for Southern Louisiana, by Joannes Westerink et al.
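A toy one-dimensional sketch of the grid-grading idea (assumed geometry and spacings, not the actual unstructured model of Westerink et al.) shows why adapting resolution is so much cheaper than a uniformly fine grid:

```python
# Toy 1-D graded grid: fine spacing at the coast, geometrically growing
# toward deep water, versus a uniform grid at the fine spacing everywhere.

def graded_grid(x_coast=0.0, x_deep=100.0, h_fine=0.01, h_coarse=1.0, growth=1.1):
    """Build node positions whose spacing starts at h_fine near the coast
    and grows by `growth` each step, capped at the offshore spacing h_coarse."""
    nodes, x, h = [x_coast], x_coast, h_fine
    while x < x_deep:
        x += h
        nodes.append(min(x, x_deep))      # clamp the last node to the domain edge
        h = min(h * growth, h_coarse)
    return nodes

adaptive = graded_grid()
uniform_fine = int(100.0 / 0.01) + 1      # uniform grid at the fine spacing
print(f"adaptive nodes: {len(adaptive)}, uniform fine nodes: {uniform_fine}")
```

The graded grid needs a tiny fraction of the nodes of the uniform one while keeping full resolution where flooding is actually predicted, which is the same trade-off the unstructured coastal meshes exploit in two dimensions.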




predict

Predicting Climate - Part 2

What's in store for our climate and us? It's an extraordinarily complex question whose answer requires physics, chemistry, earth science, and mathematics (among other subjects) along with massive computing power. Mathematicians use partial differential equations to model the movement of the atmosphere; dynamical systems to describe the feedback between land, ocean, air, and ice; and statistics to quantify the uncertainty of current projections. Although there is some discrepancy among different climate forecasts, researchers all agree on the tremendous need for people to join this effort and create new approaches to help understand our climate. It's impossible to predict the weather even two weeks in advance, because almost identical sets of temperature, pressure, etc. can in just a few days result in drastically different weather. So how can anyone make a prediction about long-term climate? The answer is that climate is an average of weather conditions. In the same way that good predictions about the average height of 100 people can be made without knowing the height of any one person, forecasts of climate years into the future are feasible without being able to predict the conditions on a particular day. The challenge now is to gather more data and use subjects such as fluid dynamics and numerical methods to extend today's 20-year projections forward to the next 100 years. For More Information: Mathematics of Climate Change: A New Discipline for an Uncertain Century, Dana Mackenzie, 2007.
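The weather-versus-climate point can be sketched with a toy chaotic system. The logistic map stands in for "weather" here purely as an illustration; it is not a climate model:

```python
# Toy demonstration: individual chaotic trajectories ("weather") are
# unpredictable, but their long-run averages ("climate") are stable.

def trajectory(x0, steps=20000, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x) from x0."""
    xs, x = [], x0
    for _ in range(steps):
        x = r * x * (1.0 - x)   # tiny differences in x0 grow exponentially
        xs.append(x)
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)                    # almost identical starting point
print("values at step 50:", a[50], b[50])     # "weather": already very different
print("long-run means:", sum(a) / len(a), sum(b) / len(b))  # "climate": both near 0.5
```

Despite the two runs diverging within a few dozen steps, their time averages agree closely, which is the sense in which climate projections can outlive the two-week weather limit.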




predict

Predicting Climate - Part 1

What's in store for our climate and us? It's an extraordinarily complex question whose answer requires physics, chemistry, earth science, and mathematics (among other subjects) along with massive computing power. Mathematicians use partial differential equations to model the movement of the atmosphere; dynamical systems to describe the feedback between land, ocean, air, and ice; and statistics to quantify the uncertainty of current projections. Although there is some discrepancy among different climate forecasts, researchers all agree on the tremendous need for people to join this effort and create new approaches to help understand our climate. It's impossible to predict the weather even two weeks in advance, because almost identical sets of temperature, pressure, etc. can in just a few days result in drastically different weather. So how can anyone make a prediction about long-term climate? The answer is that climate is an average of weather conditions. In the same way that good predictions about the average height of 100 people can be made without knowing the height of any one person, forecasts of climate years into the future are feasible without being able to predict the conditions on a particular day. The challenge now is to gather more data and use subjects such as fluid dynamics and numerical methods to extend today's 20-year projections forward to the next 100 years. For More Information: Mathematics of Climate Change: A New Discipline for an Uncertain Century, Dana Mackenzie, 2007.




predict

Minimum energy requirements for microbial communities to live predicted

(University of Warwick) A microbial community is a complex, dynamic system composed of hundreds of species and their interactions, they are found in oceans, soil, animal guts and plant roots. Each system feeds the Earth's ecosystem and their own growth, as they each have their own metabolism that underpin biogeochemical cycles. Researchers from the School of Life Sciences at the University of Warwick have produced an extendable thermodynamic model for simulating the dynamics of microbial communities.




predict

Ocean acidification prediction now possible years in advance

(University of Colorado at Boulder) CU Boulder researchers have developed a method that could enable scientists to accurately forecast ocean acidity up to five years in advance. This would enable fisheries and communities that depend on seafood negatively affected by ocean acidification to adapt to changing conditions in real time, improving economic and food security in the next few decades.




predict

An Uncertain Future: Predicting the Economy After COVID-19

Abby Joseph Cohen and Alexis Crow share insights on the economic impact of COVID-19 in a discussion moderated by Pierre Yared. 




predict

Artificial Intelligence Prediction and Counterterrorism

9 August 2019

The use of AI in counterterrorism is not inherently wrong, and this paper suggests some necessary conditions for legitimate use of AI as part of a predictive approach to counterterrorism on the part of liberal democratic states.

Kathleen McKendrick

British Army Officer, Former Visiting Research Fellow at Chatham House


Surveillance cameras manufactured by Hangzhou Hikvision Digital Technology Co. at a testing station near the company’s headquarters in Hangzhou, China. Photo: Getty Images

Summary

  • The use of predictive artificial intelligence (AI) in countering terrorism is often assumed to have a deleterious effect on human rights, generating spectres of ‘pre-crime’ punishment and surveillance states. However, the well-regulated use of new capabilities may enhance states’ abilities to protect citizens’ right to life, while at the same time improving adherence to principles intended to protect other human rights, such as transparency, proportionality and freedom from unfair discrimination. The same regulatory framework could also contribute to safeguarding against broader misuse of related technologies.
  • Most states focus on preventing terrorist attacks, rather than reacting to them. As such, prediction is already central to effective counterterrorism. AI allows higher volumes of data to be analysed, and may perceive patterns in those data that would, for reasons of both volume and dimensionality, otherwise be beyond the capacity of human interpretation. The impact of this is that traditional methods of investigation that work outwards from known suspects may be supplemented by methods that analyse the activity of a broad section of an entire population to identify previously unknown threats.
  • Developments in AI have amplified the ability to conduct surveillance without being constrained by resources. Facial recognition technology, for instance, may enable the complete automation of surveillance using CCTV in public places in the near future.
  • The current way predictive AI capabilities are used presents a number of interrelated problems from both a human rights and a practical perspective. Where limitations and regulations do exist, they may have the effect of curtailing the utility of approaches that apply AI, while not necessarily safeguarding human rights to an adequate extent.
  • The infringement of privacy associated with the automated analysis of certain types of public data is not wrong in principle, but the analysis must be conducted within a robust legal and policy framework that places sensible limitations on interventions based on its results.
  • In future, broader access to less intrusive aspects of public data, direct regulation of how those data are used – including oversight of activities by private-sector actors – and the imposition of technical as well as regulatory safeguards may improve both operational performance and compliance with human rights legislation. It is important that any such measures proceed in a manner that is sensitive to the impact on other rights such as freedom of expression, and freedom of association and assembly.




predict

Predictions and Policymaking: Complex Modelling Beyond COVID-19

1 April 2020

Yasmin Afina

Research Assistant, International Security Programme

Calum Inverarity

Research Analyst and Coordinator, International Security Programme
The COVID-19 pandemic has highlighted the potential of complex systems modelling for policymaking but it is crucial to also understand its limitations.


A member of the media wearing a protective face mask works in Downing Street where Britain's Prime Minister Boris Johnson is self-isolating in central London, 27 March 2020. Photo by TOLGA AKMEN/AFP via Getty Images.

Complex systems models have played a significant role in informing and shaping the public health measures adopted by governments in the context of the COVID-19 pandemic. For instance, modelling carried out by a team at Imperial College London is widely reported to have driven the approach in the UK from a strategy of mitigation to one of suppression.

Complex systems modelling will increasingly feed into policymaking by predicting a range of potential correlations, results and outcomes based on a set of parameters, assumptions, data and pre-defined interactions. It is already instrumental in developing risk mitigation and resilience measures to address and prepare for existential crises such as pandemics, nuclear war and climate change.
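To make the dependence on modeller-chosen parameters concrete, here is a minimal SIR (susceptible-infected-recovered) epidemic sketch in Python. It is not the Imperial College model; the transmission and recovery rates, and the effect attributed to an intervention, are purely illustrative assumptions.

```python
# Minimal SIR epidemic sketch (illustrative only; not the Imperial College model).
# beta (transmission rate) and gamma (recovery rate) are assumed values chosen
# by the modeller; the point is that the prediction depends on them.

def sir_peak_infected(beta, gamma, s0=0.999, i0=0.001, dt=0.1, days=365):
    """Forward-Euler integration of the SIR equations; returns the peak
    infected fraction of the population over the simulated period."""
    s, i = s0, i0
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i             # susceptibles becoming infected
        di = beta * s * i - gamma * i  # new infections minus recoveries
        s += ds * dt
        i += di * dt
        peak = max(peak, i)
    return peak

# Halving the effective contact rate (e.g. a distancing policy with an
# assumed compliance level) lowers the predicted epidemic peak.
unmitigated = sir_peak_infected(beta=0.3, gamma=0.1)
mitigated = sir_peak_infected(beta=0.15, gamma=0.1)
print(unmitigated, mitigated)
```

Changing a single assumed parameter, here the contact rate, changes the headline prediction substantially, which is exactly why the assumptions behind a model matter as much as its outputs.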

The human factor

In the end, model-driven approaches must stand up to the test of real-life data. Modelling for policymaking must take into account a number of caveats and limitations. Models are developed to help answer specific questions, and their predictions will depend on the hypotheses and definitions set by the modellers, which are subject to their individual and collective biases and assumptions. For instance, the models developed by Imperial College came with the caveated assumption that a policy of social distancing for people over 70 would have a 75 per cent compliance rate. This assumption is based on the modellers’ own perceptions of demographics and society, and may not reflect all the societal factors that could affect the compliance rate in real life, such as gender, age, ethnicity, genetic diversity, economic stability, and access to food, supplies and healthcare. This is why modelling benefits from a cognitively diverse team that brings a wide range of knowledge and understanding to the early creation of a model.

The potential of artificial intelligence

Machine learning, or artificial intelligence (AI), has the potential to advance the capacity and accuracy of modelling techniques by identifying new patterns and interactions, and by overcoming some of the limitations resulting from human assumptions and bias. Yet increasing reliance on these techniques raises the issue of explainability. Policymakers need to be fully aware of, and understand, the model, assumptions and input data behind any prediction, and must be able to communicate this aspect of modelling in order to uphold democratic accountability and transparency in public decision-making.

In addition, models using machine learning techniques require extensive amounts of data, which must also be of high quality and as free from bias as possible to ensure accuracy and address the issues at stake. Although technology may be used in the process (i.e. automated extraction and processing of information with big data), data is ultimately created, collected, aggregated and analysed by and for human users. Datasets will reflect the individual and collective biases and assumptions of those creating, collecting, processing and analysing this data. Algorithmic bias is inevitable, and it is essential that policy- and decision-makers are fully aware of how reliable the systems are, as well as their potential social implications.

The age of distrust

The increasing use of emerging technologies for data- and evidence-based policymaking is taking place, paradoxically, in an era of growing mistrust towards expertise and experts, as infamously summed up by Michael Gove. Policymakers and subject-matter experts have faced increased public scrutiny of their findings and of the resultant policies they have been used to justify.

This distrust and scepticism within public discourse has only been fuelled by the ever-increasing availability of diffuse sources of information, not all of which are verifiable and robust. This has caused tension between experts, policymakers and the public, leading to conflict and uncertainty over which data and predictions can be trusted, and to what degree. The dynamic is exacerbated by the fact that certain individuals may purposefully misappropriate, or simply misinterpret, data to support their arguments or policies. Politicians are presently considered the least trusted professionals by the UK public, highlighting the importance of better and more effective communication between the scientific community, policymakers and the populations affected by policy decisions.

Acknowledging limitations

While measures can and should be built in to improve the transparency and robustness of scientific models in order to counteract these common criticisms, it is important to acknowledge that there are limits to the steps that can be taken. This is particularly the case when dealing with predictions of future events, which inherently involve degrees of uncertainty that cannot be fully accounted for by human or machine. As a result, if not carefully considered and communicated, the increased use of complex modelling in policymaking risks undermining and obfuscating the policymaking process. This may contribute to significant mistakes, increased uncertainty, loss of trust in the models and in the political process, and further disaffection of citizens.

The potential contribution of complexity modelling to the work of policymakers is undeniable. However, it is imperative to appreciate the inner workings and limitations of these models, such as the biases that underpin their functioning and the uncertainties that they will never fully account for, in spite of their immense power. They must be tested against the data, again and again, as new information becomes available; otherwise there is a risk of scientific models becoming embroiled in partisan politicization and potentially being weaponized for political purposes. It is therefore important not to treat these models as oracles, but as one of many contributions to the process of policymaking.




predict

Combined Visual and Semi-quantitative Evaluation Improves Outcome Prediction by Early Mid-treatment 18F-fluoro-deoxy-glucose Positron Emission Tomography in Diffuse Large B-cell Lymphoma.

The purpose of this study was to assess the predictive and prognostic value of interim FDG PET (iPET) in evaluating early response to immuno-chemotherapy after two cycles (PET-2) in diffuse large B-cell lymphoma (DLBCL) by applying two different methods of interpretation: the Deauville visual five-point scale (5-PS) and a semi-quantitative evaluation of the change in standardised uptake value (ΔSUV). Methods: 145 patients with newly diagnosed DLBCL underwent pre-treatment PET (PET-0) and PET-2 assessment. PET-2 was classified according to both the visual 5-PS and the percentage SUV change (ΔSUV). Receiver operating characteristic (ROC) analysis was performed to compare the accuracy of the two methods for predicting progression-free survival (PFS). Survival estimates, based on each method separately and combined, were calculated for iPET-positive (iPET+) and iPET-negative (iPET–) groups and compared. Results: With both the visual and the ΔSUV-based evaluations, significant differences were found between the PFS of the iPET– and iPET+ patient groups (p<0.001). Visually, the best negative (NPV) and positive predictive values (PPV) occurred when iPET was defined as positive for Deauville scores 4-5 (89% and 59%, respectively). Using the 66% ΔSUV cut-off value reported previously, NPV and PPV were 80% and 76%, respectively. A ΔSUV cut-off of 48.9%, reported for the first time here, produced 100% specificity along with the highest sensitivity (24%). Visual and semi-quantitative ΔSUV<48.9% assessment of each PET-2 gave the same PET-2 classification (positive or negative) in 70% (102/145) of all patients. This combined classification delivered an NPV and PPV of 89% and 100% respectively, and all iPET+ patients failed to achieve or remain in remission. Conclusion: In this large, consistently treated and assessed series of DLBCL, iPET had good prognostic value whether interpreted visually or semi-quantitatively.
We determined that the most effective ΔSUV cut-off was 48.9%, and that when combined with visual 5-PS assessment, a positive PET-2 was highly predictive of treatment failure.
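For readers unfamiliar with the metrics, the positive and negative predictive values reported above follow from a 2×2 cross-classification of the iPET call against the outcome. A minimal Python sketch (the example patients below are hypothetical, not the study's data):

```python
# Computing PPV and NPV from binary iPET calls and binary outcomes.
# The example lists below are hypothetical and only illustrate the arithmetic.

def predictive_values(predictions, outcomes):
    """predictions: True = iPET-positive; outcomes: True = treatment failure."""
    tp = sum(p and o for p, o in zip(predictions, outcomes))
    fp = sum(p and not o for p, o in zip(predictions, outcomes))
    tn = sum(not p and not o for p, o in zip(predictions, outcomes))
    fn = sum(not p and o for p, o in zip(predictions, outcomes))
    ppv = tp / (tp + fp) if tp + fp else float("nan")  # P(failure | iPET+)
    npv = tn / (tn + fn) if tn + fn else float("nan")  # P(no failure | iPET-)
    return ppv, npv

# Four hypothetical patients: two iPET+, one of whom fails treatment.
ppv, npv = predictive_values([True, True, False, False],
                             [True, False, False, False])
print(ppv, npv)  # 0.5 1.0
```

A PPV of 100%, as reported for the combined classification, means every patient flagged positive went on to fail treatment.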




predict

Pre-treatment 18F-FDG PET/CT Radiomics predict local recurrence in patients treated with stereotactic radiotherapy for early-stage non-small cell lung cancer: a multicentric study

Purpose: The aim of this retrospective multicentric study was to develop and evaluate a prognostic FDG PET/CT radiomics signature in early-stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Material and Methods: Patients from 3 different centers (n = 27, 29 and 8) were pooled to constitute the training set, whereas patients from a fourth center (n = 23) were used as the testing set. The primary endpoint was local control (LC). The primary tumour was semi-automatically delineated in the PET images using the Fuzzy locally adaptive Bayesian algorithm, and manually in the low-dose CT images. A total of 184 IBSI-compliant radiomic features were extracted. Seven clinical and treatment parameters were also included. ComBat was used to harmonize the radiomic features extracted across the four institutions, which relied on different PET/CT scanners. In the training set, variables found significant in univariate analysis were fed into a multivariate regression model, and models were built by combining independent prognostic factors. Results: Median follow-up was 21.1 (1.7–63.4) and 25.5 (7.7–57.8) months in the training and testing sets respectively. In univariate analysis, none of the clinical variables was significantly predictive of LC, whereas 2 PET and 2 CT features were. The best predictive model in the training set was obtained by combining one feature from PET, namely information correlation 2 (IC2), and one from CT (Flatness), reaching a sensitivity of 100% and a specificity of 96%. Another model, combining 2 PET features (IC2 and Strength), reached a sensitivity of 100% and a specificity of 88%, both with an undefined hazard ratio (HR) (p<0.001). The latter model achieved an accuracy of 0.91 (sensitivity 100%, specificity 81%) with an undefined HR (P = 0.023) in the testing set; however, models relying on CT radiomics features only, or on the combination of PET and CT features, failed to validate in the testing set.
Conclusion: We showed that two radiomic features derived from FDG PET were independently associated with LC in patients with NSCLC undergoing SBRT and could be combined in an accurate predictive model. This model could provide local relapse-related information and could be helpful in clinical decision-making.
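The kind of model described, two radiomic features combined in a multivariate classifier and scored by sensitivity and specificity, can be sketched as follows. The feature values are synthetic, and the use of scikit-learn logistic regression is an illustrative assumption; the abstract does not state the implementation.

```python
# Sketch of a two-feature multivariate model scored by sensitivity/specificity.
# Feature values are synthetic; scikit-learn logistic regression is an assumed
# stand-in for the study's (unspecified) multivariate regression model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y = np.array([0] * 40 + [1] * 20)  # 1 = local recurrence (synthetic labels)

# Synthetic stand-ins for the PET "IC2" and CT "Flatness" features.
ic2 = np.where(y == 1, 0.8, 0.3) + rng.normal(0, 0.05, y.size)
flatness = np.where(y == 1, 0.2, 0.6) + rng.normal(0, 0.05, y.size)
X = np.column_stack([ic2, flatness])

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

tp = int(((pred == 1) & (y == 1)).sum())
fn = int(((pred == 0) & (y == 1)).sum())
tn = int(((pred == 0) & (y == 0)).sum())
fp = int(((pred == 1) & (y == 0)).sum())
sensitivity = tp / (tp + fn)  # recurrences correctly flagged
specificity = tn / (tn + fp)  # controlled tumours correctly cleared
print(sensitivity, specificity)
```

With well-separated synthetic features the model scores near-perfectly on its own training data; the abstract's point about the testing set is precisely that such training-set performance must be validated on data from an independent center.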




predict

Inflammation-based index and 68Ga-DOTATOC PET-derived uptake and volumetric parameters predict outcome in neuroendocrine tumor patients treated with 90Y-DOTATOC

We performed post-hoc analyses of the utility of pre-therapeutic and early interim 68Ga-DOTA-Tyr3-octreotide (68Ga-DOTATOC) positron emission tomography (PET) tumor uptake and volumetric parameters, and of a recently proposed biomarker, the inflammation-based index (IBI), for peptide receptor radionuclide therapy (PRRT) in neuroendocrine tumor (NET) patients treated with 90Y-DOTATOC in the setting of a prospective phase II trial. Methods: Forty-three NET patients received up to four cycles of 1.85 GBq/m²/cycle 90Y-DOTATOC, with a maximal kidney biologic effective dose of 37 Gy. All patients underwent 68Ga-DOTATOC PET/computed tomography (CT) at baseline and seven weeks after the first PRRT cycle. 68Ga-DOTATOC-avid tumor lesions were semi-automatically delineated using a customized standardized uptake value (SUV) threshold-based approach. PRRT response was assessed on CT using RECIST 1.1. Results: Median progression-free survival (PFS) and overall survival (OS) were 13.9 and 22.3 months, respectively. An SUVmean higher than 13.7 (75th percentile (P75)) was associated with better survival (hazard ratio (HR) 0.45; P = 0.024), whereas a 68Ga-DOTATOC-avid tumor volume higher than 578 ml (P75) was associated with worse OS (HR 2.18; P = 0.037). Elevated baseline IBI was associated with worse OS (HR 3.90; P = 0.001). Multivariate analysis corroborated independent associations between OS and SUVmean (P = 0.016) and IBI (P = 0.015). No significant correlations with PFS were found. A composite score based on SUVmean and IBI allowed patients to be further stratified into three categories with significantly different survival. On early interim PET, a decrease in SUVmean of more than 17% (P75) was associated with worse survival (HR 2.29; P = 0.024). Conclusion: Normal baseline IBI and high 68Ga-DOTATOC tumor uptake predict better outcome in NET patients treated with 90Y-DOTATOC. This can be used for treatment personalization.
Interim 68Ga-DOTATOC PET does not provide information for treatment personalization.
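The abstract does not specify how the composite score is constructed; a plausible minimal reconstruction simply counts the adverse factors it identifies (low SUVmean, i.e. at or below the 13.7 P75 cut-off, and elevated baseline IBI), yielding three risk categories:

```python
# Hypothetical reconstruction of the composite score: count adverse factors.
# Low tumour uptake (SUVmean at or below the P75 cut-off of 13.7) and elevated
# baseline IBI were each associated with worse OS in the abstract.

SUVMEAN_P75 = 13.7  # cut-off reported in the abstract

def composite_score(suv_mean, ibi_elevated):
    """Return 0 (best) to 2 (worst): the number of adverse factors present."""
    score = 0
    if suv_mean <= SUVMEAN_P75:  # low uptake: adverse (high uptake had HR 0.45)
        score += 1
    if ibi_elevated:             # elevated baseline IBI: adverse (HR 3.90)
        score += 1
    return score

# Hypothetical patients illustrating the three risk categories:
print(composite_score(15.0, False))  # 0: high uptake, normal IBI
print(composite_score(10.0, False))  # 1: low uptake only
print(composite_score(10.0, True))   # 2: low uptake and elevated IBI
```

The exact scoring rule in the study may differ; this sketch only illustrates how two independent prognostic factors can be combined into a three-level stratification.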




predict

Head to head prospective comparison of quantitative lung scintigraphy and segment counting in predicting pulmonary function of lung cancer patients undergoing video-assisted thoracoscopic lobectomy

Prediction of post-operative pulmonary function in lung cancer patients before tumor resection is essential for patient selection for surgery and is conventionally done with a non-imaging segment counting method (SC) or two-dimensional planar lung perfusion scintigraphy (PS). The purpose of this study was to compare quantitative analysis of PS to single photon emission computed tomography/computed tomography (SPECT/CT) and to estimate the accuracy of SC, PS and SPECT/CT in predicting post-operative pulmonary function in patients undergoing lobectomy. Methods: Seventy-five non-small cell lung cancer (NSCLC) patients planned for lobectomy were prospectively enrolled (68% males, average age 68.1±8 years). All patients completed pre-operative measurement of forced expiratory volume in one second (FEV1) and diffusing capacity of the lung for carbon monoxide (DLCO), and 99mTc-MAA lung perfusion scintigraphy with PS and SPECT/CT quantification. A subgroup of 60 patients underwent video-assisted thoracoscopic (VATS) lobectomy and measurement of post-operative FEV1 and DLCO. Relative uptake values of the lung lobes estimated by PS and SPECT/CT were compared. Predicted post-operative FEV1 and DLCO were derived from SC, PS and SPECT/CT. Prediction results were compared between the different methods and against the true post-operative measurements in patients who underwent lobectomy. Results: Relative uptake measurements differed significantly between PS and SPECT/CT in the right lung lobes, with mean differences of -8.2±3.8, 18.0±5.0 and -11.5±6.1 for the right upper, middle and lower lobes respectively (p<0.001). The differences between the methods in the left lung lobes were minor, with mean differences of -0.4±4.4 (p>0.05) and -2.0±4.0 (p<0.001) for the left upper and lower lobes respectively. No significant differences, and strong correlations (R=0.6-0.76, p<0.001), were found between the post-operative lung function values predicted by SC, PS and SPECT/CT and the actual post-operative FEV1 and DLCO.
Conclusion: Although lobar quantification parameters differed significantly between PS and SPECT/CT, no significant differences were found between the predicted post-operative lung function results derived from these methods and the actual post-operative results. The additional time and effort of SPECT/CT quantification may not have an added value in patient selection for surgery. SPECT/CT may be advantageous in patients planned for right lobectomies but further research is warranted.
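The abstract does not give the prediction equations, but the conventional forms of segment-counting and perfusion-based prediction can be sketched as below. The total of 19 bronchopulmonary segments is the standard convention, and the patient values in the usage example are hypothetical.

```python
# Conventional post-operative lung function prediction formulas (the abstract
# does not give the equations; these are the standard forms, and the inputs
# in the usage example are hypothetical).

TOTAL_SEGMENTS = 19  # conventional count of bronchopulmonary segments

def ppo_segment_counting(pre_op_value, resected_segments):
    """Predicted post-operative value by segment counting (SC)."""
    return pre_op_value * (1 - resected_segments / TOTAL_SEGMENTS)

def ppo_perfusion(pre_op_value, resected_fraction):
    """Predicted post-operative value from the resected lobe's share of total
    perfusion, as quantified by PS or SPECT/CT (fraction between 0 and 1)."""
    return pre_op_value * (1 - resected_fraction)

# Hypothetical patient: pre-operative FEV1 of 2.4 L, right upper lobectomy
# (3 segments), with that lobe receiving 20% of total perfusion.
print(ppo_segment_counting(2.4, 3))  # SC estimate
print(ppo_perfusion(2.4, 0.20))      # perfusion-based estimate
```

The two approaches differ only in how the resected lung's contribution is estimated, anatomically (segments) versus functionally (perfusion share), which is why the study can compare them directly against measured post-operative values.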




predict

FDG-PET/CT identifies predictors of survival in patients with locally advanced cervical carcinoma and para-aortic lymph node involvement to intensify treatment

Introduction: To use positron emission tomography coupled with computed tomography (18FDG-PET/CT) to identify a high-risk subgroup requiring therapeutic intensification among patients with locally advanced cervical cancer (LACC) and para-aortic lymph node (PALN) involvement. Methods: In this retrospective multicentric study, patients with LACC and PALN involvement concurrently treated with chemoradiotherapy and extended-field radiotherapy (EFR) between 2006 and 2016 were included. A senior nuclear medicine specialist in PET for gynaecologic oncology reviewed all 18FDG-PET/CT scans. Metabolic parameters including maximum standardised uptake value (SUVmax), metabolic tumour volume (MTV) and total lesion glycolysis (TLG) were determined for the primary tumour, pelvic lymph nodes and PALN. Associations between these parameters and overall survival (OS) were assessed with Cox's proportional hazards model. Results: Sixty-eight patients were enrolled in the study. Three-year OS was 55.5% (95% CI (40.8-68.0)). When adjusted for age, stage and histology, pelvic lymph node TLG, PALN TLG and PALN SUVmax were significantly associated with OS (p<0.005). Conclusion: FDG-PET/CT was able to identify predictors of survival in the homogeneous subgroup of patients with LACC and PALN involvement, thus allowing therapeutic intensification to be proposed.