
Dysregulation of Exosome Cargo by Mutant Tau Expressed in Human-Induced Pluripotent Stem Cell (iPSC) Neurons Revealed by Proteomics Analyses

Sonia Podvin
Apr 15, 2020; 0:RA120.002079v1-mcp.RA120.002079
Research





Developments and Applications of Functional Protein Microarrays

Guan-Da Syu
Apr 17, 2020; 0:R120.001936v1-mcp.R120.001936
Review





Identification of an Unconventional Subpeptidome Bound to the Behcet's Disease-associated HLA-B*51:01 that is Regulated by Endoplasmic Reticulum Aminopeptidase 1 (ERAP1)

Liye Chen
May 1, 2020; 19:871-883
Research





Human Hepatocyte Nuclear Factor 4α Encodes Isoforms with Distinct Transcriptional Functions

Élie Lambert
May 1, 2020; 19:808-827
Research





Arginine in C9ORF72 Dipolypeptides Mediates Promiscuous Proteome Binding and Multiple Modes of Toxicity

Mona Radwan
Apr 1, 2020; 19:640-654
Research





Immediate adaptation analysis implicates BCL6 as an EGFR-TKI combination therapy target in NSCLC

Yan Zhou Tran
Mar 31, 2020; 0:RA120.002036v1-mcp.RA120.002036
Research





A cross-linking mass spectrometry approach defines protein interactions in yeast mitochondria

Andreas Linden
Apr 24, 2020; 0:RA120.002028v1-mcp.RA120.002028
Research





Characterization of signaling pathways associated with pancreatic β-cell adaptive flexibility in compensation of obesity-linked diabetes in db/db mice

Taewook Kang
Apr 7, 2020; 0:RA119.001882v1-mcp.RA119.001882
Research





Discovery of a Redox Thiol Switch: Implications for Cellular Energy Metabolism

Xing-Huang Gao
May 1, 2020; 19:852-870
Research





Proteome and phosphoproteome analysis of brown adipocytes reveals that RICTOR loss dampens global insulin/AKT signaling

Samuel W Entwisle
Apr 6, 2020; 0:RA120.001946v2-mcp.RA120.001946
Research





Large-scale Identification of N-linked Intact Glycopeptides in Human Serum using HILIC Enrichment and Spectral Library Search

Qingbo Shu
Apr 1, 2020; 19:672-689
Research





Selection of features with consistent profiles improves relative protein quantification in mass spectrometry experiments

Tsung-Heng Tsai
Mar 31, 2020; 0:RA119.001792v1-mcp.RA119.001792
Research





DEqMS: a method for accurate variance estimation in differential protein expression analysis

Yafeng Zhu
Mar 23, 2020; 0:TIR119.001646v1-mcp.TIR119.001646
Technological Innovation and Resources





Improving Identification of In-organello Protein-Protein Interactions Using an Affinity-enrichable, Isotopically Coded, and Mass Spectrometry-cleavable Chemical Crosslinker

Karl A. T. Makepeace
Apr 1, 2020; 19:624-639
Research





MaxQuant software for ion mobility enhanced shotgun proteomics

Nikita Prianichnikov
Mar 10, 2020; 0:TIR119.001720v1-mcp.TIR119.001720
Technological Innovation and Resources





An Improved Boosting to Amplify Signal with Isobaric Labeling (iBASIL) Strategy for Precise Quantitative Single-cell Proteomics

Chia-Feng Tsai
May 1, 2020; 19:828-838
Research





Acquiring and Analyzing Data Independent Acquisition Proteomics Experiments without Spectrum Libraries

Lindsay K Pino
Apr 20, 2020; 0:P119.001913v1-mcp.P119.001913
Perspective





Proximity Dependent Biotinylation: Key Enzymes and Adaptation to Proteomics Approaches

Payman Samavarchi-Tehrani
May 1, 2020; 19:757-773
Review





Microsoft delivers fixes for 110 bugs in April 2020 Patch Tuesday

For the April edition of Patch Tuesday, Microsoft repaired a total of 110 security vulnerabilities across its product line. Included in this count were 37 remote code execution bugs and 33 elevation of privilege bugs. The company rated eighteen of the vulnerabilities “Critical.” This release’s most notable item is the follow-up to last month’s announcement, […]





Following the money in a massive “sextortion” spam scheme

Cryptocurrency profits from sextortion spam funneled into wallets tied to other cybercrime and dark web market activity.






Mathematical light shines blindly on us

By William Yslas Vélez Professor Emeritus University of Arizona “When I go to a Mexican restaurant I would gladly pay the musicians to stop playing.” John (not his real name) did not like the noise level. This statement came up … Continue reading





A New Type of Learning Community

Setting high standards is expected from all educators. Yet, I think I may have taken this to an extreme in my 2019 spring senior seminar course in algebraic combinatorics. Students walked into class, got a copy of the syllabus … Continue reading





Cybersecurity in the Commonwealth: Building the Foundations of Effective National Responses in the Caribbean

Invitation Only Research Event

8 March 2019 - 9:00am to 5:30pm

Bridgetown, Barbados

Event participants

Joyce Hakmeh, Cyber Research Fellow, International Security Department, Chatham House

This workshop is the second in a series in the 'Implementing the Commonwealth Cybersecurity Agenda' project. The workshop aims to provide a multi-stakeholder pan-Commonwealth platform to discuss how to take the implementation of the 'Commonwealth Cyber Declaration' forward with a focus on the second pillar of the declaration – building the foundations of an effective national cybersecurity response with eight action points. 

As such, the workshop gathers different project implementers under the UK Foreign and Commonwealth Office’s Cyber Programme, in addition to other key relevant stakeholders from the global level, to explore ongoing initiatives which aim to deliver one or more of pillar two’s action points.

The workshop addresses issues from a global perspective and a Commonwealth perspective and will include presentations from selected partners from different Commonwealth countries.

Calum Inverarity

Research Analyst and Coordinator, International Security Department
+44 (0) 207 957 5751





Transparency and Accountability for Drone Use: European Approaches

Invitation Only Research Event

11 March 2019 - 9:30am to 12 March 2019 - 12:30pm

Chatham House

With increased use of military drones in recent years there have also been many calls for greater transparency and accountability with regards to drone operations.

This would allow for greater public understanding, particularly as the complex nature of military operations today intensifies difficulties in sustaining perceptions of the legitimate use of force.

For example, in Europe, leading states rely on the US for drone platforms and for the infrastructure - such as military communication networks - that enable those operations, while the US also relies on airbases in European states to operate its drone programme.

In addition, with reports that the US is loosening the rules on the use of drones, it is important to understand how European approaches to transparency and accountability may be affected by these developments.

This workshop focuses on how European states can facilitate transparency to ensure accountability for drone use, as well as what the limits might be, considering both the complexity of military operations today and the need for achieving operational goals.

With the US easing restrictions on export controls, the discussion also considers the role of regulation in ensuring accountability and prospects for developing common standards.

Attendance at this event is by invitation only.

Nilza Amaral

Project Manager, International Security Programme





Protecting Children in Conflict: See Me Safe Symposium

Invitation Only Research Event

7 May 2019 - 10:00am to 5:00pm

Chatham House, London

Today there are 420 million children, or one-fifth of children worldwide, who live in conflict zones and are at risk of being killed or injured and denied access to education, healthcare and humanitarian assistance. From Myanmar and Syria to South Sudan and Yemen, the impact of conflict on children and their families is devastating. With conflicts becoming more protracted and urbanized, and with international rules and norms being undermined, the risk to civilians is rapidly increasing.
 
The impact of the crisis in civilian protection is not only devastating children’s lives and risking a lost generation; it also threatens global stability and prosperity, contributing to the degradation of the international rules-based system and its institutions and undermining the ability to hold perpetrators accountable and prevent these atrocities from happening.
 
This symposium will bring together practitioners, policymakers, business leaders, philanthropists and academics for a day of panel discussions on the protection of children in conflict. The aim of the event is to generate an informed debate and to deepen engagement with issues around protecting children in conflict as well as to inspire support to help rebuild children’s lives.
 
This event will be followed by a reception from 17:00-18:30.
 
Attendance is by invitation only.
 
Celebrating its centenary in 2020, Chatham House is partnering with Save the Children on this core area of their work, in their anniversary year.

Nilza Amaral

Project Manager, International Security Programme





Cyber Insurance for Civil Nuclear Facilities: Risks and Opportunities

8 May 2019

This paper sets out a roadmap for how organizations in the civil nuclear sector can explore their options and review their cyber risk exposure.

Éireann Leverett

Senior Risk Researcher, University of Cambridge


The control room inside the Paks nuclear power plant in Hungary, 10 April 2017. Photo: Getty Images
  • Civil nuclear facilities and organizations hold sensitive information on security clearances, national security, health and safety, nuclear regulatory issues and international inspection obligations. The sensitivity and variety of such data mean that products tailored for insuring the civil nuclear industry have evolved independently and are likely to continue to do so.
  • ‘Air-gaps’ – measures designed to isolate computer systems from the internet – need to be continually maintained for industrial systems. Yet years of evidence indicate that proper maintenance of such protections is often lacking (mainly because very real economic drivers exist that push users towards keeping infrastructure connected). Indeed, even when air-gaps are maintained, security breaches can still occur.
  • Even if a particular organization has staff that are highly trained, ready and capable of handling a technological accident, hacking attack or incidence of insider sabotage, it still has to do business and/or communicate with other organizations that may not have the essentials of cybersecurity in place.
  • Regardless of whether the choice is made to buy external insurance or put aside revenues in preparation for costly incidents, the approach to cyber risk calculation should be the same. Prevention is one part of the equation, but an organization will also need to consider the resources and contingency measures available to it should prevention strategies fail. Can it balance the likelihood of a hacker’s success against the maximum cost to the organization, and put aside enough capital and manpower to get it through a crisis?
  • All civil nuclear facilities should consider the establishment of computer security incident response (CSIR) teams as a relevant concern, if such arrangements are not already in place. The existence of a CSIR team will be a prerequisite for any facility seeking to obtain civil nuclear cyber insurance.
  • Preventing attacks such as those involving phishing and ransomware requires good cyber hygiene practices throughout the workforce. Reducing an organization’s ‘time to recovery’ takes training and dedication. Practising the necessary tasks in crisis simulations greatly reduces the likelihood of friction and the potential for error in a crisis.





Understanding Cybercrime for Better Policing: Regional and Global Challenges

Research Event

18 June 2019 - 9:00am to 5:30pm

Chatham House | 10 St James's Square | London | SW1Y 4LE

In recent years, cybercrime has evolved from a niche technological concern into a prominent global issue with substantial preventative and remedial costs for businesses and governments alike. Despite heavy investment in sophisticated cybersecurity measures and the adoption of several legal, organizational and capacity-building measures, cybercrime remains a major threat which is evolving on a daily basis. Today’s cybercrime is more aggressive, more complex, more organized and – importantly – more unpredictable than ever before.

The challenges posed by cybercrime are experienced acutely by countries undergoing digital transformations: as the level of connectivity rises, so too does the potential for online theft, fraud and abuse. Cybercrime is pervasive, but governments can work to limit its impact by creating a resilient overall economy and robust institutions, and by appropriately equipping law enforcement and the justice system to navigate its novel challenges.

To advance the discourse surrounding these issues, this workshop will assess the current cyber threat landscape and how it is evolving. It will identify the main obstacles encountered by law enforcement, the judiciary and prosecutors in their fight against cybercrime. It will also compare national, regional and global approaches that countries can use to effectively curb cybercrime and tackle its emerging challenges.

Calum Inverarity

Research Analyst and Coordinator, International Security Department
+44 (0) 207 957 5751





Cybersecurity of NATO’s Space-based Strategic Assets

1 July 2019

Almost all modern military engagements rely on space-based assets, but cyber vulnerabilities can undermine confidence in the performance of strategic systems. This paper will evaluate the threats, vulnerabilities and consequences of cyber risks to strategic systems.

Dr Beyza Unal

Senior Research Fellow, International Security Programme


The radar domes of RAF Menwith Hill, reported to be the biggest spy base in the world, dominate the skyline on 30 October 2007 in Harrogate, UK. Photo: Getty Images

Summary

  • All satellites depend on cyber technology including software, hardware and other digital components. Any threat to a satellite’s control system or available bandwidth poses a direct challenge to national critical assets.
  • NATO’s missions and operations are conducted in the air, land, cyber and maritime domains. Space-based architecture is fundamental to the provision of data and services in each of these contexts. The critical dependency on space has resulted in new cyber risks that disproportionately affect mission assurance. Investing in mitigation measures and in the resilience of space systems for the military is key to achieving protection in all domains.
  • Almost all modern military engagements rely on space-based assets. During the US-led invasion of Iraq in 2003, 68 per cent of US munitions were guided utilizing space-based means (including laser-, infrared- and satellite-guided munitions); up sharply from 10 per cent in 1990–91, during the first Gulf war. In 2001, 60 per cent of the weapons used by the US in Afghanistan were precision-guided munitions, many of which had the capability to use information provided by space-based assets to correct their own positioning to hit a target.
  • NATO does not own satellites. It owns and operates a few terrestrial elements, such as satellite communications anchor stations and terminals. It requests access to products and services – such as space weather reports and satellite overflight reports provided via satellite reconnaissance advance notice systems – but does not have direct access to satellites: it is up to individual NATO member states to determine whether they allow access.
  • Cyber vulnerabilities undermine confidence in the performance of strategic systems. As a result, rising uncertainty in information and analysis continues to impact the credibility of deterrence and strategic stability. Loss of trust in technology also has implications for determining the source of a malicious attack (attribution) and for strategic calculus in crisis decision-making, and may increase the risk of misperception.





Yasmin Afina

Research Assistant, International Security Programme

Biography

Yasmin Afina joined Chatham House as research assistant for the International Security programme in April 2019. She formerly worked for the United Nations Institute for Disarmament Research (UNIDIR)’s Security and Technology Programme, and the United Nations Office for Disarmament Affairs (UNODA).

Yasmin’s research at Chatham House covers projects related to nuclear weapons systems, strategic weapons systems, emerging technologies including cyber and artificial intelligence, and international law.

In her previous capacities, Yasmin’s research included international, regional and national cybersecurity policies, the international security implications of quantum computing, and algorithmic bias in autonomous technologies and law enforcement operations.

Yasmin holds an LL.M. from the Geneva Academy of International Humanitarian Law and Human Rights, an LL.B. from the University of Essex, and a French Bachelor of Laws and Postgraduate degree (Maîtrise) in International Law from the Université Toulouse I Capitole.

Areas of expertise

  • Cybersecurity of weapons systems and command, control and communication systems
  • Cybersecurity policies and governance
  • Autonomous technologies (incl. artificial intelligence, machine learning)
  • International law (incl. international humanitarian law, international human rights law, jus ad bellum)
  • Nuclear weapons policy

Past experience

2018-19  Programme assistant, security and technology, United Nations Institute for Disarmament Research (UNIDIR)
2017-18  Project assistant, emerging security issues, United Nations Institute for Disarmament Research (UNIDIR)
2017  Weapons of Mass Destruction Programme, United Nations Institute for Disarmament Research (UNIDIR)
2017-18  LL.M., Geneva Academy of International Humanitarian Law and Human Rights (CH)
2016-17  Maîtrise, Université Toulouse I Capitole (FR)
2016  Convention on Certain Conventional Weapons Implementation Support Unit, United Nations Office for Disarmament Affairs (UNODA) Geneva Branch
2013-17  LL.B., University of Essex (UK)
2013-16  Licence (Bachelor of Laws), Université Toulouse I Capitole (FR)
2014  Volunteer, World YWCA





The Destabilizing Danger of Cyberattacks on Missile Systems

2 July 2019

Dr Patricia Lewis

Research Director, Conflict, Science & Transformation; Director, International Security Programme

Dr Beyza Unal

Senior Research Fellow, International Security Programme
‘Left-of-launch’ attacks that aim to disable enemy missile systems may increase the chance of them being used, not least because the systems are so vulnerable.


This undated photo released by North Korea's news agency in March 2017 shows the launch of four ballistic missiles during a military drill at an undisclosed location in North Korea. Photo: STR/AFP/Getty Images.

After President Trump decided to halt a missile attack on Iran in response to the downing of a US drone, it was revealed that the US had conducted cyberattacks on Iranian weapons systems to prevent Iran launching missiles against US assets in the region.

This ‘left-of-launch’ strategy – pre-emptive action to prevent an adversary from launching missiles – has been part of the US missile defence strategy for some time now. President George W Bush asked the US military and intelligence community to infiltrate the supply chain of North Korean missiles. It was claimed that the US hacked the North Korean ballistic missile programme, causing a failed ballistic missile test in 2012.

It was not clear then – or now – whether these ‘left-of-launch’ cyberattacks aimed at North Korea were successful as described or whether they were primarily a bluff. But that is somewhat irrelevant; the belief in the possibility and the understanding of the potential impact of such cyber capabilities undermines North Korean or Iranian confidence in their abilities to launch their missiles. In times of conflict, loss of confidence in weapons systems may lead to escalation.

In other words, the adversary may be left with no option but to take the chance to use these missiles or lose them in a conflict setting. ‘Left of launch’ is a dangerous game. If it is based on a bluff, the bluff could be called, leading to deterrence failure. If it is based on real action, then it could create an asymmetrical power struggle. If the attacker establishes false confidence in the power of a cyber weapon, then it might lead to false signalling and messaging.

This is the new normal. The cat-and-mouse game has to be taken seriously, not least because missile systems are so vulnerable.

There are several ways an offensive cyber operation against missile systems might work. These include exploiting missile designs, altering software or hardware, or creating clandestine pathways to the missile command and control systems.

Missile systems can also be attacked in space, by targeting space assets and their links to strategic systems.

Most missile systems rely, at least in part, on digital information that comes from or via space-based or space-dependent assets such as: communication satellites; satellites that provide position, navigation and timing (PNT) information (for example GPS or Galileo); weather satellites to help predict flight paths, accurate targeting and launch conditions; and remote imagery satellites to assist with information and intelligence for planning and targeting.

Missile launches themselves depend on 1) the command and control systems of the missiles, 2) the way in which information is transmitted to the missile launch facilities and 3) the way in which information is transmitted to the missiles themselves in flight. All these aspects rely on space technology.

In addition, the ground stations that transmit and receive data to and from satellites are also vulnerable to cyberattack – either through their known and unknown internet connectivity or through malicious use of flash drives that contain a deliberate cyber infection.

Non-space-based communications systems that use cable and ground-to-air-to-ground masts are likewise under threat from cyberattacks that find their way in via internet connectivity, proximity interference or memory sticks. Human error in introducing connectivity via phones, laptops and external drives, and in clicking on malicious links in sophisticated phishing lures, is common in facilitating inadvertent connectivity and malware infection.

All of these can create a military capacity able to interfere with missile launches. Malware might have been sitting on the missile command and control system for months or even years, remaining inactive until a chosen time or until a trigger sets in motion a disruption either to the launch or to the flight path of the missile. The country whose missile either fails to launch or fails to reach its target may never know whether this was the result of a design flaw, a common malfunction or a deliberate cyberattack.

States with these capabilities must exercise caution: cyber offence manoeuvres may prevent the launch of missile attacks against US assets in the Middle East or the Pacific regions, but they may also interfere with US missile launches in the future. Even US cyber weapons targeting an adversary may, as has recently been revealed, blow back and inadvertently infect US systems. Nobody is invulnerable.





Cybersecurity by Design in Civil Nuclear Power Plants

24 July 2019

Cyberattacks are increasingly challenging critical national infrastructure. This paper considers the security by design approach for civil nuclear power plants and analyses areas of risk and opportunities for the nuclear industry.

Dr Beyza Unal

Senior Research Fellow, International Security Programme

Roger Brunt

Managing Director, Grosmont Howe Ltd


An employee climbs into the cooling tower of the third and fourth unit at Mochovce nuclear power plant in Slovakia on 2 July 2019. Photo: Getty Images

Summary

  • The application of ‘security by design’ in nuclear new builds could provide operators with the opportunity to establish a robust and resilient security architecture at the beginning of a nuclear power plant’s life cycle. This will enhance the protection of the plant and reduce the need for costly security improvements during its operating life.
  • Security by design cannot fully protect a nuclear power plant from rapidly evolving cyberattacks, which expose previously unsuspected or unknown vulnerabilities.
  • Careful design of security systems and architecture can – and should – achieve levels of protection that exceed current norms and expectations. However, the sourcing of components from a global supply chain means that the integrity of even the most skilfully designed security regime cannot be guaranteed without exhaustive checks of its components.
  • Security by design may well include a requirement for a technical support organization to conduct quality assurance of cyber defences and practices, and this regime should be endorsed by a facility’s executive board and continued at regular intervals after the new build facility has been commissioned.
  • Given the years it takes to design, plan and build a new nuclear power plant, it is important to recognize that from the point of ‘design freeze’ onwards, the operator will be building in vulnerabilities, as technology continues to evolve rapidly while construction fails to keep pace with it. Security by design cannot be a panacea, but it is an important factor in the establishment of a robust nuclear security – and cybersecurity – culture.





Policy Implications of Armed Drone Use

This project brings together experts on the use of armed drones, including current and former military officials, academia, think-tanks and NGOs, to discuss and exchange perspectives based on their different experiences, with the aim of sharing knowledge and increasing understanding on these issues, and to inform and provide input into the European debate.

With the increased use of armed drones in recent years, ethical and legal concerns have been raised in regard to civilian casualties, secrecy and lack of transparency and accountability for drone strikes.

The experts explore the issues and controversies surrounding the use of drones outside formal armed conflict and study the broader policy implications in detail, particularly with regard to what this means for the UK and other European countries.

Building on the findings from the workshops, this project will hold a simulation exercise to stress test critical areas of concern around the use of armed drones that are relevant for the UK and other EU member states.

The discussions and the simulation exercise will provide opportunities for policy input on areas of mutual concern and feed into practical policy recommendations on the use of armed drones.

This project builds on previous work on armed drones by the International Security Department and is funded by the Open Society Foundations.

More on Policy Implications of Armed Drone Use





Dorothy Gordon

Associate Fellow, International Security Programme and Global Economy and Finance Programme

Biography

Dorothy was the founding director general of the Ghana-India Kofi Annan Centre of Excellence in ICT, a position which she held for over a decade.

She works globally as a policy adviser, evaluator, project manager and organizational management consultant.

Over the course of her 30-year career in international development and technology she has held management positions with the UN and global management consulting firms on four continents.

As a strong advocate of the importance of building robust local innovation ecosystems based on open source technologies, she serves on the board and as a mentor to a number of start-ups and NGOs focused on women in tech.





Cyber Governance in the Commonwealth: Towards Stability and Responsible State Behaviour in Cyberspace

Invitation Only Research Event

7 October 2019 - 10:30am to 5:30pm

Addis Ababa, Ethiopia

This roundtable is part of a series under the project, 'Implementing the Commonwealth Cybersecurity Agenda', funded by the UK Foreign and Commonwealth Office (FCO). The roundtable aims to provide a multi-stakeholder, pan-Commonwealth platform to discuss how to implement the Commonwealth Cyber Declaration with a focus on its third pillar 'To promote stability in cyberspace through international cooperation'.

In particular, the roundtable focuses on points 3 and 4 of the third pillar which revolve around the commitment to promote frameworks for stability in cyberspace including the applicability of international law, agreed voluntary norms of responsible state behaviour and the development and implementation of confidence-building measures consistent with the 2015 report of the UNGGE. 

The workshop also focuses on the commitment to advance discussions on how existing international law, including the Charter of the United Nations and applicable international humanitarian law, applies in cyberspace.

The roundtable addresses the issue of global cyber governance from a Commonwealth perspective and will also include a discussion around the way forward, the needed capacity of the different Commonwealth countries and the cooperation between its members for better cyber governance.

Participants include UNGGE members from Commonwealth countries in addition to representatives to the UN Open-Ended Working Group from African countries as well as members from academia, civil society and industry.

Calum Inverarity

Research Analyst and Coordinator, International Security Department
+44 (0) 207 957 5751





Human Control Is Essential to the Responsible Use of Military Neurotechnology

8 August 2019

Yasmin Afina

Research Assistant, International Security Programme
The military importance of AI-connected brain–machine interfaces is growing. Steps must be taken to ensure human control at all times over these technologies.


A model of a human brain is displayed at an exhibition in Lisbon, Portugal. Photo: Getty Images.

Technological progress in neurotechnology and its military use is proceeding apace. Brain-machine interfaces have been the subject of study since as early as the 1970s. By 2014, the UK’s Ministry of Defence was arguing that the development of artificial devices, such as artificial limbs, is ‘likely to see refinement of control to provide… new ways to connect the able-bodied to machines and computers.’ Today, brain-machine interface technology is being investigated around the world, including in Russia, China and South Korea.

Recent developments in the private sector are producing exciting new capabilities for people with disabilities and medical conditions. In early July, Elon Musk and Neuralink presented their ‘high-bandwidth’ brain-machine interface system, with small, flexible electrode threads packaged into a small device containing custom chips, to be implanted into the user’s brain for medical purposes.

In the military realm, in 2018, the United States’ Defense Advanced Research Projects Agency (DARPA) put out a call for proposals to investigate the potential of nonsurgical brain-machine interfaces to allow soldiers to ‘interact regularly and intuitively with artificially intelligent, semi-autonomous and autonomous systems in a manner currently not possible with conventional interfaces’. DARPA further highlighted the need for these interfaces to be bidirectional – where information is sent both from brain to machine (neural recording) and from machine to brain (neural stimulation) – which will eventually allow machines and humans to learn from each other.

This technology may provide soldiers and commanders with a superior level of sensory sensitivity and the ability to process a greater amount of data related to their environment at a faster pace, thus enhancing situational awareness. These capabilities will support military decision-making as well as targeting processes.

Neural recording will also enable the collection of a tremendous amount of data from operations, including visuals, real-time thought processes and emotions. These sets of data may be used for feedback and training (including for virtual wargaming and for machine learning training), as well as for investigatory purposes. Collected data will also feed into research that may help researchers understand and predict human intent from brain signals – a tremendous advantage from a military standpoint.

Legal and ethical considerations

The flip side of these advancements is the responsibilities they will impose, the risks and vulnerabilities of the technology, and the legal and ethical considerations the technology raises.

The primary risk would be for users to lose control over the technology, especially in a military context; hence a fail-safe feature is critical for humans to maintain ultimate control over decision-making. Despite the potential benefits of symbiosis between humans and AI, users must have the unconditional possibility to override these technologies should they believe it is appropriate and necessary for them to do so.

This is important given the significance of human control over targeting, as well as strategic and operational decision-making. An integrated fail-safe in brain-machine interfaces may in fact allow for a greater degree of human control over critical, time-sensitive decision-making. In other words, in the event of an incoming missile alert, while the AI may suggest a specific course of action, users must be able to decide in a timely manner whether or not to execute it.

Machines can learn from coded past experiences and decisions, but humans also use gut feelings to make life and death decisions. A gut feeling is a human characteristic that is not completely transferable, as it relies on both rational and emotional traits and is part of the ‘second brain’ and the gut-brain axis, which is currently poorly understood. It is, however, risky to take decisions based solely on gut feelings or solely on primary brain analysis; therefore, receiving a comprehensive set of data via an AI-connected brain-machine interface may help to verify and evaluate the information in a timely manner and complement decision-making processes. However, these connections and interactions would have to be much better understood than they are in the current state of knowledge.

Fail-safe features are necessary to ensure compliance with the law, including international humanitarian law and international human rights law. As a baseline, human control must be used to 1) define areas where technology may or may not be trusted and to what extent, and 2) ensure legal, political and ethical accountability, responsibility and explainability at all times. Legal and ethical considerations must be taken into account from as early as the design and conceptualizing stage of these technologies, and oversight must be ensured across the entirety of the manufacturing supply chain.  

The second point raises the need to further explore and clarify whether existing national, regional and international legal, political and ethical frameworks are sufficient to cover the development and use of these technologies. For instance, there is value in assessing to what extent AI-connected brain-machine interfaces will affect the assessment of the mental element in war crimes and their human rights implications.

In addition, these technologies need to be highly secure and invulnerable to cyber hacks. Neural recording and neural stimulation will directly affect brain processes in humans; if an adversary has the ability to connect to a human brain, steps need to be taken to ensure that memory and personality cannot be damaged.

Future questions

Military applications of technological progress in neurotechnology are inevitable, and their implications cannot be ignored. There is an urgent need for policymakers to understand fast-developing neurotechnical capabilities and to develop international standards and best practices – and, if necessary, new and dedicated legal instruments – to frame the use of these technologies.

Considering the opportunities that brain-machine interfaces may present in the realms of security and defence, inclusive, multi-stakeholder discussions and negotiations leading to the development of standards must include the following considerations:

  • What degree of human control would be desirable, at what stage and by whom? To what extent could human users be trusted with their own judgment in decision-making processes?
  • How could algorithmic and human biases, the cyber security and vulnerabilities of these technologies and the quality of data be factored into these discussions?
  • How can ethical and legal considerations be incorporated into the design stage of these technologies?
  • How can it be ensured that humans cannot be harmed in the process, either inadvertently or deliberately?
  • Is there a need for a dedicated international forum to discuss the military applications of neurotechnology? How could these discussions be integrated into existing international processes related to emerging military applications of technological progress, such as the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts on Lethal Autonomous Weapons Systems?





Examining Measures to Mitigate Cyber Vulnerabilities of Space-based Strategic Assets

Invitation Only Research Event

30 October 2019 - 9:30am to 4:00pm

Chatham House | 10 St James's Square | London | SW1Y 4LE

Event participants

Beyza Unal, Senior Research Fellow, International Security Department, Chatham House
Patricia Lewis, Research Director, International Security Department, Chatham House

Strategic systems that depend on space-based assets, such as command, control and communication, early warning systems, weapons systems and weapons platforms, are essential for conducting successful NATO operations and missions. Given the increasing dependency on such systems, the alliance and key member states would therefore benefit from an in-depth analysis of possible mitigation and resilience measures.

This workshop is part of the International Security Department’s (ISD) project on space security and the vulnerability of strategic assets to cyberattacks, which includes a recently published report. This project aims to create resilience in NATO and key NATO member states, building the capacity of key policymakers and stakeholders to respond with effective policies and procedures. This workshop will focus on measures to mitigate the cyber vulnerabilities of NATO’s space-dependent strategic assets. Moreover, participants will discuss the type of resilience measures and mechanisms required.

Attendance at this event is by invitation only. 

Calum Inverarity

Research Analyst and Coordinator, International Security Department
+44 (0) 207 957 5751





Who’s Afraid of Huawei? Understanding the 5G Security Concerns

9 September 2019

Emily Taylor

Associate Fellow, International Security Programme
Emily Taylor examines the controversy around the Chinese tech giant’s mobile broadband equipment and the different approaches taken by Western countries.


Huawei's Ox Horn campus in Dongguan, China. Photo: Getty Images.

As countries move towards the fifth generation of mobile broadband, 5G, the United States has been loudly calling out Huawei as a security threat. It has employed alarmist rhetoric and threatened to limit trade and intelligence sharing with close allies that use Huawei in their 5G infrastructure.

While some countries such as Australia have adopted a hard line against Huawei, others like the UK have been more circumspect, arguing that the risks of using the firm’s technology can be mitigated without forgoing the benefits.

So, who is right, and why have these close allies taken such different approaches?

The risks

Long-standing concerns relating to Huawei are plausible. There are credible allegations that it has benefitted from stolen intellectual property, and that it could not thrive without a close relationship with the Chinese state.

Huawei hotly denies allegations that users are at risk of its technology being used for state espionage, and says it would resist any order to share information with the Chinese government. But there are questions over whether it could really resist China’s stringent domestic legislation, which compels companies to share data with the government. And given China’s track record of using cyberattacks to conduct intellectual property theft, there may be added risks of embedding a Chinese provider into critical communications infrastructure.

In addition, China’s rise as a global technological superpower has been boosted by the flow of financial capital through government subsidies, venture and private equity, which reveal murky boundaries between the state and private sector for domestic darlings. Meanwhile, the Belt and Road initiative has seen generous investment by China in technology infrastructure across Africa, South America and Asia.

There’s no such thing as a free lunch or a free network – as Sri Lanka discovered when China assumed shares in a strategic port in return for debt forgiveness; or Mexico when a 1% interest loan for its 4G network came on the condition that 80% of the funding was spent with Huawei.

Aside from intelligence and geopolitical concerns, the quality of Huawei’s products represents a significant cyber risk, one that has received less attention than it deserves.

On top of that, 5G by itself will significantly increase the threat landscape from a cybersecurity perspective. The network layer will be more intelligent and adaptable through the use of software and cloud services. The number of network antennae will increase by a factor of 20, and many will be poorly secured ‘things’; there is no need for a backdoor if you have any number of ‘bug doors’.

Finally, the US is threatening to limit intelligence sharing with its closest allies if they adopt Huawei. So why would any country even consider using Huawei in their 5G infrastructure?

Different situations

The truth is that not every country is free to manoeuvre; 5G technology will sit on top of existing mobile infrastructure.

Australia and the US can afford to take a hard line: their national infrastructure has been largely Huawei-free since 2012. However, the Chinese firm is deeply embedded in other countries’ existing structures – for example, in the UK, Huawei has provided telecommunications infrastructure since 2005. Even if the UK decided tomorrow to ditch Huawei, it could not just rip up existing 4G infrastructure. To do so would cost a fortune, risk years of delay in the adoption of 5G and limit competition in 5G provisioning.

As a result, the UK has adopted a pragmatic approach resulting from years of oversight and analysis of Huawei equipment, during which it has never found evidence of malicious Chinese state cyber activity through Huawei.

At the heart of this process is the Huawei Cyber Security Evaluation Centre, which was founded in 2010 as a confidence-building measure. Originally criticized for ‘effectively policing itself’, as it was run and staffed entirely by Huawei, the governance has now been strengthened, with the National Cyber Security Centre chairing its oversight board.

The board’s 2019 report makes grim reading, highlighting ‘serious and systematic defects in Huawei’s software engineering and cyber security competence’. But it does not accuse the company of serving as a platform for state-sponsored surveillance.

Similar evidence-based policy approaches are emerging in other countries like Norway and Italy. They offer flexibility for governments, for example by limiting access to some contract competition through legitimate and transparent means, such as security reviews during procurement. The approaches also raise security concerns (both national and cyber) to a primary issue when awarding contracts – something that was not always done in the past, when price was the key driver.

The UK is also stressing the need to manage risk and increase vendor diversity in the ecosystem to avoid single points of failure. A further approach that is beginning to emerge is to draw a line between network ‘core’ and ‘periphery’ components, excluding some providers from the more sensitive ‘core’. The limited rollouts of 5G in the UK so far have adopted multi-provider strategies, and only one has reportedly not included Huawei kit.

Managing the risks to cyber security and national security will become more complex in a 5G environment. In global supply chains, bans based on the nationality of the provider offer little assurance. For countries that have already committed to Huawei in the past, and who may not wish to be drawn into an outright trade war with China, these moderate approaches offer a potential way forward.





How Is New Technology Driving Geopolitical Relations?

Research Event

22 October 2019 - 6:00pm to 7:00pm

Chatham House, London

Event participants

Rt Hon Baroness Neville-Jones DCMG, Minister of State for Security and Counter Terrorism (2010-11)
Jamie Condliffe, Editor, DealBook Newsletter and Writer, Bits Tech Newsletter, The New York Times
Jamie Saunders, Partner, Wychwood Partners LLP; Visiting Professor, University College London
Chair: Dr Patricia Lewis, Research Director, International Security Department, Chatham House

New technologies such as 5G, artificial intelligence, nanotechnology and robotics have become, now more than ever, intertwined with geopolitical, economic and trade interests. Leading powers are using new technology to exert power and influence and to shape geopolitics more generally.

The ongoing race between the US and China around 5G technology is a case in point. Amid these tensions, the impact on developing countries is not sufficiently addressed.

Arguably, the existing digital divide will increase, leading developing countries to the early, if not hasty, adoption of new technology for fear of lagging behind. This could create opportunities but will also pose risks.

This panel discusses how new technology is changing the geopolitical landscape. It also discusses the role that stakeholders, including governments, play in the creation of standards for new technologies and what that means for their deployment in key markets, both technically and financially.

Finally, the panel looks at the issue from the perspective of developing countries, addressing the choices that have to be made in terms of affordability, development priorities and security concerns.

This event was organized with the kind support of DXC Technology.

Nicole Darabian

Research Assistant, Cyber Policy, International Security Department





Building LGBTIQ+ Inclusivity in the Armed Forces, 20 Years After the Ban Was Lifted

16 January 2020

Will Davies

Army Chief of General Staff Research Fellow, International Security Programme
Change was slow to come but progress has since been swift. Not only can a continuing focus on inclusivity benefit service people and the organization, it is also an essential element of a values-based foreign policy.


Crew members from HMS Westminster march through Admiralty Arch as they exercise their freedom of the city in August 2019 in London. Photo: Getty Images.

The new UK government will conduct a review of foreign, security and defence policy in 2020. If the UK decides to use values as a framework for foreign policy this needs to be reflected in its armed forces. One area where this is essential is continuing to deepen inclusivity for LGBTIQ+ personnel, building on the progress made since the ban on their service was lifted in 2000.

I witnessed the ban first-hand as a young officer in the British Army in 1998. As the duty officer I visited soldiers being held in the regimental detention cells to check all was well. One day a corporal, who I knew, was there awaiting discharge from the army having been convicted of being gay. On the one hand, here was service law in action, which was officially protecting the army’s operational effectiveness and an authority not to be questioned at my level. On the other, here was an excellent soldier in a state of turmoil and public humiliation. How extreme this seems now.

On 12 January 2000 Tony Blair’s Labour government announced an immediate lifting of the ban for lesbian, gay and bisexual personnel (LGB) and introduced a new code of conduct for personal relationships. (LGB is the term used by the armed forces to describe those personnel who had been banned prior to 2000.) This followed a landmark ruling in a case taken to the European Court of Human Rights in 1999 by four LGB ex-service personnel – supported by Stonewall – who had been dismissed from service for their sexuality.

Up to that point the Ministry of Defence's long-held position had been that LGB personnel had a negative impact on the morale and cohesion of a unit and damaged operational effectiveness. Service personnel were automatically dismissed if it was discovered they were LGB, even though homosexuality had been decriminalized in the UK by 1967.

Proof that the armed forces had been lagging behind the rest of society was confirmed by the positive response to the change among service personnel, despite a handful of vocal political and military leaders who foresaw negative impacts. The noteworthy service of LGBTIQ+ people in Iraq and Afghanistan only served to debunk any residual myths.

Twenty years on, considerable progress has been made and my memories from 1998 now seem alien. This is a story to celebrate – however in the quest for greater inclusivity there is always room for improvement.

Defence Minister Johnny Mercer last week apologized following recent calls from campaign group Liberty for a fuller apology. In December 2019, the Ministry of Defence announced it was putting in place a scheme to return medals stripped from veterans upon their discharge.

The armed forces today have a range of inclusivity measures to improve workplace culture including assessments of workplace climate and diversity networks supported by champions drawn from senior leadership.

But assessing the actual lived experience for LGBTIQ+ people is challenging due to its subjectivity. This has not been helped by low participation in the 2015 initiative to encourage people to declare confidentially their sexual orientation, designed to facilitate more focused and relevant policies. As of 1 October 2019, only 20.3 per cent of regular service people had declared a sexual orientation.

A measure of positive progress is the annual Stonewall Workplace Equality Index, the definitive benchmarking tool for employers to measure their progress on LGBTIQ+ inclusion in the workplace; 2015 marked the first year in which all three services were placed in the top 100 employers in the UK, and in 2019 the Royal Navy, British Army and Royal Air Force were placed joint 15th, joint 51st and 68th respectively.

Nevertheless, LGBTIQ+ service people and those in other protected groups still face challenges. The 2019 Ministry of Defence review of inappropriate behaviour in the armed forces, the Wigston Report, concluded there is an unacceptable level of sexual harassment, bullying and discrimination. It found that between 26 and 36 per cent of LGBTIQ+ service people have experienced negative comments or conduct at work because of their sexual orientation.

The Secretary of State for Defence accepted the report’s 36 recommendations on culture, incident reporting, training and a more effective complaints system. Pivotal to successful implementation will be a coherent strategy driven by fully engaged leaders.

Society is also expecting ever higher standards, particularly in public bodies. The armed forces emphasize their values and standards, including ‘respect for others’, as defining organizational characteristics; individuals are expected to live by them. Only in a genuinely inclusive environment can an individual thrive and operate confidently within a team.

The armed forces also recognize as a priority the need to connect to and reflect society more closely in order to attract and retain talent from across all of society. The armed forces’ active participation in UK Pride is helping to break down barriers in this area.

In a post-Brexit world, the UK’s values, support for human rights and reputation for fairness are distinctive strengths that can have an impact on the world stage and offer a framework for future policy. The armed forces must continue to push and promote greater inclusivity in support. When operating overseas with less liberal regimes, this will be sensitive and require careful handling; however it will be an overt manifestation of a broader policy and a way to communicate strong and consistent values over time.

The armed forces were damagingly behind the times 20 years ago. But good progress has been made since. Inclusion initiatives must continue to be pushed to bring benefits to the individual and the organization as well as demonstrate a values-based foreign policy.





The Commonwealth Cyber Declaration: Achievements and Way Forward

Invitation Only Research Event

4 February 2020 - 9:15am to 5:30pm

Chatham House, London

In April 2018, the Commonwealth Heads of Government Meeting (CHOGM), held in London, saw the creation and the adoption of the Commonwealth Cyber Declaration. The declaration outlines the framework for a concerted effort to advance cybersecurity practices to promote a safe and prosperous cyberspace for Commonwealth citizens, businesses and societies. 

The conference will aim to provide an overview on the progress made on cybersecurity in the Commonwealth since the declaration was announced in 2018. In addition, it will examine future challenges and potential solutions going forward.

This conference is part of the International Security Programme's project on Implementing the Commonwealth Cybersecurity Agenda and will convene a range of senior Commonwealth representatives as well as a selection of civil society and industry stakeholders. This project aims to develop a pan-Commonwealth platform to take the Commonwealth Cyber Declaration forward by means of a holistic, inclusive and representative approach.


Attendance at this event is by invitation only. 

Esther Naylor

Research Assistant, International Security Programme
+44 (0)20 7314 3628





POSTPONED: Working Towards Cyber Resilience in the GCC: Opportunities and Challenges

Invitation Only Research Event

12 March 2020 - 9:00am to 5:00pm

Muscat, Oman

The GCC states have invested significantly in cybersecurity and have made large strides in protecting governments, businesses and individuals from cyber threats, with the aim of delivering on their ambitious national strategies and future visions. However, several challenges to cybersecurity and cyber resilience in the region persist, putting those ambitious plans at risk.

These challenges include the uneven nature of cybersecurity protections, the incomplete implementation of cybersecurity strategies and regulations, and the issues around international cooperation. Such challenges mean that GCC states need to focus on the more difficult task of cyber resilience, in addition to the simpler initial stages of cybersecurity capacity-building, to ensure they harness the true potential of digital technologies and mitigate associated threats.

Set against this background, this workshop will explore opportunities and challenges to cyber resilience in the GCC focusing on four main pillars:

1. Cyber resilience: in concept and in practice
2. Building an effective cybersecurity capacity
3. The potential contribution of regional and international cooperation to cyber resilience
4. Deterrence and disruption: different approaches

This event will be held in collaboration with the Arab Regional Cybersecurity Centre (ARCC) and OMAN CERT.

PLEASE NOTE THIS EVENT IS POSTPONED UNTIL FURTHER NOTICE. 

Event attributes

Chatham House Rule

Esther Naylor

Research Assistant, International Security Programme
+44 (0)20 7314 3628





Cyber Security and Nuclear Weapons

This project aims to improve resilience in NATO’s nuclear weapons systems against cyber threats.

Cyber security is a vital part of national and international strategic infrastructure and weapons systems. The increasing cyber capabilities of countries such as China, Russia and North Korea put the North Atlantic Treaty Organization’s (NATO’s) nuclear systems - including nuclear command, control and communication, weapons systems and early warning systems - at risk.

There is an urgent need to study and address cyber challenges to nuclear assets within NATO and in key NATO countries. Greater awareness of the potential threats and vulnerabilities is key to improving preparedness and mitigating the risks of a cyber-attack on NATO nuclear weapons systems.

Chatham House produces research responding to the need for information on enhancing cybersecurity for command, control and communications. This project constitutes the second phase of work following Cyber Security of Nuclear Weapons Systems: Threats, Vulnerabilities and Consequences, a report published in January 2018 in partnership with the Stanley Foundation.

The project responds to the need both for more public information on cyber risks in NATO’s nuclear mission and for policy-driven research to shape and inform nuclear policy within NATO member states and the Nuclear Planning Group.

This project is supported by the Ploughshares Fund and the Stanley Foundation.





Deterrence Perspectives in the 21st Century

The aim of this project is to provide a space to explore creative and disruptive ideas in order to make headway on new perspectives on deterrence, encouraging ‘responsible disruption’ in the nuclear field.

Concerns about transatlantic security are high following the US 2018 Nuclear Posture Review and its interpretation of the Russian doctrine, the demise of the Intermediate Range Nuclear Forces Treaty (INF), the uncertainty surrounding the potential extension of the New Strategic Arms Reduction Treaty (New START), and Russian deployment of Avangard hypersonic, nuclear-capable missile systems.

Emerging technologies, especially quantum technologies, jeopardize the reliability of existing encryption measures. Some of the most sophisticated cyber attacks are already assisted by artificial intelligence. The possibility that nuclear weapons systems can be interfered with by these technologies, both during conflict and in peacetime and without the knowledge of the possessor state, raises questions about the reliability and integrity of these systems, with implications for military decision-making and particularly for deterrence policy.

These issues and more indicate the changes in the security landscape that have a bearing on the future of nuclear deterrence.

This project is supported by the Hiroshima Prefecture and Government of Ireland.





Nuclear Weapons: Innovative Approaches for the Complex International Security Environment

This programme of work addresses the conundrum of nuclear weapons as a wicked problem in a complex adaptive system.

Understanding the complexity and the wickedness of the situation allows analysts and strategic planners to approach these complex and intractable issues in new and transformative ways – with a better chance of coping or succeeding, and of reducing the divisions between experts.

Using complexity theory, the international system and its interaction with the environment can be modelled as a complex adaptive system and represented through an interactive visualization tool that will aid thought processes and policy decision-making.

Until recently, analysts did not have the tools to create models that could represent the complexity of the international system and the role that nuclear weapons play. Now that these tools are available, analysts should use them to enable decision-makers to gain insights into the range of possible outcomes from a set of possible actions.
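The kind of exploratory analysis described here can be illustrated with a very small sketch. The Python code below is a hypothetical toy, not the programme’s actual tool or visualization: the two-actor ‘tension’ dynamic, the parameter values and the outcome thresholds are all invented assumptions, chosen only to show how running many stochastic trials turns a single candidate action into a distribution of possible outcomes rather than a single prediction.

```python
# A toy illustration only: not Chatham House's visualization tool or model.
# All parameters and thresholds below are invented assumptions.
import random
from collections import Counter

def simulate(posture_change: float, trials: int = 5000, steps: int = 50) -> Counter:
    """Count outcomes across many stochastic runs of a two-actor tension model.

    posture_change is the assumed shift in one actor's assertiveness -- the
    'action' whose range of possible outcomes we want to explore.
    """
    outcomes = Counter()
    for _ in range(trials):
        tension = 0.2  # arbitrary starting level of tension between the actors
        for _ in range(steps):
            # Each actor reacts to the current tension plus random noise;
            # the candidate action nudges actor A's behaviour upwards.
            move_a = tension + posture_change + random.gauss(0, 0.05)
            move_b = tension + random.gauss(0, 0.05)
            tension = max(0.0, min(1.0, 0.5 * tension + 0.25 * (move_a + move_b)))
        if tension > 0.8:
            outcomes["crisis"] += 1
        elif tension < 0.3:
            outcomes["de-escalation"] += 1
        else:
            outcomes["status quo"] += 1
    return outcomes

# Compare the outcome distributions produced by two candidate actions.
for action in (0.0, 0.1):
    print(f"posture change {action}: {dict(simulate(action))}")
```

A real complexity model would involve many more actors, feedback loops and empirically grounded parameters; the point of the sketch is only the shape of the analysis – exploring a distribution of outcomes rather than a single forecast.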

This programme builds on work by Chatham House on cyber security and artificial intelligence (AI) in the nuclear/strategic realms.

In order to approach nuclear weapons as wicked problems in a complex adaptive system from different and sometimes competing perspectives, the programme of work involves the wider community of specialists, who do not agree on what constitutes the problem of nuclear weapons or on what the desired solutions are.

Different theories of deterrence, restraint and disarmament are tested. The initiative is international and inclusive, paying attention to gender, age and other aspects of diversity, and the network of MacArthur Grantees is given the opportunity to participate in the research, including in the writing of research papers, so that the complexity modelling can be tested against a wide range of approaches and hypotheses.

In addition, a Senior Reference Group will work alongside the programme, challenging its outcomes and findings, and evaluating and guiding the direction of the research.

This project is supported by the MacArthur Foundation.





POSTPONED: What Impact of Sovereignty in the Internet?

Research Event

26 March 2020 - 6:00pm to 7:00pm

Chatham House

Event participants

Konstantinos Komaitis, Senior Director, Policy Development & Strategy, Internet Society
Gregory Asmolov, Leverhulme Early Career Fellow Russia Institute, King’s College London
Further speakers to be announced.
Chair: Joyce Hakmeh, Senior Research Fellow, International Security Programme, Chatham House and Co-Editor of the Journal of Cyber Policy.

 

Several governments have been moving towards a stronger sovereignty narrative when it comes to the internet, with some trying to extend their physical borders into cyberspace by imposing digital ones. From attempts to create isolatable domestic internets, to data localization laws, to increased calls for sovereignty in the digital space, these approaches are raising concerns about the fate of the internet.

While the impact of these approaches varies and the motivations behind them are arguably different too, all of these governments have been pursuing greater technological independence and, in some instances, greater control.

The panellists will discuss the impact that these approaches have on the internet. They will address the question of whether the era of an 'open web' is drawing to a close, and whether these territorialization efforts will lead to a fragmentation of the internet, making a 'splinternet' inevitable.

This event is being organized with the kind support of DXC Technology.

This event will be followed by a reception. 

PLEASE NOTE THIS EVENT IS POSTPONED UNTIL FURTHER NOTICE.

Esther Naylor

Research Assistant, International Security Programme
+44 (0)20 7314 3628





Is the GCC Cyber Resilient?

9 March 2020

How would the states of the Gulf Cooperation Council (GCC) respond to a serious cyber incident? This could be a global ransomware event, a critical infrastructure incident targeted at the energy sector, or an attack on government departments. This paper examines cyber resilience in the states of the GCC. 

Joyce Hakmeh

Senior Research Fellow, International Security Programme; Co-Editor, Journal of Cyber Policy

James Shires

Assistant Professor at the Institute for Security and Global Affairs, University of Leiden


Saudi nationals attend the Gitex 2018 exhibition at the Dubai World Trade Center in Dubai on 16 October 2018. Photo: Getty Images.

Summary

  • GCC states seek to be leaders in digital innovation, but this leaves them vulnerable to an increasing range of cyberthreats. Governments have invested significantly in cybersecurity, but these measures have been unevenly implemented, making it difficult for these states to be resilient against a large-scale cyber incident.
  • Strategies, structures and processes (‘approaches’) for achieving cyber resilience can be conceptualized along a scale from centralized to distributed: centralized approaches maintain decision-making power in a single body, while distributed ones disperse power over many sites.
  • Centralized approaches provide more resilience against unwanted influence, while distributed approaches provide more resilience against intrusions into infrastructure. The GCC states have so far prioritized centralized over distributed cyber resilience, seeking internet and social media control over sustainable network recovery.
  • GCC governments should make a sustainable commitment to cyber resilience that provides clear guidance to organizations and makes best use of emerging cybersecurity structures. This may involve further engagement with international initiatives and partners to increase cyber resilience.
  • Given limited resources, GCC governments should rebalance their efforts from centralized towards distributed approaches to resilience.
  • GCC governments should examine the impact of relevant new technologies, discussing openly the risks of these technologies and appropriate solutions.





Predictions and Policymaking: Complex Modelling Beyond COVID-19

1 April 2020

Yasmin Afina

Research Assistant, International Security Programme

Calum Inverarity

Research Analyst and Coordinator, International Security Programme
The COVID-19 pandemic has highlighted the potential of complex systems modelling for policymaking but it is crucial to also understand its limitations.


A member of the media wearing a protective face mask works in Downing Street where Britain's Prime Minister Boris Johnson is self-isolating in central London, 27 March 2020. Photo by TOLGA AKMEN/AFP via Getty Images.

Complex systems models have played a significant role in informing and shaping the public health measures adopted by governments in the context of the COVID-19 pandemic. For instance, modelling carried out by a team at Imperial College London is widely reported to have driven the shift in the UK's approach from a strategy of mitigation to one of suppression.

Complex systems modelling will increasingly feed into policymaking by predicting a range of potential correlations, results and outcomes based on a set of parameters, assumptions, data and pre-defined interactions. It is already instrumental in developing risk mitigation and resilience measures to address and prepare for existential crises such as pandemics, the prospect of nuclear war and climate change.

The human factor

In the end, model-driven approaches must stand up to the test of real-life data. Modelling for policymaking must take into account a number of caveats and limitations. Models are developed to help answer specific questions, and their predictions will depend on the hypotheses and definitions set by the modellers, which are subject to their individual and collective biases and assumptions. For instance, the models developed by Imperial College came with the stated assumption that a policy of social distancing for people over 70 would have a 75 per cent compliance rate. This assumption is based on the modellers’ own perceptions of demographics and society, and may not reflect all the societal factors that could affect compliance in real life, such as gender, age, ethnicity, genetic diversity, economic stability, and access to food, supplies and healthcare. This is why modelling benefits from a cognitively diverse team who bring a wide range of knowledge and understanding to the early creation of a model.
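To make the sensitivity to assumptions concrete, the following is a minimal sketch in Python – emphatically not the Imperial College model. It is a textbook SIR (susceptible–infected–recovered) model in which the assumed compliance rate with distancing advice is a single adjustable parameter; the transmission values, the assumption that distancing halves a compliant person’s contacts, and the population size are all illustrative inventions. Varying that one behavioural assumption materially changes the headline prediction.

```python
# A minimal sketch, not the Imperial College model: a toy SIR model showing how
# one behavioural assumption (compliance with distancing) shifts the predicted
# epidemic peak. All parameter values below are illustrative assumptions.

def sir_peak(compliance: float, r0: float = 2.4, infectious_days: float = 7.0,
             population: int = 1_000_000, days: int = 365) -> float:
    """Return the peak fraction of the population infected at any one time.

    compliance: assumed fraction of people following distancing advice;
    distancing is assumed (for illustration) to halve their contact rate.
    """
    gamma = 1.0 / infectious_days          # recovery rate per day
    beta = r0 * gamma                      # baseline transmission rate per day
    beta_eff = beta * (1.0 - 0.5 * compliance)  # compliant people halve contacts

    s, i, r = population - 1.0, 1.0, 0.0
    peak = i
    for _ in range(days):                  # simple daily (Euler) update
        new_infections = beta_eff * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak / population

# The same model gives very different headline numbers under different
# compliance assumptions -- the kind of sensitivity worth presenting
# alongside any single prediction.
for c in (0.25, 0.5, 0.75, 1.0):
    print(f"assumed compliance {c:.0%}: peak infected {sir_peak(c):.1%}")
```

Presenting this sort of sensitivity analysis alongside a single headline figure is one practical way of communicating the role that modellers’ assumptions play in the numbers policymakers see.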

The potential of artificial intelligence

Machine learning, a branch of artificial intelligence (AI), has the potential to advance the capacity and accuracy of modelling techniques by identifying new patterns and interactions, and by overcoming some of the limitations resulting from human assumptions and bias. Yet increasing reliance on these techniques raises the issue of explainability. Policymakers need to be fully aware of, and understand, the model, assumptions and input data behind any predictions, and must be able to communicate this aspect of modelling in order to uphold democratic accountability and transparency in public decision-making.

In addition, models using machine learning techniques require extensive amounts of data, which must also be of high quality and as free from bias as possible to ensure accuracy and address the issues at stake. Although technology may be used in the process (i.e. automated extraction and processing of information with big data), data is ultimately created, collected, aggregated and analysed by and for human users. Datasets will reflect the individual and collective biases and assumptions of those creating, collecting, processing and analysing this data. Algorithmic bias is inevitable, and it is essential that policy- and decision-makers are fully aware of how reliable the systems are, as well as their potential social implications.
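The mechanism can be illustrated with a deliberately simplified, hypothetical example. In the Python sketch below, the two groups, the synthetic ‘risk scores’ and the single-threshold ‘model’ are all invented stand-ins for real data and a real classifier; the point is only that when one group is barely represented in the training data and expresses the outcome differently, a model fitted to minimize overall training error serves that group noticeably worse.

```python
# Hypothetical illustration of algorithmic bias: all data here is synthetic.
import random

random.seed(0)

def synth(n_pos, n_neg, pos_mean):
    """Simulated (score, label) pairs: negatives ~ N(0, 1), positives ~ N(pos_mean, 1)."""
    neg = [(random.gauss(0.0, 1.0), 0) for _ in range(n_neg)]
    pos = [(random.gauss(pos_mean, 1.0), 1) for _ in range(n_pos)]
    return neg + pos

# Group A dominates the training data; in under-represented group B the
# positive cases show a weaker signal (lower mean score) than in group A.
train = synth(1000, 1000, pos_mean=2.0) + synth(25, 25, pos_mean=1.0)

# A crude 'model': pick the single decision threshold that minimizes
# overall training error (brute-force search).
best_t, best_err = 0.0, float("inf")
for t in (x / 100 for x in range(-200, 400)):
    err = sum((score > t) != bool(label) for score, label in train) / len(train)
    if err < best_err:
        best_t, best_err = t, err

def group_error(pos_mean, n=1000):
    """Error rate of the learned threshold on a balanced test set for one group."""
    test = synth(n, n, pos_mean)
    return sum((score > best_t) != bool(label) for score, label in test) / len(test)

print(f"learned threshold: {best_t:.2f}")
print(f"error rate, majority group A: {group_error(2.0):.1%}")
print(f"error rate, minority group B: {group_error(1.0):.1%}")
```

In this toy setup the disparity stems entirely from the composition of the training data, which is why awareness of how datasets were assembled matters as much as awareness of the algorithm itself.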

The age of distrust

Increasing use of emerging technologies for data- and evidence-based policymaking is taking place, paradoxically, in an era of growing mistrust of expertise and experts, as infamously summed up by Michael Gove. Policymakers and subject-matter experts have faced increased public scrutiny of their findings and of the policies those findings have been used to justify.

This distrust and scepticism within public discourse has only been fuelled by the ever-increasing availability of diffuse sources of information, not all of which are verifiable and robust. This has caused tension between experts, policymakers and the public, leading to conflict and uncertainty over which data and predictions can be trusted, and to what degree. The dynamic is exacerbated when certain individuals purposefully misappropriate, or simply misinterpret, data to support their arguments or policies. Politicians are presently considered the least trusted professionals by the UK public, highlighting the importance of better and more effective communication between the scientific community, policymakers and the populations affected by policy decisions.

Acknowledging limitations

While measures can and should be built in to improve the transparency and robustness of scientific models in order to counteract these common criticisms, it is important to acknowledge that there are limits to the steps that can be taken. This is particularly the case when dealing with predictions of future events, which inherently involve degrees of uncertainty that cannot be fully accounted for by human or machine. As a result, if not carefully considered and communicated, the increased use of complex modelling in policymaking has the potential to undermine and obfuscate the policymaking process, which may contribute to significant mistakes, increased uncertainty, a lack of trust in the models and in the political process, and further disaffection of citizens.

The potential contribution of complexity modelling to the work of policymakers is undeniable. However, it is imperative to appreciate the inner workings and limitations of these models, such as the biases that underpin their functioning and the uncertainties that they will not be fully capable of accounting for, in spite of their immense power. They must be tested against the data, again and again, as new information becomes available; otherwise there is a risk of scientific models becoming embroiled in partisan politicization and potentially being weaponized for political purposes. It is therefore important not to consider these models as oracles, but instead as one of many contributions to the process of policymaking.





Supporting NHS Cybersecurity During COVID-19 is Vital

2 April 2020

Joyce Hakmeh

Senior Research Fellow, International Security Programme; Co-Editor, Journal of Cyber Policy
The current crisis is an opportunity for the UK government to show agility in how it deals with cyber threats and how it cooperates with the private sector in creating cyber resilience.


Nurse uses a wireless electronic tablet to order medicines from the pharmacy at The Queen Elizabeth Hospital, Birmingham, England. Photo by Christopher Furlong/Getty Images.

The World Health Organization, US Department of Health and Human Services, and hospitals in Spain, France and the Czech Republic have all suffered cyberattacks during the ongoing COVID-19 crisis.

In the Czech Republic, a successful attack targeted a hospital with one of the country’s biggest COVID-19 testing laboratories, forcing its entire IT network to shut down, urgent surgical operations to be rescheduled, and patients to be moved to nearby hospitals. The attack also delayed dozens of COVID-19 test results and affected the hospital’s data transfer and storage, affecting the healthcare the hospital could provide.

In the UK, the National Health Service (NHS) is already in crisis mode, focused on providing beds and ventilators to respond to one of the largest peacetime threats ever faced. But supporting the health sector goes beyond increasing human resources and equipment capacity.

Health services ill-prepared

Cybersecurity support, both at organizational and individual level, is critical so health professionals can carry on saving lives, safely and securely. Yet this support is currently missing and the health services may be ill-prepared to deal with the aftermath of potential cyberattacks.

When the NHS was hit by the WannaCry ransomware attack in 2017 – one of the largest cyberattacks the UK has witnessed to date – it caused massive disruption, with at least 80 of the 236 trusts across England affected and thousands of appointments and operations cancelled. Fortunately, a ‘kill switch’ activated by a cybersecurity researcher quickly brought it to a halt.

But the UK’s National Cyber Security Centre (NCSC) has been warning for some time about the risk of a cyberattack targeting national critical infrastructure sectors, including the health sector. Such an attack, known as a category one (C1) attack, could cripple the UK with devastating consequences. It could happen, and we should be prepared.

Although the NHS has taken measures since WannaCry to improve cybersecurity, its enormous IT networks, legacy equipment and the overlap between operational and information technology (OT/IT) mean that mitigating current potential threats is beyond its ability.

And the threats have radically increased. More NHS staff with access to critical systems and patient health records are increasingly working remotely. The NHS has also extended its physical presence with new premises, such as the Nightingale hospital, potentially the largest temporary hospital in the world.

Radical change frequently means proper cybersecurity protocols are not put in place. Even existing cybersecurity processes had to be side-stepped because of the outbreak, such as the decision by NHS Digital to delay its annual cybersecurity audit until September. During this audit, health and care organizations submit data security and protection toolkits to regulators setting out their cybersecurity and cyber resilience levels.

The decision to delay was made to allow the NHS organizations to focus capacity on responding to COVID-19, but cybersecurity was highlighted as a high risk, and the importance of NHS and Social Care remaining resilient to cyberattacks was stressed.

The NHS is stretched to breaking point. Expecting it to be on top of its cybersecurity during these exceptionally challenging times is unrealistic, and could actually add to the existing risk.

Now is the time for new partnerships and support models to emerge to support the NHS and help build its resilience. Now is the time for innovative public-private partnerships on cybersecurity to be formed.

Similar to the economic package from the UK chancellor and innovative thinking on ventilator production, the government should oversee a scheme calling on the large cybersecurity capacity within the private sector to step in and assist the NHS. This support can be delivered in many different ways, but it must be mobilized swiftly.

The NCSC, for instance, has led the formation of the Cyber Security Information Sharing Partnership (CiSP), a joint industry and UK government initiative to exchange cyber threat information confidentially in real time, with the aim of reducing the impact of cyberattacks on UK businesses.

CiSP comprises organizations vetted by the NCSC, which go through a membership process before being able to join. These members could conduct cybersecurity assessments and penetration testing for NHS organizations, retrospectively assisting in implementing key security controls that may have been overlooked.

They can also help by making sure NHS remote access systems are fully patched and advising on sensible security systems and approved solutions. They can identify critical OT and legacy systems and advise on their security.

The NCSC should continue working with the NHS to enhance the provision of comprehensive public guidance on cyber defence and response to potential attacks. This would show they are on top of the situation, projecting confidence and reassurance.

It is often said that in every crisis lies an opportunity. This is an opportunity for the UK government to show agility in how it deals with cyber threats and how it cooperates with the private sector in creating cyber resilience.

It is an opportunity to lead a much-needed cultural change, showing that cybersecurity should never be an afterthought.





Perspectives on Nuclear Deterrence in the 21st Century

20 April 2020

Nuclear deterrence theory, with its roots in the Cold War era, may not account for all eventualities in the 21st century. Researchers at Chatham House have worked with eight experts to produce this collection of essays examining four contested themes in contemporary policymaking on deterrence.

Dr Beyza Unal

Senior Research Fellow, International Security Programme

Yasmin Afina

Research Assistant, International Security Programme

Dr Patricia Lewis

Research Director, Conflict, Science & Transformation; Director, International Security Programme

Dr John Borrie

Associate Fellow, International Security Programme

Dr Jamie Shea

Associate Fellow, International Security Programme

Peter Watkins

Associate Fellow, International Security Programme

Dr Maria Rost Rublee

Associate Professor of International Relations, Monash University

Cristina Varriale

Research Fellow in Proliferation and Nuclear Policy, RUSI

Dr Tanya Ogilvie-White

Adjunct Senior Fellow, Griffith Asia Institute, Griffith University

Dr Andrew Futter

Associate Professor of International Politics, University of Leicester

Christine Parthemore

Chief Executive Officer, Council on Strategic Risks (CSR)


Royal Navy Vanguard Class submarine HMS Vigilant returning to HMNB Clyde after extended deployment. The four Vanguard-class submarines form the UK's strategic nuclear deterrent force. Photo: Ministry of Defence.

Summary

  • This collection of essays explores, from the perspectives of eight experts, four areas of deterrence theory and policymaking: the underlying assumptions that shape deterrence practice; the enduring value of extended deterrence; the impact of emerging technologies; and the ‘blurring’ of the lines between conventional and nuclear weapons.
  • Nuclear deterrence theory, with its roots in the Cold War era, may not account for all eventualities in security and defence in the 21st century, given the larger number of nuclear actors in a less binary geopolitical context. It is clear that a number of present factors challenge the overall credibility of ‘classical’ nuclear deterrence, meaning that in-depth analysis is now needed.
  • Uncertainty as to the appetite to maintain the current nuclear weapons policy architecture looms large in discussions and concerns on global and regional security. The demise of the Intermediate-Range Nuclear Forces Treaty, doubts over the potential extension of the New Strategic Arms Reduction Treaty, and heightened regional tensions in Northeast and South Asia, together with the current and likely future risks and challenges arising from global technological competition, make it all the more urgent to examine long-held assumptions in the real-world context.
  • Extended deterrence practices differ from region to region, depending on the domestic and regional landscape. Increased focus on diplomatic capabilities to reduce risks and improve the long-term outlook at regional level, including by spearheading new regional arms-control initiatives, may be a viable way forward. Addressing the bigger picture – notably including, on the Korean peninsula, Pyongyang’s own threat perception – and the links between conventional and nuclear missile issues will need to remain prominent if long-term and concrete changes are to take hold.
  • Most states have long held nuclear weapons to be ‘exceptional’: their use would represent a dramatic escalation of a conflict, a threshold that must never be crossed. Latterly, however, some officials and scholars have made the case that the impact of the use of a low-yield nuclear weapon would not be entirely distinct from that of a large-scale conventional attack. This blurring of lines between conventional and nuclear deterrence strips nuclear weapons of their exceptional nature, in a context in which states are faced with diverse, complex and concurrent threats from multiple potential adversaries that are able to synchronize non-military and military options, up to and including nuclear forces. The use of nuclear weapons risks becoming a ‘new normal’, potentially reducing the threshold for use – to cyberattacks, for example. This has direct implications for discussions around strategic stability.
  • While emerging technologies may offer tremendous opportunities in the modernization of nuclear weapons, they also present major risks and destabilizing challenges. Artificial intelligence, automation, and other developments in the cyber sphere affect dynamics on both the demand and supply sides of the nuclear deterrence equation. States and alliances such as NATO must adapt their deterrence thinking in light of these technological developments, and define their primary purpose and priorities in this shifting security context. Resilience planning, adaptation to the evolving security environment, threat anticipation, and consistent crisis management and incident response – as well as thinking about the mitigation measures necessary to prevent conflict escalation should deterrence fail – will all be critical in upholding nuclear deterrence as both policy and practice.