The Committee to Protect Journalists named winner of the Chatham House Prize 2018

News Release, 5 October 2018

The Committee to Protect Journalists (CPJ) has been voted the winner of this year’s Chatham House Prize.





Chatham House awarded major centenary grant to establish Stavros Niarchos Foundation Wing

News Release, 16 April 2019

Chatham House has been awarded a transformational £10m grant ahead of its upcoming 2020 centenary.





Chatham House appoints Tim Benton as Research Director for Energy, Environment and Resources

News Release, 30 May 2019

Chatham House is pleased to announce that Professor Tim Benton has been appointed as research director of the Energy, Environment and Resources Department.





Chatham House appoints Rob Yates as the new head of the Centre on Global Health Security

News Release, 27 June 2019

Chatham House is pleased to announce that Rob Yates has been appointed as head of the Centre on Global Health Security.





Chatham House Commission on Democracy and Technology in Europe

News Release, 25 July 2019

Our project on Democracy and Technology in Europe is now entering its final phase, and we want your help in shaping the final report.





Creon Butler appointed to lead Global Economy and Finance Programme

News Release, 22 October 2019

Creon Butler has been appointed to lead the Global Economy and Finance programme at Chatham House, joining the institute at the beginning of December. He will also form part of the institute’s senior leadership team.





Sir David Attenborough and the BBC Studios Natural History Unit awarded Chatham House Prize 2019 for ocean advocacy

News Release, 18 November 2019

The 2019 Chatham House Prize is awarded to Sir David Attenborough and Julian Hector, head of BBC Studios Natural History Unit, for the galvanizing impact of the Blue Planet II series on tackling ocean plastic pollution.





COVID-19 and Chatham House

News Release, 4 March 2020

Chatham House continues to operate during the coronavirus pandemic.





Announcing Design Resonance in an Age of Crisis

News Release, 1 June 2020

London Design Biennale and Chatham House announce Design Resonance in an Age of Crisis, which calls for action by designers around the world to create radical design solutions to critical problems across four key areas: Health, Environment, Society and Work.





Remembering Rosemary Hollis (1952-2020)

News Release, 12 June 2020

Professor Rosemary Hollis, a highly respected authority on the Middle East, died suddenly last week. Rosy is remembered with great respect and affection, as a colleague and a friend.





Design In An Age of Crisis - Open Call

News Release, 21 July 2020

Chatham House and London Design Biennale announce full details of 'Design In An Age of Crisis,' a global Open Call for radical design thinking.





Renata Dwan Joins as Deputy Director and Senior Executive Officer

News Release, 19 August 2020

Renata Dwan has been appointed deputy director and senior executive officer of Chatham House.





Chatham House Prize: Malawi Judges Win for Election Work

News Release, 23 October 2020

Malawi’s constitutional court judges have won the 2020 Chatham House Prize in recognition of their 'courage and independence in the defence of democracy'.





Strengthening Our Commitment to the Next Generation

News Release, 9 November 2020

Panel of Young Advisers and Queen Elizabeth II Academy Ambassadors underscore our drive to reach, engage and inspire young people to change their world.





New Strategic Partnership with the Robert Bosch Stiftung

News Release, 23 November 2020

The Robert Bosch Stiftung becomes a founding donor to Chatham House’s second century.





Lord Hammond Joins Panel of Senior Advisers

News Release, 10 December 2020

Chatham House is pleased to announce that Lord Hammond of Runnymede is joining our Panel of Senior Advisers.





Design in an Age of Crisis Launches

News Release, 13 January 2021

Design open call receives 500 submissions from over 50 countries across six continents.





New Chatham House History Examines our Defining Moments

News Release, 18 January 2021

'A History of Chatham House: its People and Influence from the 1920s to the 2020s' will examine the impact on policymaking of our first 100 years.





Supporting Next Generation of Leaders in Sustainability

News Release, 28 January 2021

A new programme offering paid internships for young people who are passionate about social, economic, and environmental sustainability has been launched.





Supporting Civic Space: The Role and Impact of the Private Sector

23 September 2020, 2:00PM to 4:15PM, Online

The meeting provides an opportunity to explore the drivers of – and barriers to – corporate activism.

A healthy civic space is vital for an enabling business environment. In recognition of this, a growing number of private sector actors are challenging, publicly or otherwise, the deteriorating environment for civic freedoms.

However, this corporate activism is often limited and largely ad hoc. It remains confined to a small cluster of multinationals, leaving potential routes for effective coordination and collaboration with other actors underexplored.

This roundtable brings together a diverse and international group of business actors, civil society actors and foreign policy experts to exchange perspectives and experiences on how the private sector can be involved in issues around civic space.

The meeting provides an opportunity to explore the drivers of – and barriers to – corporate activism, develop a better understanding of existing initiatives, identify good practice and discuss practical strategies for the business community.

This meeting is the first of a series of roundtables at Chatham House in support of initiatives to build broad alliances for the protection of civic space. 





Supporting Civic Space: The Role and Impact of the Tech Sector

13 October 2020, 2:00PM to 4:15PM, Online

This event brings together a diverse and international group of stakeholders to exchange perspectives and experiences on the role that tech actors can play in supporting civic space.

In a deteriorating environment for civic freedoms, tech sector actors are increasingly engaging, publicly or otherwise, on issues of civic space.

In the US, for example, a number of tech companies have cancelled contracts with the Pentagon and abandoned plans for a censored search engine in China as a result of protests by employees. The Asia Internet Coalition recently wrote to Pakistan’s Prime Minister expressing human rights concerns about new rules regulating social media.

While technology companies have recently shown support for social movements, including through substantial pledges, in some cases these gestures have elicited accusations of hypocrisy, and the interventions of social media platforms on freedom of expression and privacy issues have been closely linked to the preservation of their own business models.

The COVID-19 crisis has also posed new dilemmas for the tech sector, from the pervasiveness of disinformation to new tools for tracking individuals that raise privacy concerns.

This roundtable provides an opportunity to explore the drivers of (and barriers to) corporate activism, develop a better understanding of existing initiatives, identify good practice and routes to effective collaboration with other actors, and discuss practical strategies that could be adopted by the tech community.

It is the second of a series of roundtables at Chatham House in support of initiatives to build broad alliances for the protection of civic space.





How can companies defend civic space?

2 February 2021, 4:00PM to 5:00PM, Online

Panellists discuss how companies can go beyond corporate social responsibility and philanthropy initiatives to protect and support civic freedoms around the world.


There is increasing pressure on companies to use their power and profits to engage with social and political causes. In doing so, companies can help to support the ‘shared civic space’ that enables the private sector and civil society organizations to benefit from a society that respects the rule of law and human rights, at a time when many of these rights are under threat around the world.

Many companies have introduced CSR initiatives, due diligence mechanisms and corporate philanthropy. Over 11,000 companies are now signatories to the UN’s Global Compact for sustainable and socially responsible business worldwide.

But as demonstrated by misguided corporate responses to the Black Lives Matter protests this year, there is a danger of corporate activism being perceived as ‘lip service’ rather than genuinely addressing the negative impacts of business operations on civic space.

Recent Chatham House research indicates that meaningful engagement by businesses on such issues must be timely, contextually sensitive and industry-relevant. For example, in 2015, Tiffany & Co. worked with other companies to intervene on behalf of Rafael Marques after he was arrested for reporting on widespread human rights abuses in the Angolan diamond industry. During COVID-19, Microsoft offered free cybersecurity software to healthcare and human rights organizations at increased risk of hacking attacks.

This panel event will draw upon practical examples of private sector support for civic space across different sectors, geographies and political environments.

Why might companies step up to defend freedom of association, expression or political participation even where this comes at a financial or political cost? How can companies resist complicity with governments or regulation that threaten civic space? And what forums exist, or should exist, for developing tactical alliances between companies and civil society actors?

This event is also the launch of a new Chatham House resource, The Role of the Private Sector in Protecting Civic Space.





Deplatforming Trump puts big tech under fresh scrutiny

Expert comment, 22 January 2021

The response of digital platforms to the US Capitol riots raises questions about online content governance. The EU and UK are starting to come up with answers.

The ‘deplatforming’ of Donald Trump – including Twitter’s announcement that it has permanently banned him due to ‘the risk of further incitement of violence’ after the riots in the US – shows once more not only the sheer power of online platforms but also the lack of a coherent and consistent framework for online content governance.

Taking the megaphone away from Trump during the Capitol riots seems sensible, but was it necessary or proportionate to ban him from the platform permanently? Or consistent with the treatment of other ‘strongmen’ world leaders such as Modi, Duterte and Ayatollah Ali Khamenei who have overseen nationalistic violence but whose accounts remain intact?

Such complex decisions on online expression should not be made unilaterally by powerful and unregulated tech actors, but instead should be subject to democratic oversight and grounded in the obligations of states and responsibilities of companies under international human rights law.

The speed and scale of digital information have left governments across the world struggling with how to tackle online harms such as hate speech, extremist content and disinformation since the emergence of mass social media 15 years ago.

The US’s hallowed approach to the First Amendment, under which speech on public issues – even hate speech – occupies the highest rank and is entitled to special protection, has contributed to a reluctance to regulate Silicon Valley’s digital platforms. But the irony is that by not regulating them, the government harmed freedom of expression by leaving complex speech decisions in the hands of private actors.

Meanwhile at the other extreme is the growing number of illiberal and authoritarian governments using a combination of vague laws, censorship, propaganda, and internet blackouts to severely restrict online freedom of expression, control the narrative and, in some cases, incite atrocities.

Regulation is on the way

The happy medium – flexible online content regulation providing clarity, predictability, transparency, and accountability – has until now been elusive. But even before the deplatforming of Trump, 2021 was set to be the year when this approach finally gained some traction, at least in Europe.

The EU’s recently published draft Digital Services Act puts obligations on dominant social media platforms to manage ‘systemic risks’, for example through requirements for greater transparency about their content decisions, the algorithms used for recommendations, and their online advertising systems.

The UK will shortly publish its Online Safety Bill, which will establish a new regulatory framework for tackling online harms, including the imposition of a duty of care and codes of conduct on Big Tech, to be overseen by an independent regulator (Ofcom).

Both proposals are based on a ‘co-regulatory’ model: the regulator sets out a framework, the private sector substantiates it with rules, and the regulator then monitors compliance with those rules.

Both also draw on international human rights standards and the work of civil society in applying these standards in relation to the online public square, with the aim of increasing control for users over what they see online, requiring transparency about tech companies’ policies in a number of areas, and strengthening the accountability of platforms when they fall foul of the regulation.

The procedure for both proposals has also been inclusive, involving extensive multi-stakeholder consultations with civil society organizations and Big Tech, and the proposals will be subject to scrutiny in 2021, notably from the EU and UK parliaments.

Both proposals are at an early stage, and it remains to be seen whether they go far enough – or indeed will have a chilling effect on online platforms. But as an attempt to initiate a dialogue on globally coherent principles, they are positive first steps. They also provide food for thought for the new Joe Biden administration in the US as it turns its attention to the regulation of Big Tech.

For some time, civil society – most prominently David Kaye, the former UN Special Rapporteur on freedom of opinion and expression – has called for content regulation to be informed by universal international human rights law standards.

The EU and UK are particularly well placed to take the lead in this area because European countries have for decades been on the receiving end of judgments from the European Court of Human Rights on the appropriate limits to freedom of expression in cases brought under the European Convention on Human Rights.

In deciding these cases, the court has to balance the right to freedom of expression against the restrictions imposed – for example in the context of incitement to violence, political debate, and satire. Deciding where to draw the line on what can and cannot be expressed in a civilised society which prizes freedom of expression is inevitably a difficult exercise.

International human rights law provides a methodology that asks whether an interference with freedom of expression was prescribed by law and pursued a legitimate aim, and whether it was necessary in a democratic society to achieve that aim – including whether the interference was proportionate (as, for example, in Delfi AS v Estonia, which involved a news portal failing to take down unlawful hate speech).

To be effective, online content regulation has to bite on tech companies, which is a challenge given the internet is global but domestic law normally applies territorially. The EU’s proposals have an extraterritorial element as they apply to any online platforms providing services in the EU regardless of where the platform is headquartered.

Further, both the EU and UK want to give the regulator strong enforcement powers – it is proposed for example that Ofcom will have powers to fine platforms up to ten per cent of their turnover for breaches.

Although the proposals would not apply directly to the deplatforming of Trump which occurred in the US, the philosophy behind the EU and UK approach is likely to have an impact beyond European shores in promoting a co-regulatory model that some of the bigger tech companies have been inviting for some time, reluctant as they are to ‘play God’ on content moderation decisions without reference to any regulatory framework.

In the absence of regulation, the standards of tech platforms such as Facebook and Twitter have already evolved over time in response to pressure from civil rights groups, users, and advertisers, including updated policies on protecting civic conversation and hate speech.

Facebook has also set up an independent Oversight Board, whose members include leading human rights lawyers, to review decisions on content including – at its own request – the decision to indefinitely suspend Trump from Facebook and Instagram. Decisions on the Board’s first tranche of cases are expected imminently.

Gatekeeper status is key

Online content regulation also needs to address the role of Big Tech as the ‘digital gatekeepers’, because their monopoly power extends not just to editorial control of the news and information we consume, but also to market access.

The decision of Apple, Google, and Amazon to stop hosting right-wing social network Parler after it refused to combat calls for violence during the US Capitol riots was understandable in the circumstances, but also underlined the unilateral ability of Big Tech to decide the rules of the market.

Again, it is Europe where efforts are underway to tackle this issue: the EU’s draft Digital Markets Act imposes obligations on online gatekeepers to avoid certain unfair practices, and the UK’s new Digital Markets Unit will have powers to write and enforce a new code of practice on those technology companies with ‘substantial and enduring’ market power.

In the US, Biden’s team will be following these developments with interest, given the growing bipartisan support for strengthening US antitrust rules and reviving antitrust enforcement. The EU’s recently published proposals for an EU-US tech agenda include a transatlantic dialogue on the responsibility of tech platforms and strengthened cooperation between antitrust authorities on digital markets.

Ultimately a consistent – and global – approach to online content is needed instead of fragmented approaches by different companies and governments. It is also important that the framework is flexible, so that it is capable of applying not only to major democracies but also to countries where sweeping state regulation has too often been used as a pretext to curtail expression online.

The pursuit of a pluralistic framework tailored to different political and cultural contexts is challenging, and international human rights law cannot provide all the answers but, as a universal framework, it is a good place to start. The raft of regulatory measures from the EU and UK means that, regardless of whether Trump regains his online megaphone, 2021 is set to be a year of reckoning for Big Tech.





The UK's new Online Safety Bill

10 February 2021, 3:00PM to 3:45PM, Online

Discussing the new proposals which include the establishment of a new ‘duty of care’ on companies to ensure they have robust systems in place to keep their users safe.

Governments, regulators and tech companies are currently grappling with the challenge of how to promote an open and vibrant internet while also tackling harmful activity online, including the spread of hateful content and terrorist propaganda, cyberbullying, and child sexual exploitation and abuse.

The UK government’s Online Harms proposals include the establishment of a new ‘duty of care’ on companies to ensure they have robust systems in place to keep their users safe. Compliance with this new duty will be overseen by an independent regulator.

On 15 December 2020, the Department for Digital, Culture, Media and Sport (DCMS) and the Home Office published the full UK government response, setting out the intended policy positions for the regulatory framework and confirming Ofcom as the regulator.

With the legislation likely to be introduced early this year, the panel will discuss questions including:

  • How to strike the balance between freedom of expression and protecting adults from harmful material?

  • How to ensure the legislation’s approach to harm is sufficiently future-proofed so new trends and harms are covered as they emerge?

  • What additional responsibilities will tech companies have under the new regulation?

  • Will the regulator have sufficient powers to tackle the wide range of harms in question?

This event is invite-only, but you can watch the livestream of the discussion on this page at 15.00 GMT on Wednesday 10 February.





Implications of post-COVID-19 Restructuring of Supply Chains for Global Investment Governance

14 July 2020, 9:00AM to 10:30AM, Online

As companies rethink and diversify their supply chains in order to enhance resilience, what will this mean for current and future global investment governance?

What are the risks of negative effects on inclusivity and transparency? Does this shift create an opportunity to advance good governance of cross-border investment practices?

This event is part of the Inclusive Governance Initiative, which is examining how to build more inclusive models and mechanisms of global governance fit for purpose in today’s world.





The Future of Investment Dispute Settlement Regimes

30 June 2020, 2:00PM to 3:30PM, Online

This event is part of the Inclusive Governance Initiative, which is examining how to build more inclusive models and mechanisms of global governance fit for purpose in today’s world.

Is an ‘atomized’ approach to cross-border investment dispute resolution inevitable? Has the multiplicity of mechanisms helped or hindered inclusivity and transparency in governance? Is there a need, and scope, to increase the international coordination of dispute resolution mechanisms? If so, what form should that coordination take? What could be the implications for international economic law?





Insights from Climate Policy: Engaging Subnational Governments in Global Platforms

10 June 2020, 2:45PM to 6:00PM, Online

How have subnational governments shaped the global agenda and created momentum on climate change where national and international governance processes could not?

Can these advances be converted into meaningful collaboration channels for policy development? What works, or does not, when it comes to engagement with multilateral negotiation processes? What ingredients are necessary for success? What are the broader implications of these trends for inclusivity and innovation in international governance?

This event is part of the Inclusive Governance Initiative, which is examining how to build more inclusive models and mechanisms of global governance fit for purpose in today’s world.





Innovating Governance: Examples from the Digital Arena

25–26 February 2020, 10:00AM to 11:30AM, Chatham House

The Inclusive Governance Initiative is launched with this roundtable on digital governance.

The Inclusive Governance Initiative, a centenary project which is examining how to build more inclusive models and mechanisms of global governance fit for purpose in today’s world, is launched with this roundtable on digital governance.

The event brings together a diverse and multidisciplinary group of leading experts to consider where and how early initiatives around governance of the digital sphere have succeeded – or not – and how they are evolving today.

The conversation will include the debate between multilateral and multi-stakeholder approaches, the opportunities and challenges of collective non-binding commitments, and converting civil society collaboration into policy contribution.





The Implication of Greater Use of Investment Screening

26 June 2020, 9:00AM to 10:30AM, Online

What is driving the trend towards greater use of investment screening by nation states and regional economic groupings?

  • How is the COVID-19 crisis affecting this trend?
  • What will the economic implications be?
  • Will this help or hinder inclusivity and transparency in investment governance?
  • Is there a role for international safeguards and/or international coordination of national/regional approaches to investment screening?

This event is part of the Inclusive Governance Initiative, which is examining how to build more inclusive models and mechanisms of global governance fit for purpose in today’s world.





Persuasion or manipulation? Limiting campaigning online

Expert comment, 15 February 2021

To tackle online disinformation and manipulation effectively, regulators must clarify the dividing line between legitimate and illegitimate campaign practices.

Democracy is at risk, not only from disinformation but from systemic manipulation of public debate online. Evidence shows social media drives control of narratives, polarization, and division on issues of politics and identity. We are now seeing regulators turn their attention to protecting democracy from disinformation and manipulation. But how should they distinguish between legitimate and illegitimate online information practices, between persuasive and manipulative campaigning?

Unregulated, the tactics of disinformation and manipulation have spread far and wide. They are no longer the preserve merely of disaffected individuals, hostile international actors, and authoritarian regimes. Facebook’s periodic reporting on coordinated inauthentic behaviour and Twitter’s on foreign information operations reveal that militaries, governments, and political campaigners in a wide range of countries, including parts of Europe and America, have engaged in manipulative or deceptive information campaigns.

For example, in September 2019, Twitter removed 259 accounts operated by Spain’s conservative Christian-democratic party, the Partido Popular, which it said had been ‘falsely boosting’ public sentiment online. In October 2020, Facebook removed accounts with around 400,000 followers linked to Rally Forge, a US marketing firm which Facebook claims was working on behalf of the right-wing organizations Turning Point USA and Inclusive Conservation Group. And in December 2020, Facebook took down a network of accounts with more than 6,000 followers, targeting audiences in Francophone Africa and focusing on France’s policies there, finding it linked to individuals associated with the French military.

Public influence on a global scale

Even more revealingly, in its 2020 Global Inventory of Organized Social Media Manipulation, the Oxford Internet Institute (OII) found that in 81 countries, government agencies and/or political parties are using ‘computational propaganda’ in social media to shape public attitudes.

These 81 countries span the world and include not only authoritarian and less democratic regimes but also developed democracies such as many EU member states. OII found that countries with the largest capacity for computational propaganda – which include the UK, US, and Australia – have permanent teams devoted to shaping the online space overseas and at home.

OII categorizes computational propaganda as four types of communication strategy – the creation of disinformation or manipulated content such as doctored images and videos; the use of personal data to target specific segments of the population with disinformation or other false narratives; trolling, doxing or online harassment of political opponents, activists or journalists; and mass-reporting of content or accounts posted or run by opponents as part of gaming the platforms’ automated flagging, demotion, and take-down systems.

Doubtless some of the governments included within OII’s statistics argue their behaviour is legitimate and appropriate, either to disseminate information important to the public interest or to wrestle control of the narrative away from hostile actors. Similarly, no doubt some political campaigners removed by the platforms for alleged engagement in ‘inauthentic behaviour’ or ‘manipulation’ would defend the legitimacy of their conduct.

The fact is that clear limits of acceptable propaganda and information influence operations online do not exist. Platforms still share little information overall about what information operations they see being conducted online. Applicable legal principles such as international human rights law have not yet crystallized into clear rules. And as information operations are rarely exposed to public view – with notable exceptions such as the Cambridge Analytica scandal – there is relatively little constraint from media and public scrutiny or censure.

OII’s annual reports and the platforms’ periodic reports demonstrate a continual expansion of deceptive and manipulative practices since 2016, and increasing involvement of private commercial companies in their deployment. Given the power of political influence as a driver, this absence of clear limits may result in ever more sophisticated techniques being deployed in the search for maximal influence.

Ambiguity over reasonable limits on manipulation plays into the hands of governments which regulate ostensibly in the name of combating disinformation, but actually in the interests of maintaining their own control of the narrative and in disregard of the human right to freedom of expression. Following Singapore’s 2019 prohibition of online untruths, 17 governments ranging from Bolivia to Vietnam to Hungary passed regulations during 2020 criminalizing ‘fake news’ on COVID-19, while many other governments are alleged to censor opposition arguments or criticisms of official state narratives.

Clear limits are needed. Facebook itself has been calling for societal discussion about the limits of acceptable online behaviour for some time and has issued recommendations of its own.

The European Democracy Action Plan: Aiming to protect pluralism and vigour in democracy

The European Democracy Action Plan (EDAP), which complements the European Commission’s Digital Services Act and Digital Markets Act proposals, is a welcome step. It is ground-breaking in its efforts to protect the pluralism and vigour of European democracies by tackling all forms of online manipulation, while respecting human rights.

While the EDAP tackles disinformation, it also condemns two categories of online manipulation – information influence operations which EDAP describes as ‘coordinated efforts by either domestic or foreign actors to influence a target audience using a range of deceptive means’ and foreign interference, described as ‘coercive and deceptive efforts to disrupt the free formation and expression of individuals’ political will by a foreign state actor or its agents’. These categories include influence operations such as harnessing fake accounts or gaming algorithms, and the suppression of independent information sources through censorship or mass reporting.

But the categories are so broad they risk capturing disinformation practices not only of rogue actors, but also of governments and political campaigners both outside and within the EU. The European Commission plans to work towards refined definitions. Its discussions with member states and other stakeholders should start to determine which practices ought to be tackled as manipulative, and which ought to be tolerated as legitimate campaigning or public information practices.

The extent of the EDAP proposals on disinformation demonstrates the EU’s determination to tackle online manipulation. The EDAP calls for improved practical measures building on the Commission’s 2020 acceleration of effort in the face of COVID-19 disinformation. The Commission is considering how best to impose costs on perpetrators of disinformation, such as by disrupting financial incentives or even imposing sanctions for repeated offences.

Beyond the regulatory and risk management framework proposed by the Digital Services Act (DSA), the Commission says it will issue guidance for platforms and other stakeholders to strengthen their measures against disinformation, building on the existing EU Code of Practice on Disinformation and eventually leading to a strengthened Code with more robust monitoring requirements. These are elements of a broader package of measures in the EDAP to preserve democracy in Europe.

Until there are clear limits, manipulative practices will continue to develop and to spread. More actors will resort to them in order not to be outgunned by opponents. It is hoped forthcoming European discussions – involving EU member state governments, the European Parliament, civil society, academia and the online platforms – will begin to shape at least a European and maybe a global consensus on the limits of information influence, publicly condemning unacceptable practices while safeguarding freedom of expression.

Most importantly, following the example of the EDAP, the preservation of democracy and human rights – rather than the promotion of political or commercial interest – should be the lodestar for those discussions.





Imagine a World Without Fake News

Explainer Video, 25 February 2021

Harriet Moynihan and Mathieu Boulegue explain how we can avoid drowning in an ocean of fake news and information manipulation.

The flow of fake news is vast and unlikely to go away. What’s more, imagining a world where fake news is eradicated completely has implications for freedom of expression.

But what if, instead of wishing fake news away, we could adapt and become immune to it?

Chatham House is built on big ideas. Help us imagine a better world.

Our researchers develop positive solutions to global challenges, working with governments, charities, businesses and society to build a better future.

SNF CoLab is our project supported by the Stavros Niarchos Foundation (SNF) to share our ideas in experimental, collaborative ways – and to learn about designing a better future, overcoming challenges such as fake news, COVID-19, food security, and conflict.





The regional and international implications of restrictions to online freedom of expression in Asia

25 March 2021, 12:30PM to 1:30PM, Online

Panellists discuss the latest developments affecting online freedom of expression in the Asia region.


In recent years, state-led clampdowns on online freedom of expression have become widespread in several countries across Asia, further intensified by the COVID-19 crisis.

The reasons for this are complex and diverse – drawing upon history, culture and politics, in addition to external influences. Across the region, governments have been accused of silencing online criticism and failing to uphold rights to free speech.

Individuals have been arrested, fined or attacked for the alleged spread of ‘fake news’, raising concern among human rights organizations. In some countries, this has culminated in the imposition of new social media rules, which could require social media companies to censor posts and share decrypted messages.

In China, the government’s restrictive online regime has relied on a combination of legal, technical and manipulation tactics to manage control of the internet, and now includes attempts at censorship beyond its borders.

Panellists will discuss the latest regional developments affecting online freedom of expression in the Asia region, and will consider the broader regional and international implications for technology governance.

This webinar launches the publication Restrictions on online freedom of expression in China: The domestic, regional and international implications of China’s policies and practices.





Battle lines being drawn over online freedoms in Asia

Expert comment, 22 March 2021

Social media giants are increasingly clashing with Asian governments over free expression and censorship as the region lurches towards digital authoritarianism.

Freedom of expression was subject to significant restrictions in Asia even before the pandemic, with several governments having enacted laws that stifle online debate. But since COVID-19, restrictions have increased even further due to a rash of so-called ‘emergency measures’ introduced by governments across the region.

Bangladesh, India, Indonesia, Malaysia, Myanmar, Nepal, Pakistan, the Philippines, Sri Lanka, Thailand, and Vietnam have all put new laws into place, and many restrictions are already being applied in a draconian fashion, such as in the Philippines and Bangladesh.

As outlined in a new Chatham House research paper, one inspiration behind this trend is China, home to the world’s most sophisticated and restrictive system of internet control. The Chinese government’s restrictive online regime, which has tightened further under COVID-19, relies on a combination of legal regulations, technical controls, and proactive manipulation of online debates.


This model was an inspiration for Vietnam’s cybersecurity law, as well as for Myanmar’s new draft cybersecurity bill, proposed by the military-run State Administration Council in the wake of last month’s coup, which would give the military extensive powers to access individuals’ data and to restrict or suspend access to the internet.

This ‘sovereignty and control’ model of internet governance is also gaining impetus through China’s ‘Digital Silk Road’ initiative, under which the Chinese government is exporting both its technology – such as through the establishment of smart cities, the installation of AI, and surveillance technology – and its vision of how the internet should be governed.

In November 2020, Xi Jinping pledged to further deepen cooperation with ASEAN through the Digital Silk Road, and the pandemic has expanded the appeal of Chinese surveillance technologies and data collection platforms to governments both in Asia and beyond. China’s Health Silk Road, which aims to promote global health cooperation, is centered on the Chinese government’s high-tech model under which civic freedoms are sacrificed in the name of public health.

An alternative model

This ‘sovereignty and control’ model is increasingly at odds with the more ‘human-centric’ model of tech governance favoured by many democratic states, Western social media companies, and international institutions, especially the United Nations (UN) and European Union (EU).

Although this emerging model also involves regulation, it is regulation which aims to be inclusive, risk-based, and proportionate – balancing the need for protection against online harms with the need to preserve freedom of expression. It is a multi-stakeholder, rights-based approach which brings together not just governments but also representatives of the private sector, civil society, and academia. The EU’s draft Digital Services Act and the UK’s proposals for an Online Safety Bill are both reflective of this approach.

Western social media giants such as Facebook and Twitter have recently introduced new policies which seek to identify and mitigate online harms, such as hate speech and disinformation. Industry bodies such as the Global Network Initiative, independent oversight bodies such as the Oversight Board established by Facebook, and civil society advocacy and initiatives such as the Santa Clara Principles on Transparency and Accountability in Content Moderation are also an important part of the picture.


Admittedly, these various digital governance initiatives are in some cases embryonic, and are by no means a silver bullet solution to the complex problem of online content moderation, which continues to be hotly debated in democratic societies. But they are at least underpinned by the same philosophy – that international human rights law standards must continue to apply even during emergencies such as COVID-19. With the Biden administration in the US prioritizing tech governance in its policy agenda, there is added momentum to the international leadership behind this model.

A clash of ideology

These conflicting philosophies are playing out in debates on technology governance at the UN, with one group of countries led by China and Russia advocating for greater government control of the internet, and many Western democracies emphasizing the need for an open, global internet that protects human rights.

These differing ideologies are also creating tensions between Western social media companies operating in Asia and the various governments in that region which have increased restrictions on online expression. And the gulf between the two appears to be widening.

In 2017, the Thai government threatened Facebook with legal action unless it agreed to remove content critical of Thailand’s royal family and, in 2020, Facebook announced it had been ‘forced to block’ such material. Also in 2020, the Vietnamese government pressured state-owned telecom companies to throttle internet traffic to Facebook, effectively restricting access to the platform, until Facebook agreed to take down content the government deemed to be anti-state.

Platforms refuse to silence legitimate criticism

However, Silicon Valley’s social media companies have also been pushing back. Facebook restricted the accounts of Myanmar’s military for ‘spreading misinformation’ in the wake of the military’s imposition of an internet shutdown that blocked access to Facebook, Twitter, and Instagram. And Twitter resisted requests by the Indian government to block accounts involved in protests by farmers.

Twitter stated that while it would block any accounts which it felt incited violence, it would not take action on accounts belonging to news media entities, journalists, activists, and politicians because it believed that would ‘violate the fundamental right to free expression under the Indian law’. The Indian government responded by fast-tracking stringent new social media regulations heavily criticized by rights groups for increasing government power over content on social media platforms, including online news.

So how can social media companies find avenues for operating in Asia and beyond without being co-opted into the lurch towards digital authoritarianism? There are no easy answers here, but collaboration is key. Cooperation between tech companies and local civil society partners can help companies better understand risks to human rights in the country concerned and how they might be mitigated. And tech companies are more effective in alliance with each other than acting on their own, such as the refusal by Facebook, Google, Telegram, and Twitter to hand over data on protestors to the Hong Kong police.


The fact that in many countries in Asia there are no alternatives to Western social media companies – unlike China, where platforms such as WeChat are part of the government’s internet control apparatus – gives the companies concerned some leverage. In February 2020, Facebook, Google, and Twitter together – through the Asia Internet Coalition – threatened to leave Pakistan in response to the government’s draconian proposals to regulate social media. Along with pressure and lawsuits from civil society, this forced the government into retreat, although the tussle over the new rules, introduced in November, continues.

At a time when illiberalism was already on the rise in Asia (including in democracies – Freedom House has just downgraded India’s status from ‘free’ to ‘partly free’), COVID-19 has made tighter state control of online freedom of expression even more attractive to many governments. As it seems increasingly unlikely that restrictions enacted under the guise of pandemic-related emergency measures will be repealed once the COVID-19 crisis ends, it is even more important that tech companies work with civil society on the ground to minimize the censorship of citizen voices.





Rebuilding trust is central to the UN’s future

Expert comment, 25 March 2021

António Guterres is under scrutiny as he prepares to report on the future of the United Nations, with a renewed focus on trust, resilience and prevention.

The United Nations Secretary-General’s inbox is full as his organization celebrates its 75th anniversary. Trust must be rebuilt amid increased geopolitical rivalry, North-South divisions, and scepticism among citizens left behind by globalization. The international community has manifestly underinvested in institutional resilience and prevention. Better partnerships are needed with the private sector, and innovative forms of cross-regional cooperation must be fostered.

There are positive signs UN member states want things to change. They unanimously agreed a Political Declaration last September strongly reaffirming multilateralism, and they gave António Guterres one year to present a roadmap on how to respond, ‘building back better’ in the face of climate change and COVID-19.


A key challenge is to steer mandates and resources towards prevention. The World Bank-WHO Global Preparedness Monitoring Board, which eerily predicted the pandemic in its inaugural report in September 2019, reminds us successful prevention rests not on warning alone, but on aligned incentives for early action.

Geopolitical tensions persist

China has invested significantly in the multilateral system over the last decade, both in formal organizations such as the UN and the African Union, and in fostering a set of China-centred ‘mini-lateral’ fora such as the SCO, BRICS and BRI. It has also deepened its ties with Russia in the UN Security Council. Western countries both begrudgingly admire and deeply distrust China’s nimbleness in advancing its interests and values in this way but are divided on how to respond.

The Biden administration has recommitted itself to multilateral processes but US bilateral relations are likely to remain the main foreign policy driver. The UK has sought to convert the G7 into an enlarged summit-level meeting for democracies but Europe is divided over the wisdom of formalizing a group which may increase divisions with China, and some major democracies – India for example – have divergent approaches on issues such as trade protection.

An increase in cross-regional informal caucusing within the UN system to advance norms and progress around specific common objectives is likely. Guterres can encourage smaller powers to become ‘bridge builders’ sitting in the middle of a ‘Venn diagram’ of such new member state constellations at the UN.

Guterres can also build on the recent Abraham Accords to encourage cross-regional cultural, political and security relationships on the back of trade and investment, and map practical opportunities for strategic cooperation between China and the West in health and food security, climate and biodiversity, and global macroeconomic management, while fostering new normative frameworks to manage strategic competition in artificial intelligence (AI), big data, cyber resilience, gene-editing, and automation.

North-South mistrust

Realizing the Sustainable Development Goals (SDGs) and climate objectives rests in part on mobilizing the expertise and resources of sub-state actors such as business and city and regional authorities. However, developing countries remain wary of granting the UN Secretary-General a greater role in fostering partnerships with the private sector and mobilizing private finance, out of fear this may overshadow the global North’s promises to provide aid and create fairer trade and debt conditions.

In addition, African governments are expressing growing frustration at their continued lack of ‘agency’ in UN decision-making, the global North’s reneging on climate financing promises, and the slow rollout of the COVAX facility to developing countries.

Progress may lie in two areas. First, developing country leadership of initiatives – such as the Friends Group on SDG financing established by the Jamaican and Canadian ambassadors to the UN – can help build trust and allay concerns, which is vital to incentivise transformative investment by sovereign wealth, pension, and insurance funds in pro-poor low carbon infrastructure in developing countries.

The second area is curating multi-stakeholder initiatives outside the UN framework and then linking them back to the organization once they have proven to be beneficial to both developed and developing countries. Successful initiatives such as the Vaccine Alliance can be a model of how to do this while not detracting from state obligations.

Scepticism among citizens

Trust in governance also needs rebuilding at the level of the individual citizen. Mobilized by populist movements and ‘fake news’ online, individuals left behind by the uneven economic benefits of globalization view governments and international organizations as unaccountable and as not having their interests at heart.


Guterres has called for a new ‘social contract’ between governments and their citizens, and for ‘Multilateralism 2.0’ to demonstrate a practical ‘hard interest’ as well as a ‘values’ case for why international cooperation inclusively benefits individuals as well as states. Technological innovation can also help citizens hold governments to account. As the first Secretary-General with a science and engineering background, Guterres has championed the use of technology to enhance the UN’s delivery of its objectives.

The pairing of artificial intelligence with satellites and drones for geospatial insight has been pioneered by both the United Nations Environment Programme (UNEP) and the Food and Agriculture Organization (FAO) to help communities preserve ecosystems and agricultural productivity. The resulting data, accessible on smartphones and computers, enables civil society to measure governments’ promises against real-time progress, for example by monitoring greenhouse gas emissions from power stations.

Alongside trust and accountability, fostering inclusiveness is likely to be central to Guterres’ report as he navigates how the UN can legitimize multi-stakeholder partnerships, enhance transparency, and bring coherence to diverse ‘mini-lateral’ initiatives.

These themes are explored further in the forthcoming synthesis paper, ‘Reflections on building more inclusive global governance: Ten insights into emerging practice’.





A seat at the table – why inclusivity matters in global governance

10 May 2021, 1:30PM to 3:00PM, Online

Exploring the changing dynamics of global cooperation and the role inclusivity can play in building collaborative action.


The scale of today’s global challenges demands collaborative and coordinated action. But deepening geopolitical competition is threatening multilateralism, while growing inequality and social tensions continue to undermine public confidence in the ability of international institutions to deliver.

Into this challenging environment, add the complexity and sheer pace of many global challenges such as the climate crisis and the proliferation of new technologies – issues that cannot be addressed effectively by governments alone.

  • How do global institutions and mechanisms need to adapt to address the demands for a fairer distribution of power between states and to engage the diverse set of actors essential today for effective solutions?
  • What can be learnt from existing initiatives that bring together governments, civil society, private sector, cities, next generation leaders and other stakeholders?
  • And what are the political obstacles to greater inclusivity?

This event supports the launch of a synthesis paper from Chatham House’s Inclusive Governance Initiative.





Can global technology governance anticipate the future?

Expert comment, 27 April 2021

Trying to govern disruption is perilous as complex technology is increasingly embedded in societies and omnipresent in economic, social, and political activity.

Technology governance is beset by the challenges of how regulation can keep pace with rapid digital transformation, how governments can regulate in a context of deep knowledge asymmetry, and how policymakers can address the transnational nature of technology.

Keeping pace with, much less understanding, the implications of digital platforms and artificial intelligence for societies is increasingly challenging as technology becomes more sophisticated and yet more ubiquitous.

To overcome these obstacles, there is an urgent need to move towards a more anticipatory and inclusive model of technology governance. There are some signs of this in recent proposals by the European Union (EU) and the UK on the regulation of online harms.

Regulation failing to keep up

The speed of the digital revolution, further accelerated by the pandemic, has largely outstripped policymakers’ ability to provide appropriate frameworks to regulate and direct technology transformations.

Governments around the world face a ‘pacing problem’, a phenomenon described by Gary Marchant in 2011 as ‘the growing gap between the pace of science and technology and the lagging responsiveness of legal and ethical oversight that society relies on to govern emerging technologies’.


This ever-growing rift, Marchant argues, has been exacerbated by the increasing public appetite for and adoption of new technologies, as well as political inertia. As a result, legislation on emerging technologies risks being ineffective or out-of-date by the time it is implemented.

Effective regulation requires a thorough understanding of both the underlying technology design, processes and business model, and how current or new policy tools can be used to promote principles of good governance.

Artificial intelligence, for example, is penetrating all sectors of society and spanning multiple regulatory regimes without any regard for jurisdictional boundaries. As technology is increasingly developed and applied by the private sector rather than the state, officials often lack the technical expertise to adequately comprehend and act on emerging issues. This increases the risk of superficial regulation which fails to address the underlying structural causes of societal harms.

The significant knowledge gap between those who aim to regulate and those who design, develop and market technology is prevalent in most technology-related domains, including powerful online platforms and providers such as Facebook, Twitter, Google and YouTube.

For example, the algorithms used in the business model of social media companies to promote online content – harmful or otherwise – remain largely inaccessible to governments and researchers, so, to a crucial extent, the regulator is operating in the dark.

The transnational nature of technology also poses additional problems for effective governance. Digital technologies intensify the gathering, harvesting, and transfer of data across borders, challenging administrative boundaries both domestically and internationally.

While there have been some efforts at the international level to coordinate approaches to the regulation of – for example – artificial intelligence (AI) and online content governance, more work is needed to promote global regulatory alignment, including on cross-border data flows and antitrust.

Reactive national legislative approaches are often based on targeted interventions in specific policy areas, and so risk failing to address the scale, complexity, and transnational nature of socio-technological challenges. Greater attention needs to be placed on how regulatory functions and policy tools should evolve to effectively govern technology, requiring a shift from a reactionary and rigid framework to a more anticipatory and adaptive model of governance.

Holistic and systemic versus mechanistic and linear

Some recent proposals for technology governance may offer potential solutions. The EU publication of a series of interlinked regulatory proposals – the Digital Services Act, Digital Markets Act and European Democracy Action Plan – integrates several novel and anticipatory features.

The EU package recognizes that the solutions to online harms such as disinformation, hate speech, and extremism lie in a holistic approach which draws on a range of disciplines, such as international human rights law, competition law, e-commerce, and behavioural science.


It consists of a combination of light-touch regulation – such as codes of conduct – and hard law requirements such as transparency obligations. Codes of conduct provide flexibility as to how requirements are achieved by digital platforms, and can be updated and tweaked relatively easily, enabling regulation to keep pace as technology evolves.

As with the EU Digital Services Act, the UK’s recent proposals for an online safety bill are innovative in adopting a ‘systems-based’ approach which broadly focuses on the procedures and policies of technology companies rather than the substance of online content.

This means the proposals can be adapted to different types of content, and differentiated according to the size and reach of the technology company concerned. This ‘co-regulatory’ model recognizes the evolving nature of digital ecosystems and the ongoing responsibilities of the companies concerned. The forthcoming UK draft legislation will also be complemented by a ‘Safety by Design’ framework, which is forward-looking in focusing on responsible product design.

By tackling the complexity and unpredictability of technology governance through holistic and systemic approaches rather than mechanistic and linear ones, the UK and EU proposals represent an important pivot from reactive to anticipatory digital governance.

Both sets of proposals were also the result of extensive multistakeholder engagement, including between policy officials and technology actors. This engagement broke down silos within the technical and policy/legal communities and helped bridge the knowledge gap between dominant technology companies and policymakers, facilitating a more agile, inclusive, and pragmatic regulatory approach.

Coherence rather than fragmentation

Anticipatory governance also recognizes the need for new coalitions to promote regulatory coherence rather than fragmentation at the international level. The EU has been pushing for greater transatlantic engagement on regulation of the digital space, and the UK – holder of the G7 presidency in 2021 – aims to work with democratic allies to forge a coherent response to online harms.

Meanwhile the OECD’s AI Policy Observatory enables member states to share best practice on the regulation of AI, and an increasing number of states such as France, Norway, and the UK are using ‘regulatory sandboxes’ to test and build AI or personal data systems that meet privacy standards.

Not all states currently have the organizational capacity and institutional depth to design and deliver regulatory schemes of this nature, as well as the resource-intensive consultation processes which often accompany them.

So, as an increasing number of states ponder how to ‘futureproof’ their regulation of tomorrow’s technology – whether 6G, quantum computing or biotechnology – there is a need for capacity building in governments both on the theory of anticipatory governance and on how it can be applied in practice to global technology regulation.





Facebook's power under scrutiny as Trump ban upheld

Facebook's power under scrutiny as Trump ban upheld Expert comment NCapeling 6 May 2021

Keeping Donald Trump’s Facebook ban in place shows the vast power social media platforms hold, raising questions of whether that power is appropriately used.

Kate Jones

From a human rights perspective, the Oversight Board’s decision is a strong one, and not at all surprising. The board decided Facebook was right to suspend the former president’s access to post content on Facebook and Instagram, but not indefinitely.

It found Donald Trump’s posts violated Facebook’s community standards because they amounted to praise or support of people engaged in violence and that, applying a human rights assessment, Facebook’s suspension of Trump was a necessary and proportionate restriction of his right to freedom of expression.


However, the board also found Trump’s indefinite suspension was neither in conformity with a clear Facebook procedure nor consistent with its commitment to respect human rights. Its decision requires Facebook to make a new decision on the future of Donald Trump’s account, grounded in its rules.

While opinions on this result will differ, the increased call for clear and accessible rules and respect for human rights in their implementation that the Oversight Board brings to Facebook’s operations is welcome.

But the Oversight Board’s powers are limited to content moderation – Facebook declined to answer the board’s questions about amplification of Trump’s posts through the platform’s design decisions and algorithms. This limitation on the board’s role should be lifted. It is in content amplification, not just content moderation, that Facebook should face scrutiny and accountability for the sake of the human rights of its users.

Fundamentally, human rights is not a veneer which can mask or legitimize underlying power dynamics or public policy – those still fall to be assessed for themselves.

The Trump/Facebook saga does highlight the vast power Facebook and other major social media platforms have over political discussion and persuasion. Through granting or denying, or through amplifying or quietening the voices of political figures, Facebook has the power to shape politics, electorates, and democratic processes. Improving content moderation through the Oversight Board, although important, does little to constrain that power.

Facebook itself, unlike a government, has no accountability to the general public, and the Oversight Board must not distract us from the need for a full conversation about the extent to which Facebook’s power is appropriately held and properly wielded.

Emily Taylor

This decision marks a coming of age for Facebook’s content moderation process. For years, decisions to take down content or ban users have been opaque, conducted by a human workforce that Facebook and other platforms have been hesitant to acknowledge. The platforms have also been worried that being seen to exercise an editorial function might put at risk the legal protections which prevent the platforms being held responsible for user-generated content.

When the Oversight Board was first posited, observers questioned whether a body funded by Facebook could properly exercise a legitimate appeals function. Now there is a reasoned decision which partly supports the decision to de-platform a serving president, but also takes issue with the indefinite nature of the ban.


Facebook specifically asked the Oversight Board to consider the challenges that arise when the person involved is a political leader. The board concluded that Trump’s ‘status as head of state with a high position of trust not only imbued his words with greater force and credibility but also created risks that his followers would understand they could act with impunity’. The storming of the US Capitol and the role President Trump played in stirring up the violence underlined that political leaders’ words can motivate others to take harmful actions.

Just as the events of January 6 remain shocking, it remains shocking that private platforms have exercised the power to curb the speech of a US president. It also remains shocking that the platforms sat back and took no action over the previous four years, but waited until the final days of the transition.

The board’s decision is an evolution in private-sector content moderation, with a diverse board giving a reasoned opinion on a Facebook decision. But to fully comply with the principles of open justice, board decisions should include more detail on the individuals who made the decision – at present, it appears all members of the board review the decision, but it is not clear which individuals were involved in its drafting, or that they were free from conflicts of interest. If the process is to gain respect as a truly independent oversight of the platform’s decisions, greater transparency over the identity of decision-makers will be needed.

Mark Zuckerberg expressed concern about Facebook becoming an arbiter of truth or free speech and, overall, the difficulty of having private companies managing the application of fundamental rights on their platforms has not been solved. Just because companies have the financial resources to do it, does not mean they necessarily should.

Yet no other international governance or arbitration system has emerged to handle the complexities of platform power over speech. In the context of that vacuum, the Oversight Board’s decision is a welcome step.





Building trust in trade deals – is human rights monitoring the answer?

Building trust in trade deals – is human rights monitoring the answer? 27 May 2021 — 4:00PM TO 5:15PM Online

Exploring the arguments in favour of more robust human rights monitoring systems and why effective monitoring mechanisms have proved so difficult to get up and running.


The recent signing of the EU-China Investment Agreement has reignited arguments about trade and human rights. While many trade agreements envisage human rights monitoring in some shape or form, the monitoring systems that have emerged so far are not especially coherent, systematic or impactful. 

Are the human rights commitments in trade agreements more than just window-dressing?  If so, what kind of monitoring is needed to ensure they are lived up to? 

At this panel event, which marks the launch of a new Chatham House research paper, participants explore the arguments in favour of more robust human rights monitoring systems and why effective monitoring mechanisms have proved so difficult to get up and running in this context. 

  • What factors are presently holding governments back, and where is innovation and investment most needed?
  • What are the political, economic and structural conditions for fair and effective human rights monitoring of trade agreements? 
  • Is human rights monitoring best done unilaterally – or should more effort be put into developing joint approaches? 
  • What role might human rights monitoring have to play in governments’ strategies to ‘build back better’ from the COVID-19 pandemic?





Digital governance must not marginalize smaller states

Digital governance must not marginalize smaller states Expert comment LToremark 19 May 2021

For effective and inclusive digital governance, multi-stakeholderism must raise its game.

Last month, the G7 announced it is to work towards a trusted, values-driven digital ecosystem. While this is commendable, the G7 must recognize that key international digital governance decisions should involve all states whose populations will be affected. Not doing so is to deny the legitimate interests of those populations and may cause a lack of trust in international digital governance that embeds longer-term instability.

While a multi-stakeholder approach to digital governance is important, it must be structured in a way that allows for meaningful representation of states’ interests and ensures their representatives have the opportunity and capacity to take part. As the internet becomes fundamental to life in every country of the world, international digital governance is increasingly important to all governments and excluding some states’ perspectives may engender wider risks to international security and governance.

The ‘glitter ball’ of digital governance


International digital governance is playing catch-up with the digital sphere it needs to govern. Its starting point is a ‘glitter ball’ of governance initiatives: a large number of complex facets with overlapping impacts – and an almost impenetrable core. Governance initiatives (see infographic) include governance of the internet itself and its uses, international cybersecurity, international human rights, data management, as well as the impact of digital developments in areas such as armed conflict, trade and health.

Many of the bodies involved – such as the Internet Governance Forum, the Internet Corporation for Assigned Names and Numbers (ICANN) and technical standards bodies – include a wide range of stakeholders, yet there is no one accessible, central body. Furthermore, certain key issues, such as the role and responsibilities of tech platforms, are barely touched upon by international governance mechanisms. There is also currently only a limited role for traditional UN multilateral decision-making, a process which builds in a role for smaller states.

The sheer number of forums involved, each with a different set of working methods and rules on participation, makes it difficult to fully grasp what digital governance looks like as a whole. The UN secretary-general’s High-level Panel on Digital Cooperation recognized the complexity of digital cooperation arrangements and the barriers to inclusion facing small and developing countries as well as under-represented groups. In response, the June 2020 UN Roadmap on Digital Cooperation accepts the need to streamline digital governance while ensuring marginalized voices are heard.


The UN is considering potential models for future governance, each of which would – reassuringly – involve multi-stakeholder participation, dedicated funds to boost participation, consolidation of discussions currently split between different forums and a minor coordinating role for the UN.  

Building in roles for smaller states

As the UN designs new digital governance architecture, it is particularly important to build in roles for small and medium states. Core constituencies affected by decisions should be at the centre and governments – as guardians of public interest – should have a key say in the decision-making process. The distrust generated by built-in power imbalances needs to be addressed, as does the dominance of voices from the Global North in bodies such as ICANN.   

There has been some progress in increasing participation. For example, the Freedom Online Coalition includes a number of developing countries and the 2020 Internet Governance Forum included input from 175 states.


However, participation is not only a matter of having a seat at the table. As discussed at the March 2021 UN Open-ended Working Group on ICTs in the context of international security, capacity-building is vital. The group’s conclusions include the suggested development of a global cyber capacity-building agenda with information sharing and norms guidance under the auspices of the UN. Representatives of small and medium states need a roadmap to understand in which forums they can defend and pursue their interests, and the financial help to do so if necessary.

Managing multi-stakeholder participation

A multi-stakeholder approach has been fundamental to digital governance from the start and has played a vital role in helping to secure the openness and universality of the internet. This approach is rightly seen as essential to effective governance because it introduces diverse expertise, allows the interests of all impacted sectors to be taken into account and helps ensure decisions are accepted by those affected.


However, as identified in a Chatham House report on inclusive global governance, multi-stakeholderism needs to raise its game. One of its downsides is that in the cacophony some important voices may not be heard because they lack resource or capacity to speak up. There is a perennial risk of debate and decision-making being captured by the wealthiest companies or the most powerful states. At present, small and medium states are under-represented in multi-stakeholder forums and it is important that those managing such forums seek to identify and include previously excluded voices.

Multi-stakeholderism should not come at the expense of efficiency: it need not mean huge, inefficient meetings or endless discussion, but nor should it mean that smaller, less well-funded voices go unheard. Instead, such processes should enable representation of appropriate interest groups, complemented by wider meetings (such as regional or sector-specific meetings) as needed. While inclusivity and transparency are key, synergies between regional and global forums can work well – for example, some countries have adopted national versions of the Internet Governance Forum – and so too can hybrid models such as the Freedom Online Coalition, which meets both as government members and for regular multi-stakeholder dialogue.

A multi-stakeholder approach should also not lose sight of the key role of states – and where mandated, sub-state entities – in making public policy decisions.

An important role for the UN

For 75 years, the UN has acted as a bulwark of international security and shared values, and a promoter of economic and social development. If misused, technology has the potential to undermine this bulwark, to facilitate conflict, erode rights and undermine development. The UN must encourage the harnessing of technology for society’s benefit, while leading a collective effort to guard against the risks through the retention and growth of a universal, open internet – particularly in the face of growing digital authoritarianism exacerbated by COVID-19.

The UN can also help protect against a commercial culture that threatens to trample fundamental freedoms of privacy and autonomy in its pursuit of wealth and to widen economic and social gulfs by leaving large swathes of the world behind. If the UN is to play this role effectively – and for the benefit of all its members – it requires the active participation of all states, large and small.





Monitoring of trade deals needs a risk-based approach

Monitoring of trade deals needs a risk-based approach Expert comment NCapeling 24 May 2021

On human rights issues, trading partners must do more than trust to luck.

The recent row within the UK government about the treatment of agricultural products in a proposed new trade deal with Australia provides a reminder that changes to trading arrangements can have social and environmental costs, as well as benefits.

Although the UK government clearly feels political pressure to demonstrate its ‘Global Britain’ credentials with some speedily concluded new deals, rushing ahead without a full understanding of the social, environmental, and human rights implications risks storing up problems for later. In the meantime, calls for better evaluation and monitoring of trade agreements against sustainability-related commitments and goals – ideally with statutory backing – will only get stronger.

EU experiences with these kinds of processes are instructive. For more than 20 years the Directorate General for Trade of the European Commission (DG Trade) has been commissioning sustainability impact assessments (SIAs) from independent consultants in support of trade negotiations, and since 2012 these assessments have explicitly encompassed human rights impacts as a core part of the analysis.


These processes have since been augmented with a programme of periodic ‘ex post’ evaluations of trade agreements to ‘analyse the observed economic, social, human rights, and environmental impacts’ of live trade deals and to make recommendations about any mitigation action which may be needed.

For credibility and objectivity, the Commission outsources much of its sustainability assessment and ex post evaluation activities to independent consultants, who are encouraged to innovate and tailor their approaches subject to broad methodological parameters laid down by the Commission. Over time, experiences with specific assessment and monitoring assignments have enabled external SIA practitioners – and the Commission itself – to progressively strengthen these processes and underlying methodologies.

Yet despite these improvements, legitimate questions remain about whether the human rights aspects of these SIA processes – and subsequent evaluations – are having real policy impact. The difficulty of predicting the human rights impacts of trade agreements in advance – as the COVID-19 crisis amply demonstrates – suggests a need for realism about the extent to which a ‘one off’ process, often carried out at a time when there is only ‘agreement in principle’ as to future trading terms, can produce a robust roadmap for heading off future human rights-related risks.

Human rights impact assessments have a potentially valuable role to play in laying down the substantive and structural foundations for future human rights monitoring as part of a broader, iterative, human rights risk management strategy. But the fragmented manner in which many trade agreements approach human rights issues, and the fact that outcomes are the product of negotiation rather than necessarily design, make it difficult to turn this vision into reality.

Controversies surrounding the SIA process for the EU-Mercosur agreement illustrate why striving for more coherence in the identification and subsequent management of human rights-related risks is important. In June 2019, the Commission decided to wrap up negotiations with the South America Mercosur bloc, even though the SIA process for the proposed agreement was still incomplete and the interim and final SIA reports yet to be delivered. Frustrated NGOs made their feelings clear in the form of a formal complaint – and a slap on the wrist from the EU Ombudsman duly followed.


However, when it eventually appeared in December 2020, the final SIA report for the EU-Mercosur deal did include a number of interesting recommendations for responding to specific areas of human rights-related risk identified through the pre-signing assessment process – such as flanking measures designed to address issues pertaining to health, equality, and protection of indigenous peoples, and stressing the need for ‘continuous monitoring’.

Hopefully these recommendations will be proactively followed up, but there are reasons not to be overly optimistic about that. To the extent that these recommendations might have required, or benefitted from, some tweaks to the terms of the trade agreement itself, it was clearly too late. And while there may be opportunities for EU institutions to follow up the recommendations through unilateral ex post evaluation processes, current legal, policy, and institutional arrangements provide few guarantees this will take place.

The credibility of the EU SIA programme has clearly taken a knock because of the problems with the EU-Mercosur process, and stakeholders could be forgiven for questioning whether expending time and effort on engaging in these processes is actually worthwhile. As a first step towards rectifying this, the Commission should be transparent about how it plans to respond to the EU-Mercosur SIA recommendations regarding flanking measures and follow up – ideally consulting with stakeholders about the various human rights monitoring options available.

Looking further ahead, the Commission should be urging SIA practitioners to deal more expansively with the options for follow up human rights monitoring in future SIA reports, setting out recommendations not just on the need for ongoing monitoring of human rights-related issues but on the detail of how this might be done, and how progress towards human rights-related goals could be tracked. And creativity should be encouraged because, as detailed in a newly-published Chatham House research paper, there may be more opportunities for human rights monitoring than first appear.

The SIA process could also provide a forum for exploring complementary measures needed to make future monitoring efforts as effective as possible – jointly and unilaterally; politically, structurally, and resources-wise; both within the framework of the trading relationship and extraneously. The credibility of the process – and hence stakeholder trust – would be further enhanced by commitments from the Commission to be more transparent in future about how different human rights monitoring recommendations laid out in SIAs have been taken into account in subsequent negotiations, in the supervisory arrangements developed for specific trading relationships, and in the implementation of EU trade policy more generally.





Geopolitical shifts and evolving social challenges – what role for human rights?

Geopolitical shifts and evolving social challenges – what role for human rights? 29 June 2021 — 3:00PM TO 4:30PM Online

Speakers reflect on some of the key themes that will influence the future of human rights.


Shifts in geopolitical power and the rise of authoritarianism are disrupting the dynamics for making progress on human rights globally.

At the same time, the relevance of the global human rights framework is being called into question by some of our most acute social challenges – rapidly evolving technology, deepening inequality and the climate crisis.

Chatham House’s Human Rights Pathways project is exploring how alliances, strategies and institutions are adapting, and will need to evolve, to strengthen human rights protection in this increasingly contested and complex global environment.

At this panel event speakers reflect on some of the key themes that will influence the future of human rights, including the long-term impacts of the pandemic, the place of human rights diplomacy in the new geopolitics, the relevance of human rights to social movements, and the potential of human rights law to galvanise efforts on urgent challenges such as the climate crisis.





Influence of soft law grows in international governance

Influence of soft law grows in international governance Expert comment NCapeling 17 June 2021

Soft law is increasingly being used by policymakers to enable greater cooperation and inclusivity, and its role is here to stay in creating effective regimes.

As the UK government’s recent Integrated Review points out, international law-making in a fragmented international order is becoming increasingly difficult.

Geopolitical tensions, and the length of time required to agree multilateral treaties – typically decades – make it challenging to reach binding agreements in complex and fast-evolving policy areas such as climate change and technology governance.

As a result, the regulation of international behaviour through soft law – meaning non-binding instruments such as principles, codes of conduct or declarations – is starting to assume greater significance. And states increasingly find soft law-making attractive because there are relatively fewer decision costs involved.

Soft law also lays the ground for the possibility of transforming into hard law if, over time, its principles become widely accepted and it is evident states are treating them as legal obligations. And the emergence of a hybrid of both soft and hard law components in treaties has started to develop in recent years, such as the Paris Agreement on Climate Change.

Opening access to global governance

A major attraction of soft law-making is that it provides for non-traditional, non-state actors to take part in the process of global governance. Non-governmental organizations (NGOs), social movements, the corporate sector, and individuals are more easily drawn into soft law-making than into treaties, to which only states can be party.


This holds out the promise for greater inclusiveness in global rulemaking and governance, but soft law processes also pose many challenges. Soft law provides an avenue for states to avoid legal obligations on important subjects and developing rules in such an informal manner can lead to fragmentation and a lack of coherence in the international system.

As noted in dialogues held under Chatham House’s Inclusive Governance Initiative, some areas of international interaction require hard law, such as economic competition, certain international security issues, and aspects of the global commons. In these areas, soft law is just not appropriate or enough.

Soft law measures such as codes of conduct may be useful in rapidly developing areas such as technology, as they are more flexible and adaptable than hard law. And they may be particularly effective if used in conjunction with binding regulation, and subject to monitoring and enforcement by a regulator, as in recent proposals by the European Union (EU) for a Digital Services Act.

The Chatham House Inclusive Governance Initiative report highlights that the proliferation of soft law does not necessarily have to compete with the existing system of hard law, so long as soft law solutions do not conflict with, or undermine, hard law such as existing treaty provisions.

Case study: Business and human rights

The UN Guiding Principles on Business and Human Rights (UNGPs) are an interesting example of both the promise of soft law-making, and its challenges. Officially adopted by the UN General Assembly in 2011, the UNGPs set out the global standard of what is expected of companies as regards human rights due diligence (HRDD) to prevent and address business-related human rights harms.

The sections on HRDD in the UNGPs have been constructed as a non-binding ‘social’ standard of conduct, though with the expectation that this would eventually be reinforced through a “smart mix” of both soft law and hard law initiatives. Arguments in favour of the predominantly soft law approach at the time – subsequently borne out in practice – were that this would encourage a higher level of participation, by states and businesses in particular, and better foster creativity and innovation in a still-developing field.

The UNGPs recognize and reinforce the importance of meaningful and inclusive stakeholder engagement for both the credibility and legitimacy of processes, and for the quality of substantive outcomes. The Ruggie process, which led to the UNGPs, drew extensively from a wide range of stakeholder engagement processes covering many different jurisdictions and all UN regional groupings. The importance of deep and inclusive stakeholder engagement is also recognized in the mandate of the UN Working Group on Business and Human Rights.

The annual UN Forum on Business and Human Rights is one of the largest and most vibrant multi-stakeholder events in the UN calendar. Now in its tenth year, the forum provides an opportunity for an annual review by stakeholders – government, business and civil society – of past achievements in implementing the UNGPs and knowledge sharing on ways to address more persistent, underlying challenges.


Its relatively informal approach to agenda setting has, year on year, enabled an increasingly diverse array of stakeholder-organized sessions, supporting a ‘bottom up’ approach which raises awareness of under-reported issues and undervalued solutions.

In addition, while the UNGPs provide the substantive framework for discussion, flexible governance arrangements allow for rapid reorientation to respond to present and emerging crises, such as the COVID-19 pandemic and climate change.

However, the sluggish responses of many companies, coupled with revulsion at reports of serious abuses in the value chains of many well-known brands, have prompted some governments to seek ways of translating some aspects of HRDD methodologies into binding legal standards. France passed a Corporate Duty of Vigilance Law in 2017 and Germany adopted a new law on supply chain due diligence in June 2021 which is to enter into effect on 1 January 2023. The European Commission is also working up proposals for an EU-wide regime to be unveiled in mid-2021.

Soft law versus hard law

At the international level, there are signs of divergence between those states which see value in persevering with the soft law route towards better regulation and corporate standards, and those which want to move as rapidly as possible to a hard law framework for business and human rights, enshrined in treaty, to improve domestic-level regulation and access to effective remedies.


Those supporting the hard law route – largely less industrialized states – received a boost in 2016 when the UN Human Rights Council mandated an Intergovernmental Working Group to explore options for a new treaty on business and human rights.

This initiative, known as the ‘treaty process’, has completed six rounds of negotiations. Despite the necessarily greater formality, these treaty negotiation sessions continue to emphasize the importance of stakeholder consultation. NGOs with ECOSOC status are invited to contribute views on the framing and content of draft treaty provisions immediately following the interventions by states, intergovernmental organizations and national human rights institutions, in that order.

The key question is whether this dynamism and inclusivity can be preserved as the transition is made from soft law to more binding approaches. Translating soft law standards into binding regimes inevitably means making hard choices, and different stakeholder groups have different views as to where legal lines should be drawn, how key concepts should be defined, and where the balance between legal certainty and flexibility should be struck.

The negotiations needed to strike an effective balance between competing objectives and needs can be challenging and time-consuming, as experiences with the treaty process have shown. But stakeholder demand for inclusive processes to help shape the law remains strong. Stakeholder groups clearly want a say in how the new EU-wide regime for ‘mandatory human rights due diligence’ will work in practice. A recent online ‘stakeholder survey’ garnered more than 400,000 responses.

Ultimately, the most effective domestic regimes are likely to be a mix of hard law standards supported by more flexible standards and guidance. Civil society organizations and trade unions will continue to have a multi-faceted role to play. Not only are they vital sources of expertise on human rights challenges connected to business activities, at home and abroad, they can also act as private enforcers of standards and advocates for affected people and communities.





Why the next generation is key to protecting human rights

Why the next generation is key to protecting human rights Expert comment LToremark 23 June 2021

Strengthening youth participation in public affairs is essential to building inclusive and democratic societies that respect human rights.

Young people have always been drivers of social and economic reform, and today’s global youth population is more numerous and interconnected than ever before. While they have been at the forefront of civic rights movements in recent years, young people are largely excluded from discussions around human rights norms and how to monitor their protection and defence.


Young people are consistently underrepresented in intergovernmental mechanisms and national dialogues, which not only squanders their potential to contribute to effective solutions but also risks disengagement and disillusionment with multilateralism more broadly, at a time when many are already warning of the fraying of the international liberal order. Although there are actors and initiatives working to lift barriers to youth participation in governance – such as the UN Secretary-General’s Envoy on Youth, Jayathma Wickramanayake, or the UN 2016 Not Too Young To Run campaign – these efforts tend to fall short in effecting real change and rarely translate into institutionalized procedures.

While ‘the youth’ is a heterogeneous group, comprising different ages, ethnicities, national identities and interests, their participation in realizing human rights is essential to addressing the current challenges and possibilities of human rights for future generations. This will help foster more effective solutions to rights-related challenges, rebuild trust in the international human rights framework among younger demographics and broaden and deepen commitments to human rights across generations.

Human rights policies and the online environment

Young people tend to be more technologically literate than their predecessors and also represent the majority of internet users and social media consumers in many countries. They can therefore play a key role in innovating and imagining rights-based solutions to emerging problems for the human rights framework, such as illegitimate collection of data by governments and companies, microtargeting by online platforms, and the sharing of harmful content online. In many cases, international human rights practices have failed to keep pace with these changes and the challenges they bring.

Younger demographics may also approach these novel human rights issues from different starting points. For example, a UK study found that 30 per cent of 18-24 year-olds were ‘unconcerned’ about data privacy compared with only 12 per cent of those aged 55-64, and it has been shown that younger people tend to be more discerning of fake news compared to older generations. There may be a need for human rights institutions and practitioners to acknowledge and bridge these gaps in perspective and understanding to ensure long-term support for proposed solutions.

International cooperation for human rights protection

It has been suggested that young people have reaped the benefits of previous human rights-based policy reforms and have a strong sense of what rights they are entitled to and why these need to be protected through an international framework. Young people are also generally more supportive of multilateralism compared to their older counterparts, as demonstrated by a 2020 survey by Pew Research Center on global attitudes, which showed that 72 per cent of respondents aged 18-29 stated they have a favourable view of the UN, compared with 58 per cent of respondents aged 50 and older.

At a recent Chatham House workshop, young participants from countries as diverse as Lebanon, Kenya and the United States expressed concern that growing hostility towards globalization threatens to undo progress in human rights standards and multilateralism more broadly, progress that they have seen and benefitted from. The rise of nationalist and populist parties has also seen countries shift their attention inwards, as evidenced by former president Trump’s decision to withdraw the US from the Paris Agreement on climate change, and threats by Brazil’s president, Jair Bolsonaro, to follow suit.

Engaging more actively with younger individuals on global human rights reform will help ensure the long-term relevance of multilateral cooperation as well as domestic buy-in of human rights commitments.

Awareness of the interconnectivity of global problems

Young people’s proficiency on online platforms has enabled greater coordination and knowledge sharing without geographical constraints, allowing young activists – like Greta Thunberg – to inspire global movements and foster online discussions about intersectional solutions to modern-day challenges.

This intersectional and transnational lens will be a vital component of building solutions to politically or historically complex issues and can be leveraged to foster better understanding of competing human rights claims relating to issues such as land re-distribution in South Africa or limitations on freedom of movement during the COVID-19 pandemic. These democratic forums and platforms will ultimately help build a global community committed to and engaged with human rights.


Capturing the next generation’s potential

With these concerns and areas of potential in mind, how can human rights institutions and mechanisms create more meaningful avenues for youth input? 

Recent Chatham House research has suggested that multilateral institutions’ efforts to engage youth have often taken the form of ‘superficial listening’, for example inviting a high-profile youth actor to a one-off event or appointing youth delegates who are not able to participate in formal discussions or mainstream governance forums. While encouraging youth participation in meetings focused on human rights can lead to positive change, tokenism can discourage future engagement and dilute the effectiveness of the forums in question.

Capitalizing on the potential of the next generation can be achieved through integrating youth councils and advisers into national and international human rights policy processes, as well as human rights institutions. A few replicable models are already operational, such as the Y7 and the Y20 delegations – the official youth engagement groups for the G7 and G20 – that advance evidence-based proposals to world leaders ahead of the G7 and G20 summits.

At the domestic level, grassroots youth-led movements can help bridge the gap between local constituencies and international policymakers, with youth activists on the ground helping to implement human rights standards and fighting against the spread of misinformation. Strong local networks and civic spaces are essential for pushing back against human rights abuses, and youth activists should be mobilized to connect the efforts of domestic and international bodies to the real issues on the ground; for example, canvassing grassroots youth networks on domestic and traditional customs before implementing development agendas around women’s rights.

As well as providing insertion points for youth policy actors, human rights institutions must communicate their goals more effectively to younger generations and promote intergenerational and inclusive dialogue, for example by holding virtual consultations that give access to individuals from different backgrounds. Similarly, they should ask young people about their priorities for human rights reform using regular and accessible surveys or by sharing information on online platforms regularly used by this demographic. This will ensure lasting buy-in from the next generation, essential for the relevance and sustainability of the human rights framework in the years to come.

This piece draws upon insights gathered at a workshop hosted by Chatham House in March 2021, which brought together the Institute’s networks of next generation groups including representatives of the QEII Academy Ambassadors, the Panel of Young Advisers, and the Common Futures Conversations community, as well as young members from the South African Institute of International Affairs.





Undercurrents: The Oversight Board's Trump decision, and Merkel's legacy

Undercurrents: The Oversight Board's Trump decision, and Merkel's legacy Audio bhorton.drupal 25 June 2021

Was Facebook right to suspend Trump? And how will Merkel be remembered?

In the wake of the storming of Capitol Hill on 6 January 2021, social media platforms took steps to remove former President Donald Trump from their websites for infringing community standards. This step was welcomed by many, but also raised serious questions about the power of social media companies to limit free speech and censor elected officials. The suspension of President Trump from Facebook was referred to the Oversight Board, an independent body of experts set up to scrutinise the platform’s content moderation decisions.  

In this episode, Ben speaks to Thomas Hughes and Kate Jones about the outcome of the Oversight Board’s inquiry into the Trump suspension, and the wider implications for content moderation on social media.  

Then Lara is joined by Hans Kundnani to assess the political outlook in Germany and reflect on the legacy of outgoing Chancellor Angela Merkel.  





How can governance be more inclusive?

How can governance be more inclusive? Explainer Video NCapeling 28 June 2021

Short animation exploring how global governance can be reshaped to meet the challenges of today’s world.

The COVID-19 pandemic has illustrated the urgent need for change in the structures and mechanisms of international cooperation.

This animation supports the release of a major synthesis paper as part of the Inclusive Governance Initiative, which was launched in 2020 to mark Chatham House’s centenary.

Read the synthesis paper Reflections on building more inclusive global governance.





Strengthening Transatlantic Digital Cooperation

Strengthening Transatlantic Digital Cooperation

This project explores opportunities for increased cooperation via the transatlantic ‘tech triangle’ of the European Union, United Kingdom and United States.

jon.wallace 2 July 2021

This project serves as a cross-house initiative (involving the US and Americas Programme, the Europe Programme, the International Law Programme, the Digital Society Initiative and the International Security Programme).

Its long-term goal is to support the emergence of a global vision for technology governance: a vision drawing on democratic values and human rights principles. The project aims to extend the application of these principles to the digital space.

The first phase centres on a knowledge-exchange series, with findings and recommendations disseminated around targeted multilateral events such as the G7, the United Nations General Assembly and the 2021 Internet Governance Forum.

Building on this exchange, the second phase will shift its focus to other democratic states, broadening the digital cooperation dialogues beyond like-minded countries in the OECD to include non-Western democracies and under-represented stakeholders from developing countries.

 





New UK bill can fight fresh wave of online racist abuse

New UK bill can fight fresh wave of online racist abuse Expert comment NCapeling 21 July 2021

The Euros final and Grand Prix put online abuse once more in the spotlight. The UK’s Online Safety Bill provides a strong framework for tackling the problem.

The ugly online abuse targeted at members of the England football team following the Euros final, and then at Lewis Hamilton after the British Grand Prix, was not only hateful to the individuals concerned, but divisive for the UK more broadly.

More needs to be done to regulate online platforms to avoid the spread of such abuse at scale. Online platforms are making increasing efforts to ‘self-regulate’ in order to tackle online abuse. Over the past year, Facebook and Twitter have strengthened their policies on hateful speech and conduct, such as Facebook’s policy banning Holocaust denial. Both have become more vigilant at deplatforming those who violate their terms of service, such as Donald Trump, and at removing online abuse using a combination of machines and humans.

Twitter announced in the 24 hours following the Euros final that it had removed more than 1,000 tweets, and permanently suspended several accounts, for violating its rules. But inevitably not all abusive posts are picked up given the scale of the issue and, once the post has been seen, arguably the damage is done.

Platforms have also partnered with NGOs on initiatives to counter hate speech and have launched initiatives to tackle the rise in coordinated inauthentic behaviour and information operations that seek to sow distrust and division. But while these efforts are all laudable, they are not enough.


The root of the problem is not the content but a business model in which platforms’ revenue from advertising is directly linked to engagement. This encourages the use of ‘recommender’ algorithms which amplify divisive content by microtargeting users based on previous behaviour, as seen not just with racist abuse but also other toxic content such as anti-vaccination campaigns. Abusers can also remain anonymous, giving them protection from consequences.
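To illustrate the dynamic described above, the sketch below shows how a ranker optimized purely for predicted engagement surfaces the most provocative content, and how a harm-aware penalty changes the ordering. Everything here – the post names, scores, penalty weight and classifier outputs – is a hypothetical illustration, not a description of any platform’s actual recommender system.

```python
# Hypothetical illustration of engagement-driven ranking: post names, scores and
# the penalty weight are invented; this is not any platform's actual algorithm.
from typing import NamedTuple


class Post(NamedTuple):
    post_id: str
    predicted_engagement: float  # model-predicted clicks, replies and shares
    toxicity_score: float        # 0 (benign) to 1 (highly abusive), from a classifier


def rank_by_engagement(posts: list[Post]) -> list[Post]:
    """The business-model default: rank purely on predicted engagement."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)


def rank_with_harm_penalty(posts: list[Post], penalty: float = 10.0) -> list[Post]:
    """One possible mitigation: demote posts in proportion to predicted harm."""
    return sorted(posts,
                  key=lambda p: p.predicted_engagement - penalty * p.toxicity_score,
                  reverse=True)


feed = [
    Post("measured-analysis", predicted_engagement=2.1, toxicity_score=0.02),
    Post("abusive-pile-on", predicted_engagement=8.7, toxicity_score=0.95),
]
print([p.post_id for p in rank_by_engagement(feed)])      # abusive post surfaces first
print([p.post_id for p in rank_with_harm_penalty(feed)])  # abusive post is demoted
```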

Creating a legal duty of care

The UK government’s Online Safety Bill, published in May 2021, aims to tackle harmful content online by placing a duty of care on online platforms to keep users safe and imposing obligations tailored to the size, functionality, and features of the service.

Social media companies will be expected to comply with their duties by carrying out risk assessments for specified categories of harm, guided by codes of practice published by the independent regulator, OFCOM. The bill gives OFCOM the power to fine platforms up to £18 million or ten per cent of global turnover, whichever is higher, for failure to comply.
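As a quick arithmetic illustration of that sanction ceiling – the greater of £18 million or ten per cent of global annual turnover – the short sketch below computes the maximum fine for two invented turnover figures. It is not legal guidance, and the turnover numbers are hypothetical.

```python
# Hedged arithmetic sketch: the GBP 18m flat cap and 10% share reflect the draft
# bill's formula as described above; the turnover figures are invented.
def max_fine_gbp(global_turnover_gbp: float,
                 flat_cap_gbp: float = 18_000_000,
                 turnover_share: float = 0.10) -> float:
    """Maximum fine: the higher of the flat cap or a share of global turnover."""
    return max(flat_cap_gbp, turnover_share * global_turnover_gbp)


print(max_fine_gbp(50_000_000))      # smaller platform: flat cap applies -> 18000000
print(max_fine_gbp(85_000_000_000))  # large platform: 10% of turnover -> 8500000000.0
```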

Following the Euros final, the UK government spoke of referring some racist messages and conduct online to the police. But only a small proportion of the abuse can be prosecuted, given its scale and the fact that only a minority of it constitutes criminal activity. The majority is ‘lawful but harmful’ content – toxic and dangerous but not technically falling foul of any law.

When addressing ‘lawful but harmful’ material, it is crucial that regulation negotiates the tension between tackling the abuse and preserving freedom of expression. The scale at which such expression can spread online is key here – freedom of speech should not automatically mean freedom of reach. But it is equally important that regulation does not have a chilling effect on free speech, as with the creeping digital authoritarianism in much of the world.


The Online Safety Bill’s co-regulatory approach aims to address these tensions by requiring platforms within the scope of the bill to specify in their terms and conditions how they deal with content on their services that is legal but harmful to adults, and by giving the regulator powers to police how platforms enforce them. Platforms such as Facebook and Twitter may already have strong policies on hate speech – now there will be a regulator to hold them to account.

Devil is in the detail

How successful OFCOM is in doing so will depend on the precise powers the bill bestows on it, and on how OFCOM chooses to use them. It is still early days: the bill will be scrutinized this autumn by a committee of MPs before being introduced to parliament. This committee stage will provide an opportunity to consider how the bill may need to evolve to get to grips with online abuse.

These latest two divisive and toxic episodes in UK sport are only likely to increase pressure from the public, parliament, and politicians for the bill to reserve robust powers for OFCOM in this area. If companies do not get better at dealing with online abuse, OFCOM should have the power to force platforms to take more robust action, including by auditing platforms’ algorithms to establish the extent to which their ‘recommender’ settings play a part in spreading hateful content.

Currently, the bill’s definition of harm is confined to harm to individuals, and the government has stated it does not intend this bill to tackle harm to society more broadly. But if racist abuse of individuals provokes racist attacks more widely, as has happened, the regulator should be able to take that wider context into account in its investigation and response.

Responses to the draft bill so far indicate challenges ahead. Some argue the bill does not go far enough to tackle online abuse, especially on the issue of users’ anonymity, while others fear the bill goes too far in stifling freedom of expression, labelling it a recipe for censorship.

Parliamentary scrutiny will need to take into account issues of identity, trust, and authenticity in social networks. While some call for a ban on the cloak of anonymity behind which racist abusers can hide online, anonymity does have benefits for those in vulnerable groups trying to expose hate.

An alternative approach gaining attention is to give each citizen a secure digital identity, which would both provide users with greater control over what they see online and enable social media platforms to verify specific accounts. Instituted with appropriate privacy and security safeguards, a secure digital ID would have benefits beyond social media, particularly in a COVID-19 era in which so much of daily life has moved online.

The online public square is global, so other countries and international organizations must also take measures. It is encouraging to see synergies between the UK’s Online Safety Bill and the EU’s Digital Services Act, published in draft form in December 2020, which also adopts a risk-based, co-regulatory approach to tackling harmful online content. And the UK is using its G7 presidency to work with allies towards a more coherent approach to internet regulation at the international level, at least among democratic states.

Addressing the scourge of online hate speech is challenging so the UK’s Online Safety Bill will not satisfy everyone. But it can give the public, parliament, and politicians a structure to debate these crucial issues and, ultimately, achieve more effective ways of tackling them.





Counter-terrorism measures and sanctions: How to avoid negative consequences for humanitarian action?

Counter-terrorism measures and sanctions: How to avoid negative consequences for humanitarian action? 9 September 2021 — 2:00PM TO 3:30PM Online

Exploring current endeavours to address the tensions between counter-terrorism measures, sanctions and humanitarian action.

Counter-terrorism measures address broad forms of support to terrorist acts. Their expansion, internationally and domestically, has given rise to new points of friction with international humanitarian law. Unless such measures include adequate safeguards, they can impede humanitarian action. Country-specific sanctions imposed for other objectives, such as ending conflicts or protecting civilians, raise similar challenges for humanitarian action.

These problems are not new, but solutions at the international and national levels remain elusive.

At this panel event, which marks the launch of a new Chatham House research paper, panellists explore current endeavours to address the tensions between counter-terrorism measures, sanctions and humanitarian action.

  • What are the current dynamics and developments at Security Council level?  
  • What are the opportunities now that the UK is developing its independent sanctions strategy? 
  • What challenges do counter-terrorism requirements in funding agreements for humanitarian action pose?
  • What is necessary to make progress?