The Committee to Protect Journalists named winner of the Chatham House Prize 2018

News Release | 5 October 2018

The Committee to Protect Journalists (CPJ) has been voted the winner of this year’s Chatham House Prize.




Chatham House awarded major centenary grant to establish Stavros Niarchos Foundation Wing

News Release | 16 April 2019

Chatham House has been awarded a transformational £10m grant ahead of its upcoming 2020 centenary.




Chatham House appoints Rob Yates as the new head of the Centre on Global Health Security

News Release | 27 June 2019

Chatham House is pleased to announce that Rob Yates has been appointed as head of the Centre on Global Health Security.




Chatham House Commission on Democracy and Technology in Europe

News Release | 25 July 2019

Our project on Democracy and Technology in Europe is now entering its final phase. Now we want your help in shaping the final report.




Creon Butler appointed to lead Global Economy and Finance Programme

News Release | 22 October 2019

Creon Butler has been appointed to lead the Global Economy and Finance programme at Chatham House, joining the institute at the beginning of December. He will also form part of the institute’s senior leadership team.




Sir David Attenborough and the BBC Studios Natural History Unit awarded Chatham House Prize 2019 for ocean advocacy

News Release | 18 November 2019

The 2019 Chatham House Prize is awarded to Sir David Attenborough and Julian Hector, head of BBC Studios Natural History Unit, for the galvanizing impact of the Blue Planet II series on tackling ocean plastic pollution.




Remembering Rosemary Hollis (1952-2020)

News Release | 12 June 2020

Professor Rosemary Hollis, a highly respected authority on the Middle East, died suddenly last week. Rosy is remembered with great respect and affection, as a colleague and a friend.




Design In An Age of Crisis - Open Call

News Release | 21 July 2020

Chatham House and London Design Biennale announce full details of 'Design In An Age of Crisis,' a global Open Call for radical design thinking.




Chatham House Prize: Malawi Judges Win for Election Work

News Release | 23 October 2020

Malawi’s constitutional court judges have won the 2020 Chatham House Prize in recognition of their 'courage and independence in the defence of democracy'.




Centenary Summer School Draws Over 500 Students

News Release | 4 December 2020

Our inaugural summer school took place in July, drawing 547 students from countries including Indonesia, the United States, Nigeria, India and Sri Lanka.




Lord Hammond Joins Panel of Senior Advisers

News Release | 10 December 2020

Chatham House is pleased to announce that Lord Hammond of Runnymede is joining our Panel of Senior Advisers.




Design in an Age of Crisis Launches

News Release | 13 January 2021

Design open call receives 500 submissions from over 50 countries across six continents.




Supporting Next Generation of Leaders in Sustainability

News Release | 28 January 2021

A new programme offering paid internships for young people who are passionate about social, economic, and environmental sustainability has been launched.




Supporting Civic Space: The Role and Impact of the Private Sector

23 September 2020 — 2:00PM to 4:15PM | Online

The meeting provides an opportunity to explore the drivers of – and barriers to – corporate activism.

A healthy civic space is vital for an enabling business environment. In recognition of this, a growing number of private sector actors are challenging, publicly or otherwise, the deteriorating environment for civic freedoms.

However, this corporate activism is often limited and largely ad hoc. It remains confined to a small cluster of multinationals, leaving potential routes for effective coordination and collaboration with other actors underexplored.

This roundtable brings together a diverse and international group of business actors, civil society actors and foreign policy experts to exchange perspectives and experiences on how the private sector can be involved in issues around civic space.

The meeting provides an opportunity to explore the drivers of – and barriers to – corporate activism, develop a better understanding of existing initiatives, identify good practice and discuss practical strategies for the business community.

This meeting is the first of a series of roundtables at Chatham House in support of initiatives to build broad alliances for the protection of civic space. 




Supporting Civic Space: The Role and Impact of the Tech Sector

13 October 2020 — 2:00PM to 4:15PM | Online

This event brings together a diverse and international group of stakeholders to exchange perspectives and experiences on the role that tech actors can play in supporting civic space.

In a deteriorating environment for civic freedoms, tech sector actors are increasingly engaging, publicly or otherwise, on issues of civic space.

In the US, for example, a number of tech companies have cancelled contracts with the Pentagon and stopped censoring search results in China as a result of protests by employees. The Asia Internet Coalition recently wrote to Pakistan’s Prime Minister expressing human rights concerns about new rules regulating social media.

Technology companies have recently shown support for social movements, including through substantial pledges, but in some cases these gestures have elicited accusations of hypocrisy, and the interventions of social media platforms on freedom of expression and privacy issues have been closely linked to the preservation of their own business models.

The COVID-19 crisis has also posed new dilemmas for the tech sector with the pervasiveness of disinformation, as well as new tools for tracking individuals which raise privacy issues.

This roundtable provides an opportunity to explore the drivers of (and barriers to) corporate activism, develop a better understanding of existing initiatives, identify good practice and routes to effective collaboration with other actors, and discuss practical strategies that could be adopted by the tech community.

It is the second of a series of roundtables at Chatham House in support of initiatives to build broad alliances for the protection of civic space.




Deplatforming Trump puts big tech under fresh scrutiny

Expert comment | 22 January 2021

The response of digital platforms to the US Capitol riots raises questions about online content governance. The EU and UK are starting to come up with answers.

The ‘deplatforming’ of Donald Trump – including Twitter’s announcement that it has permanently banned him due to ‘the risk of further incitement of violence’ after the riots in the US – shows once more not only the sheer power of online platforms but also the lack of a coherent and consistent framework for online content governance.

Taking the megaphone away from Trump during the Capitol riots seems sensible, but was it necessary or proportionate to ban him from the platform permanently? Or consistent with the treatment of other ‘strongmen’ world leaders such as Modi, Duterte and Ayatollah Ali Khamenei who have overseen nationalistic violence but whose accounts remain intact?

Such complex decisions on online expression should not be made unilaterally by powerful and unregulated tech actors, but should instead be subject to democratic oversight and grounded in the obligations of states and the responsibilities of companies under international human rights law.

Since the emergence of mass social media 15 years ago, the speed and scale of digital information have left governments across the world struggling with how to tackle online harms such as hate speech, extremist content and disinformation.

The US’s hallowed approach to the First Amendment, under which speech on public issues – even hate speech – occupies the highest rank and is entitled to special protection, has contributed to a reluctance to regulate Silicon Valley’s digital platforms. But the irony is that by not regulating them, the government harmed freedom of expression by leaving complex speech decisions in the hands of private actors.

Meanwhile at the other extreme is the growing number of illiberal and authoritarian governments using a combination of vague laws, censorship, propaganda, and internet blackouts to severely restrict online freedom of expression, control the narrative and, in some cases, incite atrocities.

Regulation is on the way

The happy medium – flexible online content regulation providing clarity, predictability, transparency, and accountability – has until now been elusive. But even before the deplatforming of Trump, 2021 was set to be the year when this approach finally gained some traction, at least in Europe.

The EU’s recently-published draft Digital Services Act puts obligations on dominant social media platforms to manage ‘systemic risks’, for example through requirements for greater transparency about their content decisions, algorithms used for recommendations, and online advertising systems.

The UK will shortly publish its Online Safety Bill, which will establish a new regulatory framework for tackling online harms, including the imposition of a duty of care and codes of conduct on Big Tech, to be overseen by an independent regulator (Ofcom).

Both proposals are based on a ‘co-regulatory’ model, under which the regulator sets out a framework, the private sector substantiates it with rules, and the regulator then monitors compliance with those rules.

Both also draw on international human rights standards and the work of civil society in applying these standards in relation to the online public square, with the aim of increasing control for users over what they see online, requiring transparency about tech companies’ policies in a number of areas, and strengthening the accountability of platforms when they fall foul of the regulation.

The procedure for both proposals has also been inclusive, involving extensive multi-stakeholder consultations with civil society organizations and Big Tech, and the proposals will be subject to scrutiny in 2021, notably from the EU and UK parliaments.

Both proposals are at an early stage, and it remains to be seen whether they go far enough – or indeed will have a chilling effect on online platforms. But as an attempt to initiate a dialogue on globally coherent principles, they are positive first steps. They also provide food for thought for the new Joe Biden administration in the US as it turns its attention to the regulation of Big Tech.

For some time, civil society – most prominently David Kaye, the former UN Special Rapporteur on freedom of opinion and expression – has called for content regulation to be informed by universal international human rights law standards.

The EU and UK are peculiarly well-placed to take the lead in this area because European countries have for decades been on the receiving end of judgments from the European Court of Human Rights on the appropriate limits to freedom of expression in cases brought under the European Convention on Human Rights.

In deciding these cases, the court has to balance the right to freedom of expression against the restrictions imposed – for example in the context of incitement to violence, political debate, and satire. Deciding where to draw the line on what can and cannot be expressed in a civilised society which prizes freedom of expression is inevitably a difficult exercise.

International human rights law provides a methodology that asks whether an interference with freedom of expression was prescribed by law and pursued a legitimate aim, and whether it was necessary in a democratic society to achieve that aim – including whether the interference was necessary and proportionate (as, for example, in Delfi AS v Estonia, which involved a news portal failing to take down unlawful hate speech).

To be effective, online content regulation has to bite on tech companies, which is a challenge given the internet is global but domestic law normally applies territorially. The EU’s proposals have an extraterritorial element as they apply to any online platforms providing services in the EU regardless of where the platform is headquartered.

Further, both the EU and UK want to give the regulator strong enforcement powers – it is proposed for example that Ofcom will have powers to fine platforms up to ten per cent of their turnover for breaches.

Although the proposals would not apply directly to the deplatforming of Trump which occurred in the US, the philosophy behind the EU and UK approach is likely to have an impact beyond European shores in promoting a co-regulatory model that some of the bigger tech companies have been inviting for some time, reluctant as they are to ‘play God’ on content moderation decisions without reference to any regulatory framework.

In the absence of regulation, the standards of tech platforms such as Facebook and Twitter have already evolved over time in response to pressure from civil rights groups, users, and advertisers, including updated policies on protecting civic conversation and hate speech.

Facebook has also set up an independent Oversight Board, whose members include leading human rights lawyers, to review decisions on content including – at its own request – the decision to indefinitely suspend Trump from Facebook and Instagram. Decisions on the Board’s first tranche of cases are expected imminently.

Gatekeeper status is key

Online content regulation also needs to address the role of Big Tech as the ‘digital gatekeepers’, because their monopoly power extends not just to editorial control of the news and information we consume, but also to market access.

The decision of Apple, Google, and Amazon to stop hosting right-wing social network Parler after it refused to combat calls for violence during the US Capitol riots was understandable in the circumstances, but also underlined the unilateral ability of Big Tech to decide the rules of the market.

Again, it is Europe where efforts are underway to tackle this issue: the EU’s draft Digital Market Act imposes obligations on online gatekeepers to avoid certain unfair practices, and the UK’s new Digital Markets Unit will have powers to write and enforce a new code of practice on those technology companies with ‘substantial and enduring’ market power.

In the US, Biden’s team will be following these developments with interest, given the growing bipartisan support for strengthening US antitrust rules and reviving antitrust enforcement. The EU’s recently published proposals for an EU-US tech agenda include a transatlantic dialogue on the responsibility of tech platforms and strengthened cooperation between antitrust authorities on digital markets.

Ultimately a consistent – and global – approach to online content is needed instead of fragmented approaches by different companies and governments. It is also important that the framework is flexible, so that it can apply not only to major democracies but also to countries where sweeping state regulation has too often been used as a pretext to curtail online expression.

The pursuit of a pluralistic framework tailored to different political and cultural contexts is challenging, and international human rights law cannot provide all the answers but, as a universal framework, it is a good place to start. The raft of regulatory measures from the EU and UK means that, regardless of whether Trump regains his online megaphone, 2021 is set to be a year of reckoning for Big Tech.




The UK's new Online Safety Bill

10 February 2021 — 3:00PM to 3:45PM | Online

Discussing the new proposals which include the establishment of a new ‘duty of care’ on companies to ensure they have robust systems in place to keep their users safe.

Governments, regulators and tech companies are currently grappling with the challenge of how to promote an open and vibrant internet at the same time as tackling harmful activity online, including the spread of hateful content, terrorist propaganda, and the conduct of cyberbullying, child sexual exploitation and abuse.

The UK government’s Online Harms proposals include the establishment of a new ‘duty of care’ on companies to ensure they have robust systems in place to keep their users safe. Compliance with this new duty will be overseen by an independent regulator.

On 15 December 2020, DCMS and the Home Office published the full UK government response, setting out the intended policy positions for the regulatory framework, and confirming Ofcom as the regulator.

With the legislation likely to be introduced early this year, the panel will discuss questions including:

  • How to strike the balance between freedom of expression and protecting adults from harmful material?

  • How to ensure the legislation’s approach to harm is sufficiently future-proofed so new trends and harms are covered as they emerge?

  • What additional responsibilities will tech companies have under the new regulation?

  • Will the regulator have sufficient powers to tackle the wide range of harms in question?

This event is invite-only for participants, but you can watch the livestream of the discussion on this page at 15.00 GMT on Wednesday 10 February.




Implications of post-COVID-19 Restructuring of Supply Chains for Global Investment Governance

14 July 2020 — 9:00AM to 10:30AM | Online

As companies rethink and diversify their supply chains in order to enhance resilience, what will this mean for current and future global investment governance?

What are the risks of negative effects on inclusivity and transparency? Does this shift create an opportunity to advance good governance of cross-border investment practices?

This event is part of the Inclusive Governance Initiative, which is examining how to build more inclusive models and mechanisms of global governance fit for purpose in today’s world.




The Future of Investment Dispute Settlement Regimes

30 June 2020 — 2:00PM to 3:30PM | Online

This event is part of the Inclusive Governance Initiative, which is examining how to build more inclusive models and mechanisms of global governance fit for purpose in today’s world.

Is an ‘atomized’ approach to cross-border investment dispute resolution inevitable? Has the multiplicity of mechanisms helped or hindered inclusivity and transparency in governance? Is there a need, and scope, to increase the international coordination of dispute resolution mechanisms? If so, what form should it take? What could be the implications for international economic law?




Insights from Climate Policy: Engaging Subnational Governments in Global Platforms

10 June 2020 — 2:45PM to 6:00PM | Online

How have subnational governments shaped the global agenda and created momentum on climate change where national and international governance processes could not?

Can these advances be converted into meaningful collaboration channels for policy development? What works, or does not, when it comes to engagement with multilateral negotiation processes? What ingredients are necessary for success? What are the broader implications of these trends for inclusivity and innovation in international governance?

This event is part of the Inclusive Governance Initiative, which is examining how to build more inclusive models and mechanisms of global governance fit for purpose in today’s world.




Innovating Governance: Examples from the Digital Arena

25–26 February 2020 — 10:00AM to 11:30AM | Chatham House

The Inclusive Governance Initiative, a centenary project which is examining how to build more inclusive models and mechanisms of global governance fit for purpose in today’s world, is launched with this roundtable on digital governance.

The event brings together a diverse and multidisciplinary group of leading experts to consider where and how early initiatives around governance of the digital sphere have succeeded – or not – and how they are evolving today.

The conversation will include the debate between multilateral and multi-stakeholder approaches, the opportunities and challenges of collective non-binding commitments, and converting civil society collaboration into policy contribution.




The Implication of Greater Use of Investment Screening

26 June 2020 — 9:00AM to 10:30AM | Online

What is driving the trend towards greater use of investment screening by nation states and regional economic groupings?

  • How is the COVID-19 crisis affecting this trend?
  • What will the economic implications be?
  • Will this help or hinder inclusivity and transparency in investment governance?
  • Is there a role for international safeguards and/or international coordination of national/regional approaches to investment screening?

This event is part of the Inclusive Governance Initiative, which is examining how to build more inclusive models and mechanisms of global governance fit for purpose in today’s world.




Persuasion or manipulation? Limiting campaigning online

Expert comment | 15 February 2021

To tackle online disinformation and manipulation effectively, regulators must clarify the dividing line between legitimate and illegitimate campaign practices.

Democracy is at risk, not only from disinformation but from systemic manipulation of public debate online. Evidence shows social media drives control of narratives, polarization, and division on issues of politics and identity. We are now seeing regulators turn their attention to protecting democracy from disinformation and manipulation. But how should they distinguish between legitimate and illegitimate online information practices, between persuasive and manipulative campaigning?

Unregulated, the tactics of disinformation and manipulation have spread far and wide. They are no longer the preserve merely of disaffected individuals, hostile international actors, and authoritarian regimes. Facebook’s periodic reporting on coordinated inauthentic behaviour and Twitter’s on foreign information operations reveal that militaries, governments, and political campaigners in a wide range of countries, including parts of Europe and America, have engaged in manipulative or deceptive information campaigns.

For example, in September 2019, Twitter removed 259 accounts, which it found to be operated by Spain’s conservative and Christian-democratic political party Partido Popular, for ‘falsely boosting’ public sentiment online. In October 2020, Facebook removed accounts with around 400,000 followers linked to Rally Forge, a US marketing firm which Facebook claims was working on behalf of right-wing organizations Turning Point USA and Inclusive Conservation Group. And in December 2020, Facebook took down a network of accounts with more than 6,000 followers, targeting audiences in Francophone Africa and focusing on France’s policies there, which it found to be linked with individuals associated with the French military.

Public influence on a global scale

Even more revealingly, in its 2020 Global Inventory of Organized Social Media Manipulation, the Oxford Internet Institute (OII) found that in 81 countries, government agencies and/or political parties are using ‘computational propaganda’ in social media to shape public attitudes.

These 81 countries span the world and include not only authoritarian and less democratic regimes but also developed democracies such as many EU member states. OII found that countries with the largest capacity for computational propaganda – which include the UK, US, and Australia – have permanent teams devoted to shaping the online space overseas and at home.

OII categorizes computational propaganda as four types of communication strategy – the creation of disinformation or manipulated content such as doctored images and videos; the use of personal data to target specific segments of the population with disinformation or other false narratives; trolling, doxing or online harassment of political opponents, activists or journalists; and mass-reporting of content or accounts posted or run by opponents as part of gaming the platforms’ automated flagging, demotion, and take-down systems.

Doubtless some of the governments included within OII’s statistics argue their behaviour is legitimate and appropriate, either to disseminate information important to the public interest or to wrestle control of the narrative away from hostile actors. Similarly, no doubt some political campaigners removed by the platforms for alleged engagement in ‘inauthentic behaviour’ or ‘manipulation’ would defend the legitimacy of their conduct.

The fact is that clear limits of acceptable propaganda and information influence operations online do not exist. Platforms still share little information overall about what information operations they see being conducted online. Applicable legal principles such as international human rights law have not yet crystallised into clear rules. As information operations are rarely exposed to public view – with notable exceptions such as the Cambridge Analytica scandal – there is relatively little constraint in media and public scrutiny or censure.

OII’s annual reports and the platforms’ periodic reports demonstrate a continual expansion of deceptive and manipulative practices since 2016, and increasing involvement of private commercial companies in their deployment. Given the power of political influence as a driver, this absence of clear limits may result in ever more sophisticated techniques being deployed in the search for maximal influence.

Ambiguity over reasonable limits on manipulation plays into the hands of governments which regulate ostensibly in the name of combating disinformation, but actually in the interests of maintaining their own control of the narrative and in disregard of the human right to freedom of expression. Following Singapore’s 2019 prohibition of online untruths, 17 governments ranging from Bolivia to Vietnam to Hungary passed regulations during 2020 criminalising ‘fake news’ on COVID-19 while many other governments are alleged to censor opposition arguments or criticisms of official state narratives.

Clear limits are needed. Facebook itself has been calling for societal discussion about the limits of acceptable online behaviour for some time and has issued recommendations of its own.

The European Democracy Action Plan: Aiming to protect pluralism and vigour in democracy

The European Democracy Action Plan (EDAP), which complements the European Commission’s Digital Services Act and Digital Markets Act proposals, is a welcome step. It is ground-breaking in its efforts to protect the pluralism and vigour of European democracies by tackling all forms of online manipulation, while respecting human rights.

While the EDAP tackles disinformation, it also condemns two categories of online manipulation: information influence operations, which the EDAP describes as ‘coordinated efforts by either domestic or foreign actors to influence a target audience using a range of deceptive means’, and foreign interference, described as ‘coercive and deceptive efforts to disrupt the free formation and expression of individuals’ political will by a foreign state actor or its agents’. These categories include influence operations such as harnessing fake accounts or gaming algorithms, and the suppression of independent information sources through censorship or mass reporting.

But the categories are so broad they risk capturing disinformation practices not only of rogue actors, but also of governments and political campaigners both outside and within the EU. The European Commission plans to work towards refined definitions. Its discussions with member states and other stakeholders should start to determine which practices ought to be tackled as manipulative, and which ought to be tolerated as legitimate campaigning or public information practices.

The extent of the EDAP proposals on disinformation demonstrates the EU’s determination to tackle online manipulation. The EDAP calls for improved practical measures building on the Commission’s 2020 acceleration of effort in the face of COVID-19 disinformation. The Commission is considering how best to impose costs on perpetrators of disinformation, such as by disrupting financial incentives or even imposing sanctions for repeated offences.

Beyond the regulatory and risk management framework proposed by the Digital Services Act (DSA), the Commission says it will issue guidance for platforms and other stakeholders to strengthen their measures against disinformation, building on the existing EU Code of Practice on Disinformation and eventually leading to a strengthened Code with more robust monitoring requirements. These are elements of a broader package of measures in the EDAP to preserve democracy in Europe.

Until there are clear limits, manipulative practices will continue to develop and to spread. More actors will resort to them in order not to be outgunned by opponents. It is hoped forthcoming European discussions – involving EU member state governments, the European Parliament, civil society, academia and the online platforms – will begin to shape at least a European and maybe a global consensus on the limits of information influence, publicly condemning unacceptable practices while safeguarding freedom of expression.

Most importantly, following the example of the EDAP, the preservation of democracy and human rights – rather than the promotion of political or commercial interest – should be the lodestar for those discussions.




Imagine a World Without Fake News

Explainer video, 25 February 2021

Harriet Moynihan and Mathieu Boulegue explain how we can avoid drowning in an ocean of fake news and information manipulation.

The flow of fake news is vast and unlikely to go away. What’s more, imagining a world where fake news is eradicated completely has implications for freedom of expression.

But what if, instead of wishing fake news away, we can adapt and become immune to it? 

Our researchers develop positive solutions to global challenges, working with governments, charities, businesses and society to build a better future.

SNF CoLab is our project supported by the Stavros Niarchos Foundation (SNF) to share our ideas in experimental, collaborative ways – and to learn about designing a better future, overcoming challenges such as fake news, COVID-19, food security, and conflict.




The regional and international implications of restrictions to online freedom of expression in Asia

Online event, 25 March 2021 — 12:30PM to 1:30PM

Panellists discuss the latest developments affecting online freedom of expression in the Asia region.

In recent years, state-led clampdowns on online freedom of expression have become widespread in several countries across Asia, further intensified by the COVID-19 crisis.

The reasons for this are complex and diverse – drawing upon history, culture and politics, in addition to external influences. Across the region, governments have been accused of silencing online criticism and failing to uphold rights to free speech.

Individuals have been arrested, fined or attacked for the alleged spread of ‘fake news’, raising concern among human rights organizations. In some countries, this has culminated in the imposition of new social media rules, which could require social media companies to censor posts and share decrypted messages.

In China, the government’s restrictive online regime has relied on a combination of legal, technical and manipulation tactics to manage control of the internet, and now includes attempts at censorship beyond its borders.

Panellists will discuss the latest developments affecting online freedom of expression across Asia, and consider the broader regional and international implications for technology governance.

This webinar launches the publication Restrictions on online freedom of expression in China: The domestic, regional and international implications of China’s policies and practices.




Battle lines being drawn over online freedoms in Asia

Expert comment, 22 March 2021

Social media giants are increasingly clashing with Asian governments over free expression and censorship as the region lurches towards digital authoritarianism.

Freedom of expression was subject to significant restrictions in Asia even before the pandemic, with several governments having enacted laws that stifle online debate. But since COVID-19, restrictions have increased even further due to a rash of so-called ‘emergency measures’ introduced by governments across the region.

Bangladesh, India, Indonesia, Malaysia, Myanmar, Nepal, Pakistan, the Philippines, Sri Lanka, Thailand, and Vietnam have all put new laws into place, and many restrictions are already being applied in a draconian fashion, such as in the Philippines and Bangladesh.

As outlined in a new Chatham House research paper, one inspiration behind this trend is China, home to the world’s most sophisticated and restrictive system of internet control. The Chinese government’s restrictive online regime, which has tightened further under COVID-19, relies on a combination of legal regulations, technical controls, and proactive manipulation of online debates.

This model was an inspiration for Vietnam’s cybersecurity law, as well as for Myanmar’s new draft cybersecurity bill, proposed by the military-run State Administration Council in the wake of last month’s military coup, which would give the military extensive powers to access individuals’ data and to restrict or suspend access to the internet.

This ‘sovereignty and control’ model of internet governance is also gaining impetus through China’s ‘Digital Silk Road’ initiative, under which the Chinese government is exporting both its technology – such as through the establishment of smart cities, the installation of AI, and surveillance technology – and its vision of how the internet should be governed.

In November 2020, Xi Jinping pledged to further deepen cooperation with ASEAN through the Digital Silk Road, and the pandemic has expanded the appeal of Chinese surveillance technologies and data collection platforms to governments both in Asia and beyond. China’s Health Silk Road, which aims to promote global health cooperation, is centered on the Chinese government’s high-tech model under which civic freedoms are sacrificed in the name of public health.

An alternative model

This ‘sovereignty and control’ model is increasingly at odds with the more ‘human-centric’ model of tech governance favoured by many democratic states, Western social media companies, and international institutions, especially the United Nations (UN) and European Union (EU).

Although this emerging model also involves regulation, it is regulation which aims to be inclusive, risk-based, and proportionate – balancing the need for protection against online harms with the need to preserve freedom of expression. It is a multi-stakeholder, rights-based approach which brings together not just governments but also representatives of the private sector, civil society, and academia. The EU’s draft Digital Services Act and the UK’s proposals for an Online Safety Bill are both reflective of this approach.

Western social media giants such as Facebook and Twitter have recently introduced new policies which seek to identify and mitigate online harms, such as hate speech and disinformation. Industry bodies such as the Global Network Initiative, independent oversight bodies such as the Oversight Board established by Facebook, and civil society advocacy and initiatives such as the Santa Clara Principles on Transparency and Accountability in Content Moderation are also an important part of the picture.

Admittedly, these various digital governance initiatives are in some cases embryonic, and are by no means a silver bullet solution to the complex problem of online content moderation, which continues to be hotly debated in democratic societies. But they are at least underpinned by the same philosophy – that international human rights law standards must continue to apply even during emergencies such as COVID-19. With the Biden administration in the US prioritizing tech governance in its policy agenda, there is added momentum to the international leadership behind this model.

A clash of ideology

These conflicting philosophies are playing out in debates on technology governance at the UN, with one group of countries led by China and Russia advocating for greater government control of the internet, and many Western democracies emphasizing the need for an open, global internet that protects human rights.

These differing ideologies are also creating tensions between Western social media companies operating in Asia and the various governments in that region which have increased restrictions on online expression. And the gulf between the two appears to be widening.

In 2017, the Thai government threatened Facebook with legal action unless it agreed to remove content critical of Thailand’s royal family and, in 2020, Facebook announced it had been ‘forced to block’ such material. Also in 2020, the Vietnamese government pressured state-owned telecom companies to throttle internet traffic to Facebook, effectively restricting access to the platform, until Facebook agreed to take down content the government deemed anti-state.

Platforms refuse to silence legitimate criticism

However, Silicon Valley’s social media companies have also been pushing back. Facebook restricted the accounts of Myanmar’s military on the basis of ‘spreading misinformation’ in the wake of the military’s imposition of an internet shutdown that blocked access to Facebook, Twitter, and Instagram. And Twitter resisted requests by the Indian government to block accounts involved in protests by farmers.

Twitter stated that while it would block any accounts which it felt incited violence, it would not take action on accounts belonging to news media entities, journalists, activists, and politicians because it believed that would ‘violate the fundamental right to free expression under the Indian law’. The Indian government responded by fast-tracking stringent new social media regulations heavily criticized by rights groups for increasing government power over content on social media platforms, including online news.

So how can social media companies find avenues for operating in Asia and beyond without being co-opted into the lurch towards digital authoritarianism? There are no easy answers here, but collaboration is key. Cooperation between tech companies and local civil society partners can help companies better understand risks to human rights in the country concerned and how they might be mitigated. And tech companies are more effective in alliance with each other than acting on their own, such as the refusal by Facebook, Google, Telegram, and Twitter to hand over data on protestors to the Hong Kong police.

The fact that in many countries in Asia there are no alternatives to Western social media companies – unlike China, where platforms such as WeChat are part of the government’s internet control apparatus – gives the companies concerned some leverage. In February 2020, Facebook, Google, and Twitter together – through the Asia Internet Coalition – threatened to leave Pakistan in response to the government’s draconian proposals to regulate social media. Along with pressure and lawsuits from civil society, this forced the government into retreat, although the tussle over the new rules, introduced in November, continues.

At a time when illiberalism was already on the rise in Asia (including in democracies – Freedom House has just downgraded India’s status from ‘free’ to ‘partly free’), COVID-19 has made tighter state control of online freedom of expression even more attractive to many governments. As it seems increasingly unlikely that restrictions enacted under the guise of pandemic-related emergency measures will be repealed once the COVID-19 crisis ends, it is even more important that tech companies work with civil society on the ground to minimize the censorship of citizen voices.




Rebuilding trust is central to the UN’s future

Expert comment, 25 March 2021

António Guterres is under scrutiny as he prepares to report on the future of the United Nations, with a renewed focus on trust, resilience and prevention.

The United Nations Secretary-General’s inbox is full as his organization celebrates its 75th anniversary. Trust must be rebuilt amid increased geopolitical rivalry, North-South divisions, and sceptical citizens left behind by globalization. The international community has manifestly underinvested in institutional resilience and prevention. Better partnerships are needed with the private sector, and innovative forms of cross-regional cooperation must be fostered.

There are positive signs UN member states want things to change. They unanimously agreed a Political Declaration last September strongly reaffirming multilateralism, and they gave António Guterres one year to present a roadmap on how to respond, ‘building back better’ in the face of climate change and COVID-19.

A key challenge is to steer mandates and resources towards prevention. The World Bank-WHO Global Preparedness Monitoring Board, which eerily predicted the pandemic in its inaugural report in September 2019, reminds us successful prevention rests not on warning alone, but on aligned incentives for early action.

Geopolitical tensions persist

China has invested significantly in the multilateral system over the last decade, both in formal organizations such as the UN and the African Union, and in fostering a set of China-centred ‘mini-lateral’ fora such as the SCO, BRICS and BRI. It has also deepened its ties with Russia in the UN Security Council. Western countries both begrudgingly admire and deeply distrust China’s nimbleness in advancing its interests and values in this way but are divided on how to respond.

The Biden administration has recommitted itself to multilateral processes but US bilateral relations are likely to remain the main foreign policy driver. The UK has sought to convert the G7 into an enlarged summit-level meeting for democracies but Europe is divided over the wisdom of formalizing a group which may increase divisions with China, and some major democracies – India for example – have divergent approaches on issues such as trade protection.

An increase in cross-regional informal caucusing within the UN system to advance norms and progress around specific common objectives is likely. Guterres can encourage smaller powers to become ‘bridge builders’ sitting in the middle of a ‘Venn diagram’ of such new member state constellations at the UN.

Guterres can also build on the recent Abraham Accords to encourage cross-regional cultural, political and security relationships on the back of trade and investment, and map practical opportunities for strategic cooperation between China and the West in health and food security, climate and biodiversity, and global macroeconomic management, while fostering new normative frameworks to manage strategic competition in artificial intelligence (AI), big data, cyber resilience, gene-editing, and automation.

North-South mistrust

Realizing the Sustainable Development Goals (SDGs) and climate objectives rests in part in mobilizing the expertise and resources of sub-state actors such as business and city and regional authorities. However, developing countries remain wary of granting the UN Secretary-General a greater role in fostering partnerships with the private sector and mobilizing private finance, out of fear this may overshadow the global North’s promises to provide aid and create fairer trade and debt conditions.

In addition, African governments are expressing growing frustration at their continued lack of ‘agency’ in UN decision-making, broken promises on climate financing by the global North, and the slow rollout of the COVAX facility to developing countries.

Progress may lie in two areas. First, developing country leadership of initiatives – such as the Friends Group on SDG financing established by the Jamaican and Canadian ambassadors to the UN – can help build trust and allay concerns, which is vital to incentivise transformative investment by sovereign wealth, pension, and insurance funds in pro-poor low carbon infrastructure in developing countries.

The second area is curating multi-stakeholder initiatives outside the UN framework and then linking them back to the organization once they have proven to be beneficial to both developed and developing countries. Successful initiatives such as the Vaccine Alliance can be a model of how to do this while not detracting from state obligations.

Scepticism among citizens

Trust in governance also needs rebuilding at the level of the individual citizen. Mobilized by populist movements and ‘fake news’ online, individuals left behind by the uneven economic benefits of globalization view governments and international organizations as unaccountable and lacking their interests at heart.

Guterres has called for a new ‘social contract’ between governments and their citizens, and for ‘Multilateralism 2.0’ to demonstrate a practical ‘hard interest’ as well as a ‘values’ case for why international cooperation inclusively benefits individuals as well as states. And technological innovation can also help citizens hold governments to account. As the first Secretary-General with a science and engineering background, Guterres has championed how technology enhances UN delivery of its objectives.

The pairing of artificial intelligence (AI) with satellites and drones for geospatial insight has been pioneered by both the United Nations Environment Programme (UNEP) and the Food and Agriculture Organization (FAO) to help communities preserve ecosystems and agricultural productivity. The resultant data, accessible on smartphones and computers, enables civil society to measure governments’ promises against real-time progress, for example by monitoring greenhouse gas emissions from power stations.

Alongside trust and accountability, fostering inclusiveness is likely to be central to Guterres’ report as he navigates how the UN can legitimize multi-stakeholder partnerships, enhance transparency, and bring coherence to diverse ‘mini-lateral’ initiatives.

These themes are explored further in the forthcoming synthesis paper ‘Reflections on building more inclusive global governance: Ten insights into emerging practice’.




A seat at the table – why inclusivity matters in global governance

Online event, 10 May 2021 — 1:30PM to 3:00PM

Exploring the changing dynamics of global cooperation and the role inclusivity can play in building collaborative action.

The scale of today’s global challenges demands collaborative and coordinated action. But deepening geopolitical competition is threatening multilateralism, while growing inequality and social tensions continue to undermine public confidence in the ability of international institutions to deliver.

Into this challenging environment, add the complexity and sheer pace of many global challenges such as the climate crisis and the proliferation of new technologies – issues that cannot be addressed effectively by governments alone.

  • How do global institutions and mechanisms need to adapt to address the demands for a fairer distribution of power between states and to engage the diverse set of actors essential today for effective solutions?
  • What can be learnt from existing initiatives that bring together governments, civil society, private sector, cities, next generation leaders and other stakeholders?
  • And what are the political obstacles to greater inclusivity?

This event supports the launch of a synthesis paper from Chatham House’s Inclusive Governance Initiative.




Can global technology governance anticipate the future?

Expert comment, 27 April 2021

Trying to govern disruption is perilous as complex technology is increasingly embedded in societies and omnipresent in economic, social, and political activity.

Technology governance is beset by the challenges of how regulation can keep pace with rapid digital transformation, how governments can regulate in a context of deep knowledge asymmetry, and how policymakers can address the transnational nature of technology.

Keeping pace with, much less understanding, the implications of digital platforms and artificial intelligence for societies is increasingly challenging as technology becomes more sophisticated and yet more ubiquitous.

To overcome these obstacles, there is an urgent need to move towards a more anticipatory and inclusive model of technology governance. There are some signs of this in recent proposals by the European Union (EU) and the UK on the regulation of online harms.

Regulation failing to keep up

The speed of the digital revolution, further accelerated by the pandemic, has largely outstripped policymakers’ ability to provide appropriate frameworks to regulate and direct technology transformations.

Governments around the world face a ‘pacing problem’, a phenomenon described by Gary Marchant in 2011 as ‘the growing gap between the pace of science and technology and the lagging responsiveness of legal and ethical oversight that society relies on to govern emerging technologies’.

This ever-growing rift, Marchant argues, has been exacerbated by the increasing public appetite for and adoption of new technologies, as well as political inertia. As a result, legislation on emerging technologies risks being ineffective or out-of-date by the time it is implemented.

Effective regulation requires a thorough understanding of both the underlying technology design, processes and business model, and how current or new policy tools can be used to promote principles of good governance.

Artificial intelligence, for example, is penetrating all sectors of society and spanning multiple regulatory regimes without any regard for jurisdictional boundaries. As technology is increasingly developed and applied by the private sector rather than the state, officials often lack the technical expertise to adequately comprehend and act on emerging issues. This increases the risk of superficial regulation which fails to address the underlying structural causes of societal harms.

The significant knowledge gap between those who aim to regulate and those who design, develop and market technology is prevalent in most technology-related domains, including powerful online platforms and providers such as Facebook, Twitter, Google and YouTube.

For example, the algorithms social media companies use in their business models to promote online content – harmful or otherwise – remain opaque to governments and researchers so, to a crucial extent, the regulator is operating in the dark.

The transnational nature of technology also poses additional problems for effective governance. Digital technologies intensify the gathering, harvesting, and transfer of data across borders, challenging administrative boundaries both domestically and internationally.

While there have been some efforts at the international level to coordinate approaches to the regulation of – for example – artificial intelligence (AI) and online content governance, more work is needed to promote global regulatory alignment, including on cross-border data flows and antitrust.

Reactive national legislative approaches are often based on targeted interventions in specific policy areas, and so risk failing to address the scale, complexity, and transnational nature of socio-technological challenges. Greater attention needs to be placed on how regulatory functions and policy tools should evolve to effectively govern technology, requiring a shift from a reactionary and rigid framework to a more anticipatory and adaptive model of governance.

Holistic and systemic versus mechanistic and linear

Some recent proposals for technology governance may offer potential solutions. The EU’s publication of a series of interlinked regulatory proposals – the Digital Services Act, Digital Markets Act and European Democracy Action Plan – integrates several novel and anticipatory features.

The EU package recognizes that the solutions to online harms such as disinformation, hate speech, and extremism lie in a holistic approach which draws on a range of disciplines, such as international human rights law, competition law, e-commerce, and behavioural science.

It consists of a combination of light-touch regulation – such as codes of conduct – and hard-law requirements such as transparency obligations. Codes of conduct provide flexibility as to how requirements are achieved by digital platforms, and can be updated and tweaked relatively easily, enabling regulation to keep pace as technology evolves.

As with the EU Digital Services Act, the UK’s recent proposals for an online safety bill are innovative in adopting a ‘systems-based’ approach which broadly focuses on the procedures and policies of technology companies rather than the substance of online content.

This means the proposals can be adapted to different types of content, and differentiated according to the size and reach of the technology company concerned. This ‘co-regulatory’ model recognizes the evolving nature of digital ecosystems and the ongoing responsibilities of the companies concerned. The forthcoming UK draft legislation will also be complemented by a ‘Safety by Design’ framework, which is forward-looking in focusing on responsible product design.

By tackling the complexity and unpredictability of technology governance through holistic and systemic approaches rather than mechanistic and linear ones, the UK and EU proposals represent an important pivot from reactive to anticipatory digital governance.

Both sets of proposals were also the result of extensive multistakeholder engagement, including between policy officials and technology actors. This engagement broke down silos within the technical and policy/legal communities and helped bridge the knowledge gap between dominant technology companies and policymakers, facilitating a more agile, inclusive, and pragmatic regulatory approach.

Coherence rather than fragmentation

Anticipatory governance also recognizes the need for new coalitions to promote regulatory coherence rather than fragmentation at the international level. The EU has been pushing for greater transatlantic engagement on regulation of the digital space, and the UK – as holder of the G7 presidency in 2021 – aims to work with democratic allies to forge a coherent response to online harms.

Meanwhile the OECD’s AI Policy Observatory enables member states to share best practice on the regulation of AI, and an increasing number of states such as France, Norway, and the UK are using ‘regulatory sandboxes’ to test and build AI or personal data systems that meet privacy standards.

Not all states currently have the organizational capacity and institutional depth to design and deliver regulatory schemes of this nature, as well as the resource-intensive consultation processes which often accompany them.

So, as an increasing number of states ponder how to ‘futureproof’ their regulation of tomorrow’s technology – whether 6G, quantum computing or biotechnology – there is a need for capacity building in governments both on the theory of anticipatory governance and on how it can be applied in practice to global technology regulation.