Why We’ve Invested in Fighting Disinformation and Dangerous Speech on WhatsApp and Other Encrypted Messaging Platforms
The COVID-19 pandemic has underscored something we’ve known to be true for a while: disinformation is a serious threat to people’s health and safety.
Research has already shown that lies spread faster than truth on social media. In normal times, and even more troublingly in this crisis, conspiracy theories, destabilizing hoaxes, fake news, false claims, speech targeting minorities, and organized disinformation campaigns (to say nothing of well-intentioned falsehoods) are rampant on both open platforms (e.g., Twitter, Reddit, Facebook) and encrypted ones (e.g., WhatsApp). Much like a virus, these are complex problems, all the more so when they remain undetectable and untreatable on privacy-respecting, encrypted messaging platforms such as WhatsApp, Signal, and Telegram.
Just over two years ago, we started to hear stories from media and civil society leaders about how encrypted messaging platforms in particular had enabled a series of lynchings in India, swayed an election in Brazil, incited riots in Indonesia, become a breeding ground for white supremacy, and supported dark propaganda campaigns. With little response from the tech companies themselves, we witnessed blunt, knee-jerk responses from governments: law enforcement demanded access to people’s private communications, and telecoms authorities shut down the Internet altogether to try to get the problem under control.
And while encrypted platforms are most popular in the Southern Hemisphere, which is where our work started, the sheer extent of the misinformation problem during COVID-19 has forced the key decision makers at these companies to finally engage here in the US. We welcome the new scrutiny of the dominant digital platforms, which until recently had to be pushed to even address the issues prevalent on these closed platforms in the Northern Hemisphere: hate speech, anti-vaccine content, and exploitation.
Encrypted messaging platforms have more than 5.5 billion monthly active users, often arranged into groups of trusted networks such as family or friends, but also into larger public fora. High levels of convenience and low bandwidth requirements make it easy for users to rapidly send text, images, and videos to thousands of other users within seconds. WhatsApp users alone exchange more than 75 billion messages each day. And Facebook announced last year that it would integrate and encrypt its three messaging platforms, meaning closed networks will soon become the norm globally.
It’s important to acknowledge how vexing a problem this is. Scale aside, many of the already imperfect tools currently used to identify and address disinformation on open platforms (e.g., fact checking, community policing, pattern-based flagging, and takedowns) don’t translate very well to private, encrypted messaging platforms.
In light of the mounting risk and need for nuanced solutions, we quickly recognized ways we could play an important role in:
- helping the world to better understand how the platforms’ dominance and design contribute to these problems;
- preserving the privacy-respecting features of encrypted messaging platforms; and
- pushing the big tech companies to make the necessary product and policy changes so their platforms are safe, trustworthy, and healthy.
We are proud to be working with a set of high-quality, influential partners who are marshalling many of the things required to address this issue—including research, technical partnerships, dialogue and convening with policy makers and technology leaders, and public advocacy—while always holding the line on privacy and encryption.
Our first grants included support to:
- the Shorenstein Center at the Harvard Kennedy School for a five-country study in the Southern Hemisphere, taking note of users’ patterns and behaviors on encrypted messaging platforms in order to learn more about how mis- and disinformation spreads;
- Dr. Sam Woolley with the University of Texas at Austin for research that broadens decision-makers’ understanding of who designs, builds, and profits from disinformation campaigns and propaganda efforts on encrypted messaging platforms;
- Alex Stamos with the Stanford Internet Observatory to convene workshops in the US, EU, and Southern Hemisphere to source potential technical and policy options for balancing trust and safety, privacy, and security on encrypted messaging platforms without allowing backdoor access;
- Dr. Zeynep Tufekci with the University of North Carolina at Chapel Hill to share her experience-driven opinions, research-driven frameworks, and potential solutions, such as adding friction, to address the nuanced issues at hand; and
- the Carnegie Endowment for International Peace’s “Encryption Working Group” to share the findings of its Moving the Encryption Policy Conversation Forward report via meetings and events, as well as related publications and country briefs.
We’ve recently expanded our impact with some exciting partnerships that both get at the urgency of what’s required to stop the current “infodemic” and continue to strengthen civil society capacity to engage multiple stakeholders and perspectives, working toward sustainable change.
Meedan, for example, is a technology nonprofit that builds software and designs human-powered initiatives for newsrooms, NGOs, and academic institutions. Since last year, we have supported the organization in studying databases of encrypted messages to better understand the dynamics of content, especially dangerous content, on WhatsApp and Telegram. Their researchers are identifying how mis/disinformation, hate speech, and calls for violence move within the encrypted platforms and how they compare to what is posted on open platforms to better quantify and qualify the problem. And in doing so, Meedan is testing legal and ethical frameworks to support additional, privacy-respecting research on these platforms in the future.
This month, their Digital Health Lab also announced that it is developing an expert-sourced database of information specifically related to COVID-19 to aid the fight against misinformation across all platforms, open and closed. Omidyar Network, along with Google, Facebook, Twitter, the Swedish International Development Cooperation Agency (SIDA), and the Robert Wood Johnson Foundation, has contributed to the project in order to give fact-checkers easy, transparent, and unmatched access to credible information sourced and contextualized by scientists during the pandemic.
Africa Check is Africa’s first independent fact-checking organization, with offices in Johannesburg, Nairobi, Lagos and Dakar. They have a multi-point approach to fact checking (including media literacy, retail and wholesale claims checks, partnerships and engagement with platforms, etc.) and produce reports in English and French, testing claims made by public figures, institutions and the media against the best available evidence. They have been a long-time partner of The Omidyar Group in promoting accuracy in public debate and in African media.
This new grant support will enable them to expand and strengthen efforts to fight COVID-19 misinformation, mainly on WhatsApp, in South Africa, Kenya, Senegal, and Nigeria. In the process, they will track misinformation patterns on WhatsApp to inform future solutions. And they will focus on increasing the circulation of accurate, evidence-based information via multiple channels, both narrow and broadcast, empowering users with the media literacy required to engage more critically in public debate.
In addition to exposing the gravity of the misinformation problem to the world, the coronavirus outbreak has also shown us that the big tech companies can, when pushed, promise to make necessary product and policy changes. In March, soon after the WHO declared a pandemic, several companies committed to jointly “combat fraud and misinformation about the virus” and elevate credible content from health authorities. Leading up to this moment, Mark Zuckerberg and other big tech executives had insisted on remaining neutral about what was happening on their platforms, even while continuing to profit from it. It will be important for all of us to hold these companies accountable, to ensure that they keep these promises and make the necessary changes to slow the velocity and extraordinary reach of misinformation.
Enabling privacy is virtuous and essential. Enabling the rapid and large-scale spread of dangerous, distorted, and deceitful content is irresponsible and dangerous.