Digital trust and safety and the Israel-Hamas war: Analysis


By Noah Cole, Programs Associate

The latest war between Israel and Hamas has revealed an inflection point for the digital trust and safety (T&S) industry. Between the need to prioritize the privacy and safety of people in the region; the spread of mis- and disinformation globally; and the adoption of novel tactics for digital abuse, propaganda, and warfare, advocates for online safety and digital rights must pay close attention to the ongoing conflict and how it is impacting T&S at large.

By reviewing 80 articles related to T&S and the war published between Oct. 9 and Nov. 17, 2023, Omidyar Network identified trends in how T&S is being discussed in the media, technology industry, and civil society. The articles reviewed were published by media outlets or organizations based in the US and abroad, with the most frequently cited publishers being the New York Times, WIRED, The Verge, Reuters, and Al Jazeera.

Our review solidified the core principle that underlies our digital trust and safety work: There is a strong need for philanthropy to support organizations and individuals working to secure digital rights and online safety on a global scale. Philanthropic resources and capital can help foster necessary connections between civil society, academia, governments, and industry to address the current and emerging issues highlighted by the Israel-Hamas war.

Quick read: Themes and emerging issues

Three clear themes around digital T&S in relation to platforms emerged from our review:

  1. X (formerly Twitter) has seen an increase in disinformation, misinformation, and hate speech in both pro-Palestine and pro-Israel content, tied to specific product features and to Elon Musk’s leadership of the company.
  2. Telegram has been used by members of Hamas to organize, generate support, and spread graphic content around the war due to the platform’s lack of standard content moderation policies and enforcement.
  3. Instagram has been accused of “shadow banning,” or reducing the amplification of, users sharing content about the war, particularly users sharing pro-Palestine content, which Meta initially attributed to a bug.

Beyond issues that were brought to light on established platforms, several emerging T&S issues were identified through our review:

  1. Artificial intelligence (AI): Unreliable AI image detectors have fueled the “liar’s dividend.” AI-generated images of the war have been sold as stock images on Adobe, and generative AI is being used to create propaganda.
  2. Digital Services Act (DSA): The EU began investigating X, TikTok, and Meta for DSA compliance issues. Digital rights organizations and tech policy experts responded to defend free expression and urge collaboration with civil society.
  3. Livestreaming: Several victims of the October 7th attack had their Facebook accounts taken over by members of Hamas, who livestreamed content from their profiles. TikTok’s livestream feature was exploited by grifters who raised money in the name of the war for their own financial gain, as well as for the financial gain of the company itself. Roblox was used by children to engage in safe, livestreamed pro-Palestine virtual protests.
  4. Open-source intelligence (OSINT): Verified X accounts posing as OSINT sources on the war spread incorrect information, sometimes amplified by Elon Musk, increasing the level of mis- and disinformation.
  5. Tooling: New startups serving as T&S vendors focused on content moderation, including Cove, Cinder, SeroAI, TrustLab, Intrinsic, ActiveFence, CheckStep, and Openweb, received the attention of investors.

Our key takeaways

  • The DSA is bringing T&S regulation to the forefront of coverage on platform safety, but there is a lack of proportional coverage focused on digital rights.
  • Tooling startups focused on content moderation are becoming more popular, a promising sign of increased investment and advancements in tooling technology, but one that may also lead to further divestment from in-house T&S teams.
  • Generative AI presents new abuse vectors along with new opportunities for the development of responsible tech tools and expertise.
  • X is a toxic platform as it relates to T&S, which leads to greater disinformation in the ecosystem and an opportunity for competing platforms to emerge.
  • Livestreaming presents a variety of T&S issues across platforms, which may be indicative of more widespread abuse in the future.
  • TikTok and YouTube were not the focus of much coverage in the articles reviewed as compared to X, Telegram, and Instagram.

Deep read: Platform check

X (formerly Twitter)

X was the subject of the bulk of articles on content moderation issues tied to the war due to:

  • Layoffs of the T&S team in recent years
  • The relatively new paid verified accounts feature, which allowed bad actors to increase the reach of their dis- and misinformation and accounted for 74 percent of the platform’s most viral false or unsubstantiated claims related to the Israel-Hamas war
  • The amplification of antisemitic and untrustworthy profiles by CEO Elon Musk

Graphic content was also an issue on X, as violent content remained on the platform weeks after the October 7th attack. Several articles compared the news cycle on the platform at the outset of the Russia-Ukraine war with the news cycle at the outset of the Israel-Hamas war (before and during Musk’s leadership of the company, respectively) and found the current cycle to be less trustworthy for sourcing accurate news and information.

Musk’s leadership threatened the legitimacy of the platform as a trusted source for information after his antisemitic posts and the placement of pro-Nazi content next to ads for major companies caused an exodus of major advertisers from the platform.

Telegram

Telegram was the major platform for graphic and violent content around the war and was thus the subject of much of the coverage focused on content moderation. Telegram use skyrocketed among pro-Hamas channels at the beginning of the war, with membership in the channel for the military wing of Hamas more than tripling in the five days after the attack.

Telegram cut off access to the channels Hamas uses to communicate, but only for Android users and only due to violations of Google’s app store policies.

Instagram

Most T&S content from the first few weeks of the war covered the perspective of pro-Palestine users on Instagram who believed they were experiencing shadow bans on their profiles and/or the content they posted about the war. Instagram was not the only platform to receive this criticism, but saw the most media coverage within the articles reviewed in comparison to YouTube, X, and Facebook (also owned by Meta).

  • Meta initially announced that the issues were the result of a bug, a claim many pro-Palestine users met with skepticism. Activists cited similar issues during the 2021 conflict between Israel and Hamas and pointed toward moderation bias as an additional cause. It is unclear whether users continued to experience similar issues on the platform after the bug fix.

Instagram also briefly added “terrorist” to some Palestinian users’ profiles, a supposed auto-translation error for which the company later apologized.

Threads: On the horizon

Meta’s X competitor Threads saw very little T&S-related coverage in the articles reviewed, though Casey Newton of the tech and democracy newsletter Platformer posited that X’s waning dominance could lead Threads’ leadership to build out more user-requested features or to continue focusing less on amplifying news and more on driving engagement through other means.

Deep read: Emerging issues

Artificial Intelligence

Both AI image detectors and AI-generated images were common subjects within the set of articles focused on AI more broadly.

  • The AI image detector “AI or Not” was frequently cited as a tool that incorrectly labeled a real image from the war as a deepfake or AI-generated image. This phenomenon spurred a particular focus on the “liar’s dividend,” or the payoff for bad actors who leverage the existence of deepfakes as cover for their bad behavior, and on discussions of the efficacy of AI image detectors.
  • Adobe sold multiple AI-generated stock images based on images from the war, which were then used on small websites and blogs. These images do not appear to explicitly violate Adobe’s content guidelines, as the guidelines “make no mention of whether users should upload images depicting ongoing violent conflicts.”
  • Extremist groups also used generative AI to create propaganda in support of their goals.

Outside of AI image tools, a vendor offering its services to the US Army also used AI to identify misinformation about the war.

Digital Services Act

Much of the coverage on the Digital Services Act (DSA) discussed both the EU’s investigation of platforms and the difficulty of enforcing the law now that it is in effect.

  • Initial coverage revolved around the European Commission’s probe into X, as the Commission was interested in investigating the platform for potential violations of the DSA’s provisions against the spread of illegal content, disinformation, and other harmful material.
  • The Commission also sent letters to TikTok and Meta. While the letters began as inquiries into the platforms’ disinformation policies, the EU later sent formal requests with legally binding deadlines.
  • In response to the EU’s actions, digital rights organization Access Now issued an open letter to EU commissioner Thierry Breton calling on the Commission to distinguish illegal content from “disinformation,” criticizing the 24-hour deadline for platforms to reply to the EU and law enforcement as arbitrary and lacking legal basis, and describing the mandate for companies to enforce their terms and conditions as overly restrictive. The letter was signed by several Omidyar Network grantees.
  • Tech Policy Press featured an article that argued the war underscored the difficulty of DSA enforcement on a global scale and emphasized the need for the EU to work more with civil society organizations, especially in the “Global South.”

Livestreaming

Livestreaming presented itself as a unique cross-platform issue with notable cases of abuse on Facebook and TikTok.

  • Members of Hamas used victims’ Facebook accounts to livestream their actions on October 7, broadcasting attacks to victims’ friends and family members. Instagram and WhatsApp accounts were also hijacked to post content on the victims’ accounts.
  • Grifters used TikTok’s livestreaming feature to raise money for themselves through live “battles” or “player knock-outs.” In these battles, one user would represent Palestine and another would represent Israel. The streamers would debate, mostly by shouting their side’s name or comments such as “Like, like, like” and “Follow me,” as viewers provided gifts that the streaming users could then convert to cash. Some of the livestreams continued for hours. Perhaps the most troubling aspect of this abuse is that TikTok takes a significant financial cut of the money raised through these livestreamed battles.

Introducing some levity to the space, Roblox served as a forum for children to attend virtual pro-Palestine protests, presenting a positive use-case for livestreaming as a tool for civic engagement on social media platforms.

Open-source Intelligence

Although open-source intelligence (OSINT) was not heavily featured in coverage on the war, the main reference to the practice underscored the serious danger that comes with relying on misleading or false OSINT accounts.

  • Several verified X accounts posing as reliable OSINT sources were cited as destroying the Israel-Palestine information ecosystem, signaling trouble for the practice, especially as it relates to OSINT on X. One account known to spread disinformation was amplified by Elon Musk, who encouraged his followers to follow it. While Musk deleted the post, he continued to follow the account as of November 2023.

Tooling

Several new startups focused on content moderation, some led by former Google and Meta employees, have earned the attention of investors.

  • Cove, Cinder, SeroAI, TrustLab, Intrinsic, ActiveFence, CheckStep, and Openweb were cited as new startups that offer content moderation and tooling to social media companies. Venture capital firms investing in these startups include Radium Ventures and Accel. The T&S industry is estimated to represent about $11 billion of the overall $300 billion business process services market.

Key takeaways

DSA enforcement is bringing T&S regulation to the forefront of media coverage on platforms, but there has yet to be proportional coverage on digital rights threatened by improper enforcement of the DSA.

Greater international attention toward platform accountability is a positive step forward, and the EU’s investigation into X and initial letters to Meta and TikTok in particular seemed warranted. However, there is a lack of consistent mainstream media coverage on the threat that improper enforcement of the DSA could pose to the privacy and freedom of expression of platform users, a discussion primarily led by advocacy organizations such as Access Now and the publisher Tech Policy Press. There is an opportunity to shift the Overton window so that tech journalists and policymakers take a perspective and approach to regulation that both holds social media companies accountable and respects users’ rights.

Tooling startups focused on content moderation are becoming more popular, a positive sign for innovation, but may lead to further divestment from in-house T&S teams.

Venture capital firms focused solely on investing in T&S startups are beginning to take root, as are new startups themselves, which speaks to high demand for these tools and services. The growing popularity of these startups and firms signals greater innovation in the field, but may also further the industry’s trend of divesting from in-house T&S teams.

Generative AI presents new abuse vectors and opportunities for responsible tech development.

The growing popularity of AI-generated images made it more difficult for news consumers and social media users to decipher news about the war. As generative AI becomes more mainstream in both practice and the public psyche, the “liar’s dividend” is likely to become more commonplace during future events, wars, and conflicts. Dr. Hany Farid of the University of California, Berkeley School of Information noted that image detectors are “just one part of the toolkit” for identifying AI images. There is an opportunity for funders to invest in organizations and initiatives that seek to improve the public’s ability to decipher deepfakes or otherwise add to this toolkit.

X is a toxic platform as it relates to T&S.

Musk’s leadership has led the platform to descend into a space rife with dis- and misinformation. The platform lacks the credibility it once had as a trusted source for live breaking news, especially during times of crisis. This development presents both a problem, in the increase of dis- and misinformation across the ecosystem (as content tends to spread across platforms), and an opportunity for a competitor to become the trusted go-to source for breaking information that Twitter once was.

Livestreaming presents a variety of issues across platforms, which could become more commonplace as platforms integrate this feature into their services.

The two examples previously cited (Facebook and TikTok) stand out because of how different they are from one another in the threats that they pose. Both involve bad actors, but Facebook livestreams were used to exploit victims of the attack by the perpetrators, while TikTok livestreams were used to exploit the overall war for the financial gain of individuals disconnected from the war. The former presents an issue that can plausibly occur on any mainstream platform, while the latter is more specifically tied to the culture around livestreaming that is specific to the platform. As perpetrators become more social media savvy and niche communities on platforms adopt livestreaming for their own unique uses (including financial competitions on TikTok), we are likely to see livestreaming become a more normal part of these crises.

TikTok and YouTube were not the focus of much coverage, which is surprising considering their large user bases and past problems with dis- and misinformation.

This is not to suggest that either platform has been absolved of T&S issues related to the war. In comparison with X, Instagram, and Telegram, TikTok and YouTube were not the central foci of nearly as many of the articles reviewed. It is possible that the lack of material on TikTok can be attributed to the company’s well-documented lack of transparency or to the difficulty of studying short-form video content on TikTok, a relatively new platform and medium. The dearth of reporting on YouTube may be due to its affiliation with Google, an explanation posited by DFRLab’s Rose Jackson. A future review of T&S-related articles on the war should take a deeper dive into TikTok and YouTube given their outsized scale and influence.

Conclusion

Omidyar Network is dedicated to addressing safety and security online while upholding privacy. The Israel-Hamas war and the online ecosystem it has shaped have underscored the need to support and connect organizations and individuals working to secure both digital rights and safety online.

Our team is encouraged by the strong network of researchers, advocates, and technologists pushing for a safer, more rights-respecting internet. We are hopeful that the issues highlighted through this review—from the negative externalities brought on by AI images to the need to better understand content moderation practices—can be addressed and solved in the long term through the hard work, care, and expertise of our partners and the broader community of T&S workers and digital rights advocates.