Expert Speak Raisina Debates
Published on Apr 26, 2022
Identifying and Removing Terrorist Content Online: Cross-Platform Solutions

This article is part of the series—Raisina Edit 2022.


When it comes to counterterrorism efforts online, more still needs to be done. While most efforts by various tech platforms go unseen by the public, advances have been made, and it is worth assessing the state of play to identify the next steps needed to stay ahead of the threat. This article discusses terrorism and violent extremism trends online, reviews some of the approaches being deployed by tech companies, and looks at cross-platform and multistakeholder solution-building. It draws on research and insights from the Global Internet Forum to Counter Terrorism (GIFCT), its member companies, and partnerships with Tech Against Terrorism and the Global Network on Extremism and Technology.

How Terrorists and Violent Extremists Use the Internet

Terrorists and violent extremists use online tools and platforms in much the same ways as the general public, turning to a diversity of platforms to further different goals and engage different audiences. This means that terrorist signals online are often dispersed across platforms, making it increasingly difficult for any one platform or government to fully eradicate a group's presence or propaganda.


Platforms in use can be categorised in three ways: coordination platforms, content aggregators, and amplification outlets. Terrorist and violent extremist groups often use smaller, less regulated platforms to coordinate among core members. Coordination includes the use of financial technologies to transfer funds or marketplace sites to acquire necessary goods. While internal and logistical coordination is meant to be covert, groups also conduct online campaigns aimed at the wider public. Groups develop and host content on video sharing sites, shared storage sites, and archival sites that make the removal of source content increasingly difficult. To reach the wider public, whether to recruit or intimidate, groups will always target larger social media sites for their reach. By the time content reaches a larger site, it often arrives as a URL linking to a third-party site, or as a copy shared on the site, while the source content remains on an aggregator platform.

As examples of this cross-platform, diversified exploitation of online spaces by terrorists: one study of outlinks tracking the dissemination of ISIS's Rumiyah propaganda on Twitter identified 11,520 posts linking out to 244 different host sites. Europol's most recent transparency report showed 16,763 content referrals related to 250 online service providers tied to Europol's SIRIUS platform. The 2021 transparency report for the Terrorist Content Analytics Platform (TCAP), run by Tech Against Terrorism, records 18,958 URLs referred to 65 different tech companies. The exact platforms used by terrorist and violent extremist groups differ depending on where a group is based geographically, perceptions of secrecy or oversight of a given platform, and cultural attitudes towards a platform. So, how do tech companies tackle online terrorist content?

Countering Terrorist and Violent Extremist Content

Each platform operates differently to ensure, proactively and/or reactively, that it can remove illegal content and content that violates its terms of service. Once a tech company decides how to define terrorist content, it can deploy human and tooling resources accordingly. Collecting insights from tech companies and experts around the world on these definitions is not always easy, as reviewed in GIFCT's Taxonomy and Definitions report. Larger, wealthier companies will always have more resources at their disposal to tackle online harms, whereas smaller and under-resourced companies might rely solely on reactive review processes and limited tooling.


Looking at the transparency efforts of larger companies, there is a range of tools that might be deployed to proactively surface, review, and sometimes remove content. Often, companies use hybrid models whereby tooling surfaces content that is then triaged to appropriate human moderation teams based on language and subject-matter expertise. Many companies use photo and video matching technology to detect the resharing of content that moderation teams have already internally determined to be violating. Some companies also deploy internal tools for audio matching, recidivism surfacing, and network mapping to track profiles belonging to terrorist or violent extremist groups. In all cases, increased tooling rarely means a decreased need for human review and oversight; tools often increase the amount of content triaged into review pipelines.
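To make the hybrid model concrete, the sketch below shows how surfaced content might be routed either to automated action or to a language-appropriate human review queue. It is a minimal illustration only; the match scores, threshold, queue names, and routing logic are hypothetical assumptions and do not describe any particular platform's tooling.

```python
# Hypothetical hybrid-moderation triage sketch (illustrative only).
from dataclasses import dataclass

@dataclass
class Candidate:
    content_id: str
    match_score: float   # similarity to previously actioned content, 0-1
    language: str        # detected language of the surrounding post

# Assumed review queues keyed by language expertise.
REVIEW_QUEUES = {"en": "english_ct_team", "ar": "arabic_ct_team"}

def triage(candidate: Candidate) -> str:
    # Very close matches to known violating content can be actioned by
    # tooling and audited afterwards; everything else goes to human review.
    if candidate.match_score >= 0.98:
        return "auto_remove_then_audit"
    queue = REVIEW_QUEUES.get(candidate.language, "generalist_ct_team")
    return f"human_review:{queue}"

print(triage(Candidate("c1", 0.99, "ar")))   # auto_remove_then_audit
print(triage(Candidate("c2", 0.72, "en")))   # human_review:english_ct_team
```

Even in this toy version, the point of the design is visible: automation handles re-uploads of already-reviewed material at scale, while novel or ambiguous content still lands with human specialists.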

While most companies do not include granular, harm-specific removal data in their transparency reports, some of the larger platforms do make this information available, giving insight into the internal proactive efforts being made. As examples, in their most recent transparency reports: Twitter removed 44,974 accounts for terrorism violations in the January-June 2021 period; YouTube removed 71,789 videos for violent extremism between October and December 2021, 90 percent of which were removed before reaching 10 views; Facebook removed 7.7 million pieces of content for violating terrorism policies in the October-December 2021 period, 97.7 percent of it actioned proactively without user reporting; and Instagram removed 905,300 pieces of content, 79.5 percent proactively without user flagging.

Despite individual platform efforts, as larger platforms step up attempts to remove and deplatform obvious abuses, bad actors migrate to, and diversify their use of, smaller platforms with fewer resources and less oversight. Terrorists and violent extremists also change their behaviour on larger sites to evade detection and removal. It is for this reason that efforts to counter terrorism online must be cross-platform and multistakeholder in design to be effective.


Cross-Platform Solution-Building

GIFCT was founded in 2017 to prevent terrorists and violent extremists from exploiting digital platforms. One core question is how to share signals across platforms in a way that does not infringe on privacy or human rights. Currently, this is done through a hash sharing database managed among member companies.

GIFCT member companies “hash” photos and videos to share them as signals with other platforms without sharing the source content or the user data associated with it. In essence, a photo or video is turned into a digital fingerprint (or hash) so that other visually similar images, whose hashes are mathematically close, can be surfaced. Hashes are numerical representations of original content. By running hashes against the content on a GIFCT member platform, companies can surface and prevent the redistribution of online terrorist content.
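As a rough illustration of what hashing and matching can look like, the sketch below uses the open-source imagehash library's perceptual hash (pHash) and a small Hamming-distance threshold to flag a probable re-share. This is an assumption-laden example: GIFCT and its members use their own algorithms and infrastructure, and the threshold and stand-in images here are purely illustrative.

```python
# Illustrative perceptual hashing with the open-source `imagehash` library.
# Not GIFCT's actual algorithms or thresholds.
from PIL import Image
import imagehash

def fingerprint(img: Image.Image) -> imagehash.ImageHash:
    """Turn an image into a perceptual hash: a short numerical fingerprint."""
    return imagehash.phash(img)

def likely_match(h1: imagehash.ImageHash, h2: imagehash.ImageHash,
                 max_distance: int = 8) -> bool:
    """Visually similar images produce hashes that differ in only a few bits,
    so a small Hamming distance (h1 - h2) signals a probable re-upload.
    The threshold of 8 is an illustrative assumption."""
    return (h1 - h2) <= max_distance

# Stand-ins for real content: a "known violating" image and a slightly
# altered re-upload (resized), generated in memory so the sketch runs.
known = Image.new("RGB", (256, 256), color=(120, 30, 30))
reupload = known.resize((128, 128))

if likely_match(fingerprint(known), fingerprint(reupload)):
    print("Surface for review: matches a previously hashed item")
```

The key property the article describes is preserved in the sketch: only the short fingerprint needs to travel between platforms, never the underlying photo, video, or any user data.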

The database was launched with agreement around the United Nations Security Council’s consolidated sanctions list and looks to expand incrementally, based on principled ways of defining terrorist and violent extremist content. In the aftermath of the terrorist attacks in Christchurch, New Zealand, which were livestreamed by a white supremacist attacker targeting a mosque, GIFCT developed a Content Incident Protocol (CIP), and the ability to label hashes specifically when a CIP is activated was added. As of GIFCT’s last transparency report in July 2021, the database housed 2.3 million hashes corresponding to 320,000 unique pieces of content.


However, we know terrorism and violent extremism manifest in different ways in different parts of the world. White supremacist and neo-Nazi groups, for example, account for an increasing number of lone-actor attacks in North America and Europe, yet rarely make it onto official government designation lists. Between 2014 and 2019, the Global Terrorism Index recorded a 320-percent increase in the total number of “far-right terrorist” incidents. The same holds for many ethno-nationalist and domestic violent extremist groups. For tech companies, it is difficult to justify content removals when no national legal framework applies. Adding to this dilemma, “terrorist content” takes many forms. While most efforts have focused on images and videos, other content types include audio, PDFs, and URLs. For this reason, in July 2021, GIFCT added three new buckets to its hash sharing database taxonomy: attacker manifestos, branded terrorist and violent extremist content, and URLs correlating with TCAP. These buckets recognise how different types of behavioural content spread among violent extremist subcultures and can be tied to platform policies. In doing so, GIFCT is developing the capacity to hash PDFs and URLs to diversify the types of signals platforms can share.
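The sketch below illustrates, under stated assumptions, how a non-image signal such as a manifesto PDF or a TCAP-correlated URL might be reduced to a shareable hash with a taxonomy label attached. The use of SHA-256, the URL normalisation step, and the field names are hypothetical choices for illustration; they are not GIFCT's actual schema or algorithms.

```python
# Hypothetical sketch of hashing non-image signals (PDFs and URLs) and
# attaching a taxonomy "bucket". Field names and algorithms are assumptions.
import hashlib
import json

def hash_pdf(pdf_bytes: bytes) -> str:
    """A cryptographic digest lets platforms flag exact copies of a manifesto
    PDF without ever exchanging the document itself."""
    return hashlib.sha256(pdf_bytes).hexdigest()

def hash_url(url: str) -> str:
    """Normalise trivially different forms of the same link before hashing."""
    normalised = url.strip().lower().rstrip("/")
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# A shared entry carries only the hash plus a label, never user data or the
# source content itself. "tcap_url" is an invented label for illustration.
entry = {
    "hash": hash_url("https://example.com/hosted-propaganda/"),
    "bucket": "tcap_url",
    "content_type": "url",
}
print(json.dumps(entry, indent=2))
```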

Multistakeholder Partnerships 

While cross-platform tools can help small and medium-sized companies in particular reach scale and speed in a way that was previously unavailable, there is still a great need for cross-sector multistakeholderism. Defining how best to deploy tools, and safeguarding human rights in these processes, is crucial. GIFCT conducted a Human Rights Impact Assessment in 2021 to ensure that its efforts do not come at the cost of fundamental and universal human rights. Holding a series of cross-sector, international working groups allows GIFCT member companies to work with governments, civil society, and practitioners to understand the nuances, needs, and limitations of everything from crisis response to transparency reporting.


As always, more is needed. More tech companies, across a diversity of platform types, need to work together instead of in silos. More input from governments and law enforcement explaining open-source intelligence trends and concerns helps tech companies prioritise their efforts. More insight from academics and experts helps companies track adversarial shifts in the online space. More input from vulnerable communities, human rights scholars, and counter-extremism practitioners helps tech companies understand how to build safety-by-design into their tools and moderation practices.

While more is needed, we do have infrastructure to build upon. The foundations for effective counterterrorism efforts online are being laid.

The views expressed above belong to the author(s).

Contributor

Erin Saltman

Dr. Erin Saltman is the Director of Programming at the Global Internet Forum to Counter Terrorism (GIFCT). She was formerly Facebook's Head of Counterterrorism and ...
