Author : Clara Broekaert

Expert Speak Raisina Debates
Published on Mar 17, 2025

With terrorists increasingly leveraging generative AI, the EU has swiftly incorporated AI into its counterterrorism strategy, ensuring a balance between innovation, regulation, and ethical considerations

Integrating AI: EU counterterrorism challenges and opportunities


This article is part of the series—Raisina Edit 2025


The growing use of Large Language Models (LLMs) to gather information for explosives-based attacks, the propagation of AI-generated news bulletins by an Islamic State-aligned media outlet, and the creation of bespoke chatbots designed to disseminate Holocaust denialism have raised alarm over the disruptive potential of generative AI in the hands of terrorists and other violent non-state actors. While generative AI can facilitate the optimisation of terrorist recruitment, operational planning, and propaganda dissemination—offering automated content generation, rapid and culturally nuanced translations, and even access to information about the acquisition of chemical precursors or the 3D printing of firearms—the actual disruptive effect remains contested. At present, generative AI has not demonstrably augmented the lethality or appeal of terrorist entities. Other AI-driven applications, however, specifically in the domain of autonomous and semi-autonomous weaponry and even autonomous vehicles, can be highly disruptive in the hands of terrorists; they confer significant operational advantages, including enhanced command-and-control capabilities and greater lethality in the execution of attacks.


While a fair assessment of the adoption rate of AI technology by terrorist organisations yields mixed results, its potential in the global terror threat landscape is undeniably alarming. This mandates that law enforcement agencies, security services, and civil society actors leverage all available tools, including AI, to augment and enhance their counterterrorism efforts. Nation-states at the forefront of the “Global War on Terror” unequivocally hold the upper hand in this endeavour, with years of accumulated data on radicalisation, terrorist incidents, propaganda materials, and counterterrorism interventions. Machine learning-driven systems—ranging from early prevention strategies to near real-time detection capabilities—have emerged as potent tools. The United States (US) has been a clear leader in combining the petabytes of data on terrorism it has collected over the past two decades with advanced machine learning for its counterterrorism efforts. Given the heightened threat landscape in Europe since 7 October 2023, fully integrating varied advanced AI applications into European counterterrorism efforts is urgent and should be considered across a range of use cases, from anomaly detection in CCTV footage and travel data to pattern detection in financial transactions to disrupt terrorist financing networks.
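To make the anomaly-detection use case mentioned above concrete, the toy sketch below flags outliers in a list of transaction amounts using a simple z-score rule. It is a minimal statistical baseline with entirely hypothetical data, not a representation of any operational system; real terrorist-financing detection relies on far richer features (counterparties, timing, network structure) and far more sophisticated models.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.5):
    """Return indices of amounts deviating more than `threshold`
    standard deviations from the mean. A toy baseline only:
    production systems use multivariate and graph-based methods."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]

# Hypothetical transaction amounts: routine small transfers plus one outlier.
txns = [120, 95, 110, 130, 105, 98, 25000, 115, 102, 99]
print(flag_anomalies(txns))  # flags the 25000 transfer (index 6)
```

Even this trivial example illustrates why data matters: the rule is only as good as the baseline it learns from, which is the article's broader point about the value of shared counterterrorism datasets.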

The European Union (EU) and its member states must address three key challenges to successfully integrate AI into their counterterrorism efforts. First, they must avoid overregulating the development and application of emerging technologies in counterterrorism. While the complexities of ethics and privacy cannot be dismissed, failing to leverage AI in counterterrorism is irresponsible and risks ceding the advantage to violent non-state actors. Second, innovative tools must be incorporated across the 27-member bloc. While national security remains the responsibility of each EU member state under the Treaty on European Union, combating terrorism requires robust partnerships in intelligence-sharing and collective resource pooling, including in the development and sharing of AI-enabled tools. Third, the EU must proactively forge strategic global partnerships to expand the data pool used to train AI systems for counterterrorism, thereby enhancing their accuracy and effectiveness.


AI integration into European counterterrorism efforts has been marked by both the pressing need to respond to a heightened European terror threat landscape and stringent oversight—particularly under the AI Act and the General Data Protection Regulation (GDPR). As early as 2017, Europol adopted Palantir Gotham, a software platform for operational analysis and decision-making with various AI integrations, in support of counterterrorism investigations and operations. In 2020, the EU Counter-Terrorism Agenda explicitly recognised AI as an impactful technology in counterterrorism efforts, from detecting objects such as abandoned luggage in footage of public spaces to identifying terrorist content on social media platforms. The 2020 Agenda emphasised the potential of predictive analytics and highlighted various EU-funded projects working on AI integration under the Horizon 2020 programme, the EU's research and innovation funding mechanism from 2014 to 2020.

Notably, the RED-Alert System received funding from the EU and was supported by Europol’s counterterrorism unit. The project leveraged natural language processing, social network analysis, and complex event processing to allow law enforcement agencies (LEAs) to disrupt terrorist recruitment, propaganda dissemination, and attack planning. According to the participating LEAs, the RED-Alert System worked significantly better than previous methods. PREVISION also received EU funding to develop capabilities for LEAs in processing and analysing large-scale, heterogeneous data streams, including social media and open web data, darknet and deep web data, surveillance data, traffic and mobility data, and financial transactions. Through predictive analytics, the platform warns LEAs of early indicators of radicalisation. While the project has concluded, the website alludes to the tool’s continued use by European LEAs. Other effective tools that were established under the Horizon 2020 umbrella included the TENSOR platform and the DANTE project.


Around the conclusion of Horizon 2020, Brussels’ focus on AI safety and negative coverage of AI-enabled tools for counterterrorism and pandemic management in Europe cast a shadow over AI integration into counterterrorism efforts. The AI Act, now in force as a comprehensive regulatory framework, prohibits certain AI applications deemed to pose an unacceptable risk. These include the development of facial recognition databases through untargeted scraping, biometric categorisation systems that deduce protected characteristics, and real-time biometric identification in public spaces. The Act makes some exceptions for law enforcement and national security. Law enforcement agencies can undertake real-time facial recognition for specific terrorist threats or targeted image scraping for criminal investigations, but only with prior judicial authorisation. Additionally, AI systems used exclusively for national security, defence, or military purposes do not fall under the AI Act. However, if such a system is also used for law enforcement and public security purposes, it must abide by the provisions of the Act. In short, while the AI Act is strict, various exceptions allow member states to integrate AI into counterterrorism efforts. Policymakers must actively push back against the inevitable over-regulatory pressures that often characterise Brussels’ approach to new technology when it comes to counterterrorism applications.

In December 2024, the Council of the EU published its conclusions on future priorities for strengthening joint counterterrorism efforts. One of the key points is the need for investment in innovation that supports counterterrorism, including AI tools, big data analytics, decryption technologies, biometric data analyses, and digital forensic tools. The EU has also started funding two international projects on responsible AI use by law enforcement, AI-POL and CT-Tech+. While these initiatives point towards some appreciation of the importance of AI in counterterrorism efforts, the EU must not fall into the trap of applying AI only to terrorist and illegal content removal. AI has significantly more potential to enhance counterterrorism efforts, and numerous LEAs across Europe recognise its value. Europol’s 2024 report on AI and policing lays out the ways AI can be integrated into policing, from facial recognition to real-time processing and anomaly detection. At the member-state level, multiple AI applications by local police, intelligence agencies, and other LEAs have already been implemented. EU-level coordination should be pursued to ensure the bloc shares its best practices and improves AI tools with datasets from different actors in the counterterrorism space.


In addition to a regulatory environment that fosters innovation in national security and public safety, and strong intra-European collaboration on the effective integration of AI in counterterrorism, data is the critical factor for success. The most effective AI-driven counterterrorism tools, systems, and strategies depend on robust datasets; the data that underpins these tools ultimately determines their functionality and effectiveness. One key pillar will be sharing data on radicalisation, terrorist activities, propaganda, and counterterrorism interventions among partners to strengthen AI-enabled counterterrorism solutions. Even amid growing tensions in transatlantic relations, counterterrorism should remain a cornerstone of collaboration, serving as a mutually beneficial endeavour. However, the EU should look beyond the Atlantic and governmental institutions to form strategic partnerships for data sharing for AI purposes. Civil society organisations focused on preventing violent extremism and on rehabilitation, as well as non-Western governments, often possess valuable datasets that should not be overlooked for these efforts.


Clara Broekaert is a Research Fellow at The Soufan Center and an Analyst at The Soufan Group.

The views expressed above belong to the author(s).