Militant Groups Turn to Artificial Intelligence, Raising Global Security Concerns
As governments, corporations, and researchers around the world race to harness the power of artificial intelligence (AI), militant and extremist organizations are also beginning to experiment with the technology—often without fully understanding its long-term implications. Security experts warn that even limited use of AI by such groups could significantly amplify their reach, influence, and operational capabilities.
According to national security analysts and intelligence agencies, AI has the potential to transform how extremist groups recruit new members, spread propaganda, and conduct cyber operations. Generative AI tools can quickly produce realistic images, videos, audio recordings, and written content, allowing militant organizations to create persuasive material at a scale and speed that was previously impossible.
Last month, a post on a pro–Islamic State (IS) online forum urged supporters to integrate AI into their activities. The anonymous user highlighted how easy the tools are to use and urged others to exploit the technology, pointing to intelligence agencies' own fears that it could enhance extremist recruitment. Such messaging reflects a growing awareness among extremist sympathizers of AI's disruptive potential.
The Islamic State, which once controlled large areas of Iraq and Syria but now operates as a decentralized network of affiliated groups, has a long history of leveraging digital platforms. Years ago, the group recognized social media as a powerful weapon for recruitment and disinformation. Experts say its interest in AI is a logical extension of that strategy.
For loosely organized militant groups with limited financial resources—or even individual actors with internet access—AI lowers the barrier to influence operations. With minimal cost, users can generate propaganda materials, fabricate deepfake images or videos, and translate messages into multiple languages, allowing extremist narratives to spread across borders and cultures.
“For any adversary, AI makes it much easier to do things,” said John Laliberte, a former National Security Agency vulnerability researcher and now CEO of cybersecurity firm ClearVector. “Even small groups without significant funding can have an outsized impact if they use AI effectively.”
Researchers note that extremist groups began experimenting with AI soon after widely available tools such as ChatGPT were released to the public. Since then, the sophistication of AI-generated content linked to militant causes has steadily increased.
When amplified by social media algorithms, such content can distort public perception, inflame emotions, and polarize societies. During the Israel–Hamas war, for example, fake AI-generated images depicting injured or abandoned infants circulated widely online. These images provoked outrage and confusion, while obscuring verified information about events on the ground. Militant groups in the Middle East, as well as antisemitic hate groups in Western countries, used the content to radicalize and recruit.
A similar pattern emerged after a deadly attack at a concert venue in Russia last year that killed nearly 140 people and was claimed by an Islamic State affiliate. In the aftermath, AI-produced propaganda videos spread rapidly across online forums and social media platforms, calling for new recruits and glorifying violence.
According to researchers at SITE Intelligence Group, which monitors extremist activity worldwide, Islamic State-linked networks have also used AI to create deepfake audio recordings of their leaders reciting religious texts. The technology has additionally been used to rapidly translate messages into multiple languages, enabling faster global dissemination.
Security agencies warn that while AI itself is a neutral technology, its misuse by extremist actors presents a growing challenge. Countering this threat, experts say, will require cooperation between governments, technology companies, and civil society to detect manipulated content, disrupt online recruitment, and strengthen digital literacy among the public.