According to Fast Company, extremist and militant groups began experimenting with generative AI tools like ChatGPT as soon as they became publicly available in late 2022. They are now creating realistic fake photos and videos, such as fabricated images from the Israel-Hamas war that began in 2023 and AI-crafted propaganda videos after the 2024 concert hall attack in Moscow. Groups like ISIS have also used AI to produce deepfake audio of their leaders and to translate messages quickly. Former CIA agent and Darktrace Federal CEO Marcus Fowler says these groups view advanced AI use as “aspirational” but are actively learning. The Department of Homeland Security’s latest threat assessment even warns AI could help such groups produce biological or chemical weapons.
Aspirational But Accelerating
Here’s the thing: when experts say “aspirational,” it doesn’t mean harmless. It means they’re trying. And they’re learning fast. The report notes that ISIS and al-Qaida have held actual training workshops for their supporters on how to use these AI tools. That’s not just dabbling; that’s building institutional knowledge. They’re treating AI like they treated social media a decade ago—as a new weapon to master for recruitment and intimidation. The scale is what’s terrifying. A single person can now generate a flood of convincing, polarizing content that would have required a whole propaganda cell just a few years ago.
The Next Phase of Threats
So the propaganda is bad enough. But the future risks outlined are where it gets really dark. Using AI to help write malicious code or automate cyberattacks is almost a given; that's just efficiency for hackers. The real game-changer is the potential to bridge knowledge gaps. Think about it: what if a group with violent intentions but no PhDs in chemistry could use a language model to walk them through the complex process of creating a harmful substance? DHS isn't speculating for fun; it put that scenario in its official threat assessment, which means the intelligence community is taking the possibility seriously.
Can Policy Keep Up?
Lawmakers are, predictably, scrambling. There's talk of legislation, like a bill from Rep. August Pfluger, which passed the House and would require annual AI threat assessments. Sen. Mark Warner wants to make it easier for AI companies to share data on misuse. But let's be honest: this feels reactive. The tech is out in the wild, cheap and powerful. The groups are experimenting now, and policy moves at a glacial pace compared to software updates. Fowler's analogy is perfect: ISIS got on Twitter early and exploited it for years. They are, as he says, “always looking for the next thing to add to their arsenal.” AI is that next thing. And defending against it requires a fundamental shift: not just new laws, but new ways of thinking about digital evidence, source verification, and the very nature of online information. It's a race where the bad guys might already have a head start.
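To make "source verification" slightly more concrete, here is a minimal sketch of one defensive building block: checking a downloaded clip against a digest its original publisher lists alongside the release. Everything here is illustrative; the `PUBLISHED_DIGESTS` registry, the filename, and the digest value are hypothetical placeholders, and real provenance systems (C2PA-style signed metadata, for instance) go far beyond a hash lookup.

```python
# Minimal sketch: verify that a media file matches a digest the original
# publisher lists alongside its release. Registry, filename, and digest
# below are hypothetical placeholders for illustration only.
import hashlib
from pathlib import Path

# Hypothetical example of digests a newsroom might publish with its footage.
PUBLISHED_DIGESTS = {
    "statement_video.mp4": "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_published_source(path: Path) -> bool:
    """True only if the file's digest matches the publisher's listed digest."""
    expected = PUBLISHED_DIGESTS.get(path.name)
    return expected is not None and sha256_of(path) == expected


if __name__ == "__main__":
    clip = Path("statement_video.mp4")
    if clip.exists():
        print("verified" if matches_published_source(clip) else "unverified or altered")
```

A hash match only proves the file is the one the publisher released; it says nothing about whether the content itself is true, which is exactly why the deeper shift in how we treat online information matters more than any single tool.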
