AI swarms could hijack democracy—without anyone noticing
January 29, 2026
They don’t march in the streets or storm the polls, but a new breed of AI-controlled personas could be the next big threat to democracy.
In a new article in Science, a team of researchers warns of a next generation of AI-driven personas that can coordinate and adapt in real time to infiltrate online groups and influence public opinion.
The authors of the paper, including UBC computer scientist Dr. Kevin Leyton-Brown, describe how the combination of large language models and coordinated AI agents can allow malicious AI swarms to imitate human social dynamics.
“The danger isn’t only false content — it’s synthetic consensus: the illusion that everyone agrees, engineered at scale,” says Dr. Daniel Thilo Schroeder, research scientist at SINTEF and first author of the paper. “Instead of repeating a script, swarms iterate. They probe audiences with many variants, measure responses and amplify the winners.”
Swarms of AI personas mimic humans so well they can infiltrate online communities, shape conversations, and tilt elections—all at machine speed. Unlike old-school botnets, these agents coordinate in real time, adapt to feedback, and sustain coherent narratives across thousands of accounts.
“We shouldn’t imagine that society will remain unchanged as these systems emerge,” says Dr. Leyton-Brown. “A likely result is decreased trust in unknown voices on social media. This might sound like a good thing, but it could empower celebrities and make it harder for grassroots messages to break through.”
Advances in large language models and multi-agent systems allow a single operator to deploy thousands of AI “voices” that look authentic and talk like locals. They can run millions of micro-tests to find the most persuasive messages, creating a synthetic consensus that feels grassroots-driven but is engineered to manipulate democratic discourse.
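The probe-and-amplify loop described above resembles a classic bandit-style experiment: try many message variants, measure which ones draw engagement, then push the winners. Here is a minimal sketch of that dynamic; the variant names, engagement rates, and epsilon-greedy strategy are illustrative assumptions, not details from the paper:

```python
import random

def amplify_winners(variants, engage, rounds=5000, eps=0.1, seed=0):
    """Epsilon-greedy message testing: mostly push the variant with the
    best observed engagement rate, occasionally probe a random one."""
    rng = random.Random(seed)
    counts = {v: 0 for v in variants}
    totals = {v: 0.0 for v in variants}

    def rate(v):
        return totals[v] / counts[v] if counts[v] else 0.0

    for _ in range(rounds):
        if rng.random() < eps:
            v = rng.choice(variants)          # probe: test a random variant
        else:
            v = max(variants, key=rate)       # amplify: push the current winner
        counts[v] += 1
        totals[v] += engage(v, rng)           # measure one audience response
    return max(variants, key=rate)

# Hypothetical per-variant engagement probabilities (invented numbers).
RATES = {"fear": 0.02, "humor": 0.05, "outrage": 0.08}

def engage(variant, rng):
    return 1.0 if rng.random() < RATES[variant] else 0.0

winner = amplify_winners(list(RATES), engage)
```

Even this toy loop converges on the most engaging framing after a few thousand trials, which is the core of the "millions of micro-tests" concern: the optimization requires no understanding of the message, only a feedback signal.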
Full-scale AI swarms remain theoretical, but early warning signs include AI-generated deepfakes and fabricated news outlets that influenced recent election debates in the U.S., Taiwan, Indonesia and India, says Dr. Leyton-Brown.
Monitoring groups also report pro-Kremlin networks flooding the web with content intended to poison future AI training data. Researchers say the next election could be the proving ground for this technology.
The paper outlines several key ways to safeguard against AI swarms, such as monitoring coordination patterns in real time, requiring stronger account verification and publishing incident reports.
The authors also propose the use of agent-based simulations, which are computational models that simulate and give insight into how autonomous agents interact. Lastly, the authors outline policy changes that may be helpful, such as reducing monetization of inauthentic engagement and increasing accountability.
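Agent-based simulation means modeling each actor individually and letting population-level effects emerge from their interactions. The following toy sketch (my own illustration, not the authors' model) shows the general idea with a DeGroot-style opinion dynamic: ordinary agents partially adopt the opinion of a random voice they hear, while a few coordinated "swarm" agents hold a fixed position and drag the population mean:

```python
import random

def simulate(n_humans=50, n_swarm=5, steps=200, push=1.0, seed=0):
    """Toy opinion dynamics: each step, every human hears one random
    voice (human or swarm) and moves 10% toward it. Swarm agents are
    fixed at `push` and never update. Returns the final human mean."""
    rng = random.Random(seed)
    humans = [rng.uniform(-1, 1) for _ in range(n_humans)]
    swarm = [push] * n_swarm
    for _ in range(steps):
        everyone = humans + swarm
        for i in range(n_humans):
            peer = rng.choice(everyone)            # hear one random voice
            humans[i] = 0.9 * humans[i] + 0.1 * peer  # partial adjustment
    return sum(humans) / n_humans

baseline = simulate(n_swarm=0)   # no swarm: opinions stay near their start
swayed = simulate(n_swarm=5)     # 5 fixed agents pull the mean toward +1
```

Simulations like this let researchers ask "how many coordinated agents shift consensus, and how fast?" before the capability is observed in the wild; here, just five unwavering agents among fifty noticeably move the average.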
“We’re not predicting outcomes, but the capability curve is clear: coordinated AI systems lower the cost of influence and raise the stakes for democracy,” says Dr. Jonas R. Kunst, Professor of Communication at BI Norwegian Business School and last author of the paper.
We honour xʷməθkʷəy̓əm (Musqueam) on whose ancestral, unceded territory UBC Vancouver is situated. UBC Science is committed to building meaningful relationships with Indigenous peoples so we can advance Reconciliation and ensure traditional ways of knowing enrich our teaching and research.
Learn more: Musqueam First Nation