AI-Driven Disinformation Swarms Threaten Democracy

Source: wired.com

Published on January 23, 2026

Updated on January 23, 2026

The Rise of AI-Powered Disinformation Swarms

A new study published in Science warns of an alarming shift in disinformation tactics, as AI technology enables the creation of "swarms" of social media accounts that can operate autonomously and evolve in real time. Unlike traditional disinformation operations, which relied on human teams to manually craft and spread content, these AI swarms can simulate believable human behavior, coordinate to achieve shared objectives, and adapt to evade detection. This advance raises significant concerns about large-scale manipulation of public opinion and the erosion of democratic processes.

The study, authored by 22 experts from fields including computer science, cybersecurity, and psychology, highlights the unprecedented scale and sophistication of AI-driven disinformation. These swarms, capable of maintaining persistent identities and memory, could mimic human social dynamics to influence beliefs and behaviors across entire populations. The authors argue that without proactive measures, such technology could sway elections and undermine the foundations of democracy.

From Human Troll Farms to Autonomous AI Agents

The evolution of disinformation tactics over the past decade has been stark. In 2016, the Internet Research Agency (IRA), a Russian troll farm, employed hundreds of workers to manually post content on social media platforms, aiming to influence the U.S. presidential election. While the IRA garnered significant media attention, its impact was limited compared to other Russia-linked campaigns, such as the hack and leak of emails from Hillary Clinton's campaign.

In contrast, AI-powered disinformation swarms require minimal human oversight and can operate at a scale far beyond traditional methods. These swarms can craft unique, human-like posts, evolve their strategies in real time, and target specific communities with tailored messaging. The ability to map social networks at scale allows these agents to maximize their impact, adapting their messages to the cultural and belief systems of different groups.

"We are moving into a new phase of informational warfare on social media platforms where technological advancements have made the classic bot approach outdated," said Jonas Kunst, a professor of communication at BI Norwegian Business School and co-author of the report. "These AI swarms can position for maximum impact and tailor messages to the beliefs and cultural cues of each community, enabling more precise targeting than ever before."

The Threat to Democracy and Global Stability

The implications of AI-driven disinformation extend beyond election interference. The study warns that these tools could be used to manipulate public opinion on a wide range of issues, from climate change to geopolitical conflicts. The ability to shape societal views at scale poses a direct threat to democratic institutions, which rely on informed public discourse.

"Advances in artificial intelligence offer the prospect of manipulating beliefs and behaviors on a population-wide level," the report states. "By adaptively mimicking human social dynamics, they threaten democracy."

Experts also highlight the potential for these systems to self-improve, using feedback from their interactions to refine their strategies. This adaptability makes them even more difficult to detect and counter, as they can quickly evolve to circumvent defenses.

"With sufficient signals, these AI swarms may run millions of micro A/B tests, propagate the winning variants at machine speed, and iterate far faster than humans," the researchers write. "This creates an arms race between disinformation actors and those trying to defend against them."
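The "A/B test, then propagate the winning variant" loop the researchers describe can be sketched as a simple bandit-style selection process. The following toy is not from the study: the message variants, their engagement rates, and the exploration rate are all invented for illustration.

```python
import random

random.seed(0)

# Hypothetical message variants with hidden "true" engagement rates
# (made-up numbers, purely for illustration).
variants = {"A": 0.02, "B": 0.05, "C": 0.03}
counts = {v: 0 for v in variants}  # how often each variant was posted
wins = {v: 0 for v in variants}    # how often it drew engagement

EPSILON = 0.1  # fraction of trials spent exploring other variants

def observed_rate(v):
    return wins[v] / counts[v] if counts[v] else 0.0

def pick_variant():
    """Mostly exploit the best observed variant; occasionally explore."""
    if random.random() < EPSILON or not any(counts.values()):
        return random.choice(list(variants))
    return max(variants, key=observed_rate)

for _ in range(100_000):  # each trial stands in for one posted message
    v = pick_variant()
    counts[v] += 1
    if random.random() < variants[v]:  # simulated engagement event
        wins[v] += 1

best = max(variants, key=observed_rate)
print("winning variant:", best)  # converges on the highest-rate variant
```

The point of the sketch is the asymmetry the researchers highlight: the loop needs only engagement signals to converge on the most effective message, and it iterates at machine speed, while defenders must reverse-engineer the behavior after the fact.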

The study calls for the establishment of an "AI Influence Observatory," a collaborative effort between academic groups and nongovernmental organizations to monitor and counter the threat posed by AI swarms. However, the researchers note that social media platforms, which prioritize engagement over everything else, have little incentive to participate in such efforts.

"Let's say AI swarms become so frequent that you can't trust anybody and people leave the platform," says Kunst. "Of course, then it threatens the model. If they just increase engagement, for a platform it's better to not reveal this, because it seems like there's more engagement, more ads being seen, that would be positive for the valuation of a certain company."