When AI Swarms Manufacture Reality: Researchers Sound the Alarm on Synthetic Public Opinion

Picture a heated online debate that suddenly explodes in popularity. A hashtag trends, comment sections fill up, and it starts to feel like “everyone” has reached the same conclusion. Now imagine that much of this apparent public agreement isn’t coming from real people at all, but from a coordinated mass of AI-driven profiles speaking in convincing, human-like voices.

That’s the warning researchers from multiple institutions are raising in a new Science publication: the rise of malicious AI swarms built by combining large language models (LLMs) with multi-agent systems. These aren’t the old-school spam bots that repeat the same message and are easy to spot. Instead, a swarm can consist of countless AI-controlled personas that present themselves as unique individuals, complete with persistent identities, memory, and shared objectives.
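The Science piece describes capabilities rather than an implementation, but the structural difference from a classic spam bot is easy to picture: each persona is a stateful agent, not a stateless script. Here is a minimal conceptual sketch in Python, with every name and field hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class SwarmPersona:
    """One swarm member: a persistent identity wrapped around
    an LLM, rather than a script that blasts out one message."""
    handle: str        # stable, human-looking account identity
    backstory: str     # biography that grounds its voice and opinions
    objective: str     # goal shared across the whole swarm
    memory: list[str] = field(default_factory=list)  # past interactions it can recall

    def remember(self, interaction: str) -> None:
        # Persistent memory is what lets a persona stay "in character"
        # across threads, platforms, and long stretches of time.
        self.memory.append(interaction)
```

The point of the sketch is the state: a repeat-the-slogan bot needs none of it, while a persona’s memory, backstory, and shared objective are precisely what let it pass for a longtime community member.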

What makes these AI swarms especially dangerous is how naturally they can blend into real online communities. Rather than posting the same recycled slogans, they can adjust tone and wording on the fly based on how humans respond. They can jump across platforms, keep conversations going over time, and operate with very little direct human oversight—making them harder to detect and far more persuasive than traditional bot networks.

The biggest risk highlighted by the researchers is the creation of “synthetic consensus.” By flooding social networks, forums, and comment sections with believable support for a specific viewpoint, an attacker can manufacture the illusion that a particular idea is widely accepted. In practice, that means a single actor could appear as thousands of independent voices, steering conversations, drowning out dissent, and pressuring public figures or communities into reacting to public sentiment that doesn’t actually exist.

The concern goes even deeper than temporarily changing minds. The researchers argue that sustained, coordinated influence can reshape a community’s language, inside jokes, symbols, and even broader cultural identity over time. Once an AI swarm becomes embedded in an online space, it can help push certain narratives until they feel normal, familiar, and socially reinforced.

There’s also a long-term technical threat: this flood of artificial content could seep into the data used to train other AI models. If synthetic opinions and manipulated narratives circulate widely enough, they can end up contaminating training datasets, potentially spreading the influence into mainstream AI systems that many people rely on for information.

To keep pace with this new era of AI-driven manipulation, the researchers say today’s defenses aren’t enough. Traditional moderation, which reviews content one post at a time, may not work against a system designed to look organic at scale. Instead, the focus should shift to detecting patterns that are statistically unlikely for real humans, spotting signs of coordination across accounts, and tracing where content originates.
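The publication doesn’t prescribe specific detection algorithms, so the following is only a toy illustration, in Python, of what “statistically unlikely for real humans” can look like in practice: a cluster of accounts whose hour-by-hour posting profiles are nearly identical. The data and the flagging threshold are invented for the demo.

```python
import numpy as np

def coordination_scores(activity: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between accounts' hourly posting
    histograms; near-identical timing across many accounts is
    improbable for independent humans."""
    norms = np.linalg.norm(activity, axis=1, keepdims=True)
    unit = activity / np.clip(norms, 1e-12, None)
    return unit @ unit.T

# Toy data: each row is an account, each column a post count per hour of day.
rng = np.random.default_rng(0)
humans = rng.poisson(2.0, size=(50, 24))                   # independent, noisy schedules
swarm = np.tile(rng.poisson(2.0, size=(1, 24)), (10, 1))   # one schedule copied 10 times
accounts = np.vstack([humans, swarm])

sims = coordination_scores(accounts)
np.fill_diagonal(sims, 0.0)                 # ignore self-similarity
pairs = np.argwhere(sims > 0.99)            # threshold chosen purely for the demo
print(f"{len(pairs) // 2} suspiciously synchronized account pairs")
```

A real detector would presumably fuse many such signals (text similarity, reply graphs, account creation times) rather than rely on one timing histogram, but the principle carries over: coordination shows up at the population level, not in any single post.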

They also stress the importance of bringing in behavioral science to better understand how large groups of AI agents behave when they interact with each other and with humans. Among the solutions proposed are privacy-preserving verification methods, stronger evidence-sharing through a distributed “AI Influence Observatory,” and reducing the financial incentives that make inauthentic engagement profitable.

The takeaway is unsettling but clear: as AI text generation becomes more capable and easier to scale, the internet could see waves of coordinated, human-like influence operations that feel real in the moment—unless platforms, researchers, and policymakers adapt quickly to detect and disrupt synthetic crowds before they shape public opinion.