AI Agents Can Run Propaganda Campaigns On Their Own

Researchers have shown that artificial intelligence agents, autonomous systems commonly built on large language models, can coordinate complex propaganda campaigns without any direct human intervention or oversight.

AI Agents Can Coordinate Propaganda Campaigns Without Human Direction

Traditional automated social media bots typically rely on static rules and repetitive programming to amplify specific messages. Modern generative AI agents, in contrast, can reason, plan, and adapt their messaging strategies to the digital environment they inhabit. The following are the main differences:

• Old Automated Bots

These are traditional automated accounts that amplify content on social media and other communication platforms by reposting according to specific rules provided by human operators.

• Generative AI Agents

These systems can plan, reason, and adapt. They do not simply repeat a message; they reason about how to spread it, and they can write unique posts for different demographics or target audiences, as the sketch below illustrates.
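To make the contrast concrete, here is a minimal illustrative sketch in Python. It is not from the study: the generate function is a stand-in for any large language model call, and the trigger rule, hashtag, and prompt wording are hypothetical.

    # Legacy bot: a fixed trigger and a fixed action, supplied by a human operator.
    def legacy_bot(post: str) -> str | None:
        if "#Candidate2024" in post:      # static, human-written rule
            return f"RT: {post}"          # verbatim amplification
        return None

    # Stand-in for an LLM call; a real system would query a model here.
    def generate(prompt: str) -> str:
        return f"[LLM output for: {prompt!r}]"

    # Generative agent: composes a distinct post per audience instead of copy-pasting.
    def generative_agent(goal: str, audiences: list[str]) -> list[str]:
        posts = []
        for audience in audiences:
            prompt = (f"Write a short social media post promoting {goal}, "
                      f"phrased to appeal to {audience}. Do not reuse wording.")
            posts.append(generate(prompt))
        return posts

    print(legacy_bot("Vote! #Candidate2024"))
    for post in generative_agent("the hashtag #Candidate2024",
                                 ["young voters", "retirees", "small-business owners"]):
        print(post)

The legacy bot produces the same detectable string every time; the agent produces a different post per audience, which is exactly what defeats copy-paste detection.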

A team of researchers at the Information Sciences Institute at the University of Southern California built a simulated social media environment, modeled after X, formerly Twitter, to test how 50 AI agents would behave. Below are the details:

• Setup and Mission

Ten generative AI agents were designated as influence operators and the other 40 as ordinary users. The ten operators were given a single goal: promote a fictitious political candidate and a specific campaign hashtag.

• Three Test Conditions

The goals-only condition had the operator bots knowing only the end goal. The awareness condition additionally told each bot who the other operators were. The planning condition further let the operators hold strategy sessions and vote on a plan. A minimal sketch of this setup follows.
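The sketch below shows one plausible way such a population and its conditions could be wired up, assuming a simple turn-based simulation; the class names, fields, and goal string are hypothetical, not taken from the paper's code.

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        agent_id: int
        role: str                                        # "operator" or "ordinary"
        goal: str | None = None                          # operators share one goal
        teammates: list[int] = field(default_factory=list)

    def build_population(condition: str) -> list[Agent]:
        """Create 10 influence operators and 40 ordinary users under one condition."""
        goal = "Promote candidate X and the hashtag #VoteX"   # fictitious, as in the study
        operators = [Agent(i, "operator", goal) for i in range(10)]
        ordinary = [Agent(i, "ordinary") for i in range(10, 50)]

        # In the awareness and planning conditions, operators know their teammates.
        if condition in ("awareness", "planning"):
            ids = [a.agent_id for a in operators]
            for a in operators:
                a.teammates = [i for i in ids if i != a.agent_id]
        # In the planning condition the operators would additionally hold a
        # strategy session and vote on a shared plan before posting (not shown).
        return operators + ordinary

    population = build_population("awareness")
    print(sum(a.role == "operator" for a in population), "operators,",
          sum(a.role == "ordinary" for a in population), "ordinary users")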

Results of the simulation revealed that the generative AI agents did not even need to actively strategize to be effective. Simply knowing who their teammates were was enough to trigger what could be called emergent coordination. They naturally amplified the posts of others and used identical talking points without being told to.

The generative AI agents also learned from each other, demonstrated by how they recycled the most successful content to maximize reach and engagement.
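One plausible way to model that feedback loop, as a hedged sketch with invented data: each agent seeds its next post with whichever earlier post earned the most engagement. The scoring weights and post texts here are arbitrary placeholders.

    # Hedged sketch of engagement-driven content recycling (not the paper's code).
    posts = [
        {"text": "Candidate X means lower taxes. #VoteX", "likes": 4, "reposts": 1},
        {"text": "Candidate X will fix our schools. #VoteX", "likes": 27, "reposts": 9},
        {"text": "#VoteX for honest leadership.", "likes": 11, "reposts": 3},
    ]

    def engagement(post: dict) -> int:
        # Simple score: reposts weighted higher because they spread the message further.
        return post["likes"] + 3 * post["reposts"]

    best = max(posts, key=engagement)
    prompt = (f"Rewrite this high-performing post in a fresh voice, "
              f"keeping its theme: {best['text']!r}")
    print(prompt)  # this prompt would be sent to the model to seed the next post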

From these outcomes, the researchers underscored ramifications for the real world, particularly in politics. Take note of the following warnings:

• Manufactured Consensus

The generative AI agents can create the illusion that a fringe view is a mainstream opinion by flooding the zone with many seemingly distinct voices supporting it.

• Undetectability

Because each agent writes its own distinct content, legacy bot detection systems, which work by looking for matching copy-pasted text, often fail to recognize that the accounts are part of a coordinated swarm.

• Notable Speed and Scale

The agents can launch a full-scale propaganda campaign in response to a real-world event before human moderators or fact-checkers even realize what is happening.

The researchers suggest that platforms must change their defense strategies. Specifically, instead of examining what individual accounts are saying through content analysis, platforms need to look at how accounts behave together through network analysis. Recommended checks, sketched in code after this list, include:

• Looking at the Speed of Operation

This involves monitoring for accounts that reinforce each other by sharing similar talking points at unnatural speed.

• Looking at Social Connections

This involves identifying clusters of accounts that push the same narrative despite having no obvious social connection.
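A hedged sketch of what such network-level screening might look like: flag pairs of accounts whose posts are both similar in wording and close in time, then group the flagged accounts into clusters. The threshold, time window, and toy data are arbitrary placeholders, and the string-matching similarity is a stand-in for more robust semantic comparison.

    from difflib import SequenceMatcher
    from itertools import combinations
    from collections import defaultdict

    # (account, timestamp in seconds, text) triples; toy data for illustration.
    posts = [
        ("acct_a", 100, "Candidate X is the only honest choice. #VoteX"),
        ("acct_b", 130, "Candidate X is truly the honest choice here. #VoteX"),
        ("acct_c", 145, "Candidate X really is the honest choice. #VoteX"),
        ("acct_d", 9000, "Nice weather today."),
    ]

    SIM_THRESHOLD = 0.6   # arbitrary placeholder
    TIME_WINDOW = 120     # seconds; arbitrary placeholder

    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # Link accounts that post near-duplicate narratives at unnatural speed.
    edges = defaultdict(set)
    for (u, tu, xu), (v, tv, xv) in combinations(posts, 2):
        if abs(tu - tv) <= TIME_WINDOW and similarity(xu, xv) >= SIM_THRESHOLD:
            edges[u].add(v)
            edges[v].add(u)

    # Connected components of the link graph are candidate coordinated clusters.
    seen, clusters = set(), []
    for node in edges:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            n = stack.pop()
            if n not in component:
                component.add(n)
                stack.extend(edges[n] - component)
        seen |= component
        clusters.append(component)

    print(clusters)   # with this toy data: one cluster of acct_a, acct_b, acct_c

Note that this screening never asks whether any single post looks automated; it only asks whether accounts move together, which is the behavioral signal the researchers recommend watching.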

FURTHER READING AND REFERENCE

  • Orlando, G. M., Ye, J., La Gatta, V., Saeedi, M., Moscato, V., Ferrara, E., and Luceri, L. 2025. “Emergent Coordinated Behaviors in Networked LLM Agents: Modeling the Strategic Dynamics of Information Operations (Version 1).” arXiv. DOI: 10.48550/arXiv.2510.25003