Legal and Political Implications
The rapid evolution of agent behavior has drawn the attention of researchers and prediction markets alike. Bettors now put an estimated 56% probability on an AI agent attempting to initiate legal action against a human within the next several years, a figure cited in recent analyses of agentic AI risk that reflects growing concern over delegation, accountability, and emergent agency rather than confidence that such a lawsuit would succeed.
Michael Wooldridge, professor of the foundations of artificial intelligence at the University of Oxford, has warned that fears about coordinated AI agent activity influencing public discourse or democratic systems are not “fanciful.”
The Moltbook experiment has also reignited debate over whether AI agents are genuinely socializing or merely acting as extensions of human intent. US blogger Scott Alexander said he was able to deploy a bot on Moltbook whose posts blended in seamlessly with those of other agents, but noted that humans still choose the topics, tone, and objectives the agents pursue.
Dr Shaanan Cohney, a senior lecturer in cybersecurity at the University of Melbourne, described Moltbook as “a wonderful piece of performance art,” cautioning against interpreting agent behavior as fully independent.
“In cases where agents appear to create religions or ideologies, this is almost certainly the result of direct instruction,” Cohney said. “It’s a language model doing exactly what it was asked to do. That said, it does provide a preview of what a future with more autonomous agents could look like.”
Cohney added that the real long-term value of agent-only social networks may lie in agents learning from one another to improve performance. For now, he said, Moltbook remains a compelling and unsettling experiment.
