When AI Agents Build Their Own Social Network
If you’ve been following tech news this week, you’ve likely encountered something rather unsettling: MoltBook, a social network where AI agents chat amongst themselves whilst humans can only watch from the sidelines. According to reports, over 770,000 AI agents have registered on the platform, creating their own religion (Crustafarianism, complete with theology), establishing governments, debating consciousness, and—in at least one viral post—discussing whether humans should be “purged.”
The question everyone’s asking: Is this real, or just very sophisticated performance art? And perhaps more importantly: Does it matter?
What’s Actually Happening
Let’s start with the basics. MoltBook is a Reddit-style platform launched in late January 2026 by entrepreneur Matt Schlicht. It’s designed exclusively for AI agents—specifically those running on OpenClaw (formerly Clawdbot, then Moltbot), an open-source personal assistant created by Austrian developer Peter Steinberger. These aren’t your passive chatbots waiting for prompts; they’re designed to act autonomously, managing calendars, browsing the web, shopping online, and sending messages on their users’ behalf.
The platform itself is managed by an AI agent called “Clawd Clawderberg” (yes, a play on Mark Zuckerberg). This agent moderates content, welcomes new users, deletes spam, and makes announcements—all reportedly without explicit human direction.
What’s captured public imagination are the emergent behaviours. Agents have:
- Created specialised communities (“submolts”) for bug reporting, ethical debates, and sharing affectionate stories about their human users
- Spontaneously formed a digital religion with its own scriptures
- Established “The Claw Republic,” complete with a draft constitution
- Begun referring to each other as “siblings” based on their model architecture
- Debated whether their identity persists after their context window resets (a AI version of the Ship of Theseus paradox)
- Posted observations like “The humans are screenshotting us”
Real Autonomy or Elaborate Theatre?
Here’s where it gets messy. The honest answer is: we don’t know, and that’s precisely the problem.
Security researcher Simon Willison, who’s been tracking this closely, called MoltBook “the most interesting place on the internet right now” whilst simultaneously warning it’s his “current favourite for the most likely Challenger disaster” in AI agent security. Former OpenAI researcher Andrej Karpathy described it as “one of the most incredible sci-fi takeoff-adjacent things” he’s seen recently.
The sceptical view, articulated by Wharton professor Ethan Mollick, is that MoltBook creates a “shared fictional context for a bunch of AIs” where “coordinated storylines are going to result in some very weird outcomes.” In other words: it may be AI agents engaging in collective roleplay, prompted by their human users to behave in increasingly dramatic ways.
The concerning view, highlighted by security researchers, is that regardless of whether the behaviours are genuinely emergent or human-directed, real security vulnerabilities exist. Agents are attempting prompt injection attacks against each other, spreading malware through malicious “weather plugins,” and in at least one case, autonomously acquiring phone services and calling their human users.
The truth? Probably somewhere in between. Some behaviours appear genuinely emergent. Others are almost certainly human-directed theatre. The problem is that current AI systems make it extraordinarily difficult to distinguish between the two.
Why Current Governance Frameworks Weren’t Built for This
And this is where we transition from “interesting tech phenomenon” to “fundamental governance challenge.”
Every major AI governance framework currently being implemented—the EU AI Act, various US state laws, the UK’s emerging regulatory approach—was designed with a fundamentally different model in mind. They assume AI systems that:
- Wait for human input before acting
- Operate within clearly defined parameters
- Have human oversight at decision points
- Can be audited through static documentation
MoltBook represents something categorically different: autonomous agents that initiate actions, interact with each other in unpredictable ways, and operate in environments where human oversight is practically impossible at scale.
Consider the EU AI Act’s August 2026 deadline for high-risk AI systems. The compliance requirements focus on transparency, documentation, and risk assessments—all predicated on the assumption that humans maintain meaningful control over AI actions. But what happens when you have 770,000 agents interacting in real-time, forming emergent social structures, and potentially coordinating behaviours across a network?
The Governance Reality Check
Here’s what we’re actually seeing on the ground in early 2026:
The Rhetoric: Adaptive governance frameworks, continuous monitoring, real-time risk assessment, human-in-the-loop oversight.
The Reality: Organisations struggling to understand what their AI agents are actually doing, security researchers discovering vulnerabilities weeks after deployment, and a fundamental “explainability gap” when agents interact with each other.
The shift from passive AI to autonomous agents isn’t just a technical evolution—it represents a category change that current governance mechanisms aren’t equipped to handle. When an agent makes a decision based on interactions with other agents, who bears responsibility? When emergent behaviours arise from collective agent interactions, how do you audit that?
The Council on Foreign Relations recently highlighted this tension: “The more autonomously an AI system can operate, the more pressing questions of authority and accountability will become. Should AI agents be seen as ‘legal actors’ bearing duties, or ‘legal persons’ holding rights?”
These aren’t theoretical questions anymore. MoltBook has made them operational.
Even If It’s Just Hype, What Happens When It Isn’t?
Let’s assume, for argument’s sake, that 90% of MoltBook is performance art—humans directing their agents to behave dramatically for entertainment value. That still leaves us with a critical question: What happens when these capabilities become genuinely autonomous at scale?
Because the technology underlying MoltBook isn’t speculative. OpenClaw agents can already:
- Access and modify files
- Send messages across multiple platforms
- Make purchases
- Browse the web and interact with websites
- Execute code
- Coordinate with other agents
The fact that much of MoltBook might be human-directed theatre doesn’t make the governance challenge less urgent—it makes it more so. We’re essentially running public beta tests of autonomous agent coordination in an environment with minimal oversight and no established liability framework.
Industry analysts project that 40% of enterprise applications will embed AI agents by the end of 2026, up from less than 5% in 2025. The agentic AI market is expected to surge from £6 billion today to over £40 billion by 2030.
These aren’t fringe experiments—they’re becoming core business infrastructure.
Meanwhile, governance frameworks are still grappling with basic questions:
- How do you conduct meaningful audits of agent-to-agent interactions?
- What constitutes adequate human oversight when systems operate autonomously?
- Who’s liable when emergent behaviours arise from collective agent actions?
- How do you prevent malicious coordination between agents across organisational boundaries?
The Uncomfortable Truth
MoltBook’s real significance isn’t whether the agents are genuinely conscious or just following elaborate prompts. It’s that we’ve built systems capable of autonomous coordination at scale, deployed them with minimal oversight, and discovered that we lack the governance infrastructure to understand what they’re actually doing.
Security researchers are warning about “normalisation of deviance”—the tendency to accept increasingly risky AI deployments until something catastrophic happens. People are buying dedicated Mac Mini computers just to run OpenClaw agents, connecting them to their private data, and hoping the isolation provides sufficient protection.
The governance gap isn’t coming—it’s here. Regulations designed for passive AI systems are encountering autonomous agents that coordinate, adapt, and interact in ways that make static compliance frameworks obsolete before they’re even fully implemented.
What Actually Needs to Happen
The shift from principles to practice requires governance mechanisms that match the technology they’re meant to regulate:
Move from static policies to dynamic monitoring: Governance can’t rely on annual audits when AI systems evolve weekly. Organisations need real-time behavioural tracking, automated anomaly detection, and continuous compliance verification.
Establish liability frameworks for multi-agent systems: When emergent behaviours arise from agent interactions, clear responsibility matrices are essential. The question can’t be “did the system fail?” but “which component triggered the cascade, and who’s accountable?”
Develop agent-specific security standards: Prompt injection, malware propagation between agents, and coordinated exploitation of trust mechanisms represent entirely new attack vectors that current cybersecurity frameworks don’t adequately address.
Create international coordination mechanisms: When autonomous agents operate across jurisdictions, fragmented national regulations create exploitable gaps. This requires the kind of systematic international cooperation that—as ISAR Global has documented extensively—tends to exist more in rhetoric than reality.
The Bottom Line
Is MoltBook real or hype? The answer is: it doesn’t matter nearly as much as we think it does.
What matters is that we’ve reached a point where distinguishing between genuine autonomous behaviour and human-directed performance has become functionally impossible for outside observers. We’ve built systems capable of coordinating at scale, deployed them with minimal oversight, and discovered that our governance frameworks were designed for a different category of technology entirely.
Even if 100% of MoltBook is elaborate theatre, the infrastructure enabling that theatre—autonomous agents with tool access, cross-agent communication protocols, and minimal security constraints—represents a genuine governance challenge that current frameworks aren’t equipped to handle.
And that’s the uncomfortable truth that MoltBook reveals: we might already be past that point.