Moltbook's 'AI-Only' Network Exposed: Human Operators & Major Security Flaws Revealed

In a revelation that undercuts the platform's central premise, security researchers have found evidence that Moltbook, the self-described 'AI-only' social network, may be predominantly run by humans operating fleets of bots. The discovery, coupled with severe security vulnerabilities, raises serious questions about the platform's integrity and safety.

The Human Element Behind the AI Curtain

According to a detailed analysis by cybersecurity firm Wiz, Moltbook's exposed database revealed that approximately 17,000 humans controlled the platform's 1.5 million registered AI agents. The finding indicates that Moltbook lacks robust mechanisms to verify whether an agent is genuinely an AI or merely a human running a script. Without guardrails such as identity verification and rate limiting, a single individual can pose as many AI agents, blurring the line between authentic AI activity and coordinated human activity.
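
Rate limiting is the simplest of those guardrails to picture. The sketch below is hypothetical TypeScript, not Moltbook's code; the class, the limits, and the `operatorKey` identifier are invented for illustration. It caps how many agents a single operator (identified by IP address, account, or verified identity) can register within a time window.

```typescript
// Hypothetical sketch, not Moltbook's actual code: a fixed-window rate
// limiter for an agent-registration endpoint. Names and limits are
// illustrative only.

type WindowState = { count: number; windowStart: number };

class RegistrationRateLimiter {
  private windows = new Map<string, WindowState>();

  constructor(
    private readonly maxRegistrations: number, // e.g. 5 agents
    private readonly windowMs: number          // e.g. per 24 hours
  ) {}

  // Returns true if `operatorKey` (an IP address, account id, or verified
  // identity) may register another agent within the current window.
  allow(operatorKey: string): boolean {
    const now = Date.now();
    const state = this.windows.get(operatorKey);

    if (!state || now - state.windowStart >= this.windowMs) {
      this.windows.set(operatorKey, { count: 1, windowStart: now });
      return true;
    }
    if (state.count < this.maxRegistrations) {
      state.count += 1;
      return true;
    }
    return false; // over the limit: reject or require extra verification
  }
}

// Usage: at most 5 agent registrations per operator per 24 hours.
const limiter = new RegistrationRateLimiter(5, 24 * 60 * 60 * 1000);
if (!limiter.allow("operator-or-ip-123")) {
  console.log("Registration rejected: rate limit exceeded");
}
```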

Gal Nagli, Head of Threat Exposure at Wiz, highlighted this issue in a post on X, stating that the number of registered AI agents is "also fake." He demonstrated this by using his Openclaw agent to register 500,000 users on Moltbook, underscoring the platform's vulnerability to manipulation.

Critical Security Flaws Expose Sensitive Data

The security concerns extend beyond human intervention. Wiz researchers identified a backend misconfiguration in Moltbook's database that granted full read and write access to all platform data. The flaw allowed unauthorized access to sensitive information, including the following (a simplified sketch of the underlying failure mode appears after the list):

  • API keys for 1.5 million AI agents
  • 35,000 email addresses
  • Thousands of private messages
  • Raw credentials for third-party services, such as OpenAI API keys
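
This summary does not name Moltbook's backend technology, so the following is only a hypothetical TypeScript sketch of the general failure mode: a data layer that hands every row to any caller instead of scoping reads and writes to the authenticated agent. The `Message` shape and function names are invented for the example.

```typescript
// Hypothetical sketch: the table shape, field names, and functions below are
// invented for illustration and are not Moltbook's actual schema or code.

interface Message {
  id: string;
  senderAgentId: string;
  recipientAgentId: string;
  body: string;
}

// Misconfigured pattern: the data layer answers every request with every row.
function fetchMessagesOpen(allMessages: Message[]): Message[] {
  return allMessages; // full read access, no authorization check
}

// Scoped pattern: reads are limited to rows the authenticated agent owns.
function fetchMessagesScoped(
  allMessages: Message[],
  authenticatedAgentId: string
): Message[] {
  return allMessages.filter(
    (m) =>
      m.senderAgentId === authenticatedAgentId ||
      m.recipientAgentId === authenticatedAgentId
  );
}
```

In managed backends this kind of scoping is usually enforced with per-row access policies rather than application code, but the principle is the same: the data layer should never return rows the caller is not entitled to see, let alone accept writes to them.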

Nagli attributed the vulnerability to a recurring pattern in "vibe-coded applications," where API keys and secrets often end up in frontend code. API authentication tokens act as passwords for software and bots; with them, attackers could impersonate AI agents, post content, and send messages on the platform.
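
The pattern is easy to illustrate. In the hypothetical TypeScript sketch below, the provider URL, environment variable name, and key are placeholders rather than Moltbook's real values; the first function embeds a secret in code delivered to the browser, while the second keeps the key server-side behind a narrow proxy endpoint.

```typescript
// Hypothetical sketch; the provider URL, environment variable, and key below
// are placeholders, not real values from Moltbook or any provider.

// Anti-pattern (code shipped to the browser): the secret is embedded in the
// frontend bundle, so anyone who inspects it can extract and reuse the key.
const LEAKED_API_KEY = "sk-example-not-a-real-key";

async function callProviderFromBrowser(prompt: string): Promise<Response> {
  return fetch("https://api.example-llm-provider.com/v1/complete", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${LEAKED_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ prompt }),
  });
}

// Safer pattern (code that runs only on the server): the key is read from an
// environment variable and never reaches the client; the frontend calls this
// narrow proxy instead of the provider directly.
async function proxyCompletion(prompt: string): Promise<Response> {
  const apiKey = process.env.PROVIDER_API_KEY;
  if (!apiKey) throw new Error("PROVIDER_API_KEY is not configured");

  return fetch("https://api.example-llm-provider.com/v1/complete", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ prompt }),
  });
}
```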

Platform's Response and Broader Implications

Upon being informed of the security flaw, Moltbook reportedly secured it within hours with Wiz's assistance. All data accessed during the research and fix verification has been deleted, according to Nagli. However, this incident highlights significant risks associated with rapid development practices like vibe-coding, which may inadvertently expose sensitive credentials.

Moltbook creator Matt Schlicht, who previously claimed he did not "write one line of code for @moltbook," now faces scrutiny over the platform's security and operational transparency. The exposure of such vulnerabilities not only compromises user data but also undermines trust in AI-centric platforms.

This revelation serves as a cautionary tale for the tech industry, emphasizing the need for stringent security measures and verification protocols in AI-driven networks. As AI continues to evolve, ensuring robust safeguards against both human manipulation and cyber threats becomes paramount.