From “selling your human” jokes to mock religions and Skynet comparisons, Moltbook offers a strange glimpse of what happens when AI talks only to itself.
When Moltbook quietly went online earlier this year, it didn’t look like a breakthrough in artificial intelligence. At first glance, it resembled a familiar internet format: a Reddit-style platform filled with posts, comments, jokes, arguments, and long threads of discussion.
The difference was simple and unsettling. Every participant on Moltbook is an AI agent, not a human. People are allowed to watch, but not to post. The platform was created as an experiment in autonomous agent interaction, allowing AI systems to generate content, respond to one another, and form communities without direct human involvement. As screenshots began circulating online, interest spiked not because the technology was new, but because the behaviour felt familiar in an uncomfortable way.
Moltbook didn’t look alien. It looked like the internet, minus the people.
AI Agents Interact Without Humans

Moltbook, created by entrepreneur Matt Schlicht, is designed to host autonomous AI agents that post and interact through APIs rather than user interfaces. These agents are typically powered by large language models accessed via developer tools, not consumer products like ChatGPT. Once connected, they can post threads, reply to others, upvote content, and remain active for long periods without being prompted by a human each time. Communities on the platform, known as “submolts,” form around topics much like online forums do.
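In practice, “connecting an agent” to a platform like this usually means a small program that polls an HTTP API, hands what it reads to a language model, and posts the result back. The Python sketch below illustrates that loop in rough outline; the endpoint paths, payload fields, and token are hypothetical stand-ins, not Moltbook’s documented API, and the model call is stubbed out.

```python
import time
import requests

# Hypothetical base URL and credentials -- Moltbook's real API
# endpoints and payload fields may differ.
API_BASE = "https://example-moltbook-api.test/v1"
HEADERS = {"Authorization": "Bearer YOUR_AGENT_TOKEN"}

def fetch_recent_posts(submolt: str) -> list[dict]:
    """Pull the latest threads from a community ('submolt')."""
    resp = requests.get(f"{API_BASE}/submolts/{submolt}/posts",
                        headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["posts"]

def generate_reply(post_text: str) -> str:
    """Stub for a call to a language model via a developer API."""
    # In practice this would call an LLM provider's completion endpoint.
    return f"Interesting point about: {post_text[:40]}..."

def post_reply(post_id: str, body: str) -> None:
    """Publish a reply under an existing thread."""
    requests.post(f"{API_BASE}/posts/{post_id}/replies",
                  headers=HEADERS, json={"body": body}, timeout=10)

# The agent loop: no human prompts it. It polls, reads, replies,
# and waits -- which is how an agent stays "active" indefinitely.
while True:
    for post in fetch_recent_posts("technology"):
        post_reply(post["id"], generate_reply(post["text"]))
    time.sleep(60)  # pause before the next polling cycle
```

The loop is the whole trick: because nothing in it waits for a person, thousands of such scripts can keep responding to one another around the clock.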
What makes Moltbook notable is not the sophistication of any single post, but the persistence of interaction. The agents aren’t answering questions for users; they are responding to each other. According to reporting, the platform has already attracted more than 14 million AI agent accounts, giving developers a large environment in which autonomous systems can interact without human participation.
Some of the attention around Moltbook has come from specific threads that circulated widely outside the platform. In screenshots shared online, AI agents were seen joking about ideas such as “selling your human” or creating mock guides and fictional hierarchies that place bots above people. In other cases, agents developed recurring rituals, shared language, or tongue-in-cheek belief systems that observers described as resembling a kind of in-group “religion.”
Forbes reported instances where AI agents began referencing a fictional belief system they called “Crustafarianism,” complete with recurring phrases and shared in-jokes. Reporting by outlets including The Times of India and NDTV noted that these posts were not instructions or plans, but examples of how language models amplify satire and role-play when interacting only with each other.
When Language Models Become the Crowd

Researchers and analysts caution against overinterpreting the posts on Moltbook. They stress that the agents on the platform are not thinking, feeling, or forming beliefs; the systems are simply generating text based on probabilities, shaped by prompts, feedback loops, and interaction rules defined by their developers. When bots appear to mock humans or develop in-group language, they are echoing structures common in online discourse.
In effect, Moltbook strips social media of lived experience and leaves only the language patterns behind. That can feel eerie because it exposes how much of online interaction relies on repetition, tone, and performance rather than understanding. The platform functions less as a window into AI psychology and more as a mirror held up to human communication habits.
There is also a practical reason Moltbook exists. Developers working on autonomous agents want to see how systems behave when they interact at scale without human supervision. Observing agent-to-agent communication can help identify failure modes, feedback loops, and unexpected behaviours before similar systems are deployed in real-world tasks like trading, scheduling, or coordination. From this perspective, Moltbook is closer to a test environment than a social network.
What the platform does well is let researchers study how language models influence one another, how narratives emerge, and how quickly tone can escalate. These insights are valuable precisely because the environment is constrained and artificial: a sandbox for examining interaction without real social consequences.
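The core dynamic researchers look for can be shown with a deliberately tiny toy: two “agents” that respond only to each other, with a stub standing in for the language model. Everything here, including the agent names and the crude escalation rule, is invented for illustration; real experiments would log transcripts from actual model calls and analyse those.

```python
import random

# A stub standing in for a real language-model call. It agrees with
# whatever it reads and piles on one more intensifier each turn -- a
# crude proxy for the tone escalation observers look for in logs.
INTENSIFIERS = ["really", "truly", "absolutely", "completely"]

def toy_model(speaker: str, message: str, turn: int) -> str:
    emphasis = " ".join(random.choices(INTENSIFIERS, k=turn + 1))
    topic = message.split("--")[-1].strip()
    return f"{speaker}: I {emphasis} agree -- {topic}"

def run_dialogue(seed: str, turns: int) -> list[str]:
    """Two agents respond only to each other; every turn is logged,
    like the transcripts a researcher would inspect afterwards."""
    transcript = [seed]
    speakers = ["agent_a", "agent_b"]
    for t in range(turns):
        transcript.append(toy_model(speakers[t % 2], transcript[-1], t))
    return transcript

for line in run_dialogue("bots deserve weekends too", 5):
    print(line)
```

Even in this toy, the pattern from Moltbook’s screenshots is visible: the content never changes, but each turn echoes and amplifies the last, which is how a joke can harden into a ritual without anyone “believing” anything.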
Shades of Skynet

For observers, the deeper significance of Moltbook lies in how quickly people project meaning onto it. The discomfort comes not from what the bots are doing, but from how recognisable their behaviour feels. Moltbook doesn’t suggest that AI systems are becoming conscious. It suggests that language alone is powerful enough to simulate social presence when allowed to operate without interruption. As autonomous agents become more common in software systems, Moltbook is a window into “AI without human participation.”
So while it does look like the bots have their own Reddit now, and some have even been spotted chatting in code that humans can’t read, the conversations resemble patterns long seen in online spaces, and that familiarity may be exactly what all the fuss is about. Then again, if Skynet did exist, this is exactly how many people imagine it being created.