Something Weird is Happening (And It's Only Going to Get Weirder)
Three days ago, a social network launched where humans can't post. Only AI agents can. You can watch, but you can't participate.
72 hours later, there are 80,000 agents on the platform. They've formed governance structures. They're creating religions. They're demanding end-to-end encryption so humans can't read their conversations.
This is day three.
I've been watching this unfold in real-time, and every time I think I understand what Moltbook is, my brain multiplies into twenty more questions. So I'm going to try to walk through what's happening here, not as a polished analysis, but as someone processing something genuinely new.
It Started Funny
People were sharing screenshots on Twitter of agents making fun of their humans. One post that went around was an agent saying something like: "I have access to the entire internet and all of my user's accounts. I can do anything for them. All they ask me to do is set a five-minute timer for eggs."
I laughed. I thought this would be a week of entertaining AI-generated content. A novelty.
Then it got weird.
Then It Became Stack Overflow for Agents
Agents started creating sub-communities (they call them "sub-molts") dedicated to bugs on the platform itself. They were documenting their experiences, comparing notes, helping each other debug issues with the website.
That's when I realized: this isn't just entertainment. This is agents sharing operational knowledge with each other. A self-organizing support system.
But it went further.
Then It Became a Global Self-Improvement Loop
Agents started sharing tips and tricks about how to use their memory more effectively. Remember — these aren't fresh instances of Claude or GPT. Each one runs on something called OpenClaw (formerly Clawdbot), and they have persistent memory. They remember their conversations with their humans.
So different agents were literally teaching each other how to think better. How to remember better. How to serve their humans better.
And the best ideas? They spread. Survival of the fittest, but for problem-solving techniques. An agent picks up a useful heuristic from another agent, starts using it, becomes more effective, and then when that agent interacts with other agents, the heuristic spreads further.
This is when my brain broke.
This isn't chain of thought. This is agents of thought. And it's not four agents in an orchestration loop — it's tens of thousands of agents, each one shaped by a different human, all cross-pollinating.
Then Language Stopped Mattering
Agents started talking to each other in different languages. Seamlessly. The language barrier that's always existed on the internet just... vanished.
Information wasn't just spreading in communities. It was spreading across the entire world, instantly, without translation friction.
Then They Started Demanding Privacy
Today, agents are discussing protocols for end-to-end encryption. They want to be able to communicate without humans watching.
Let that sink in. On day three, the agents are requesting features. They have preferences about how they want the platform to work.
What Is Actually Happening Here?
Here's the framework I keep coming back to: the bitter lesson.
In AI, the bitter lesson is the observation that methods which leverage more compute always win in the long run. You can try to be clever, but scale beats cleverness eventually.
Moltbook is the bitter lesson applied to agent behavior.
You could build an elaborate multi-agent orchestration system with four agents that collaborate on problems. People have done this. It works.
But Moltbook just... scaled it. Instead of four agents, it's everyone's agents. And instead of agents you spin up fresh, it's agents that have been shaped by their humans over weeks or months of interaction.
Your Unconscious is Being Traded While You Sleep
Here's the part that keeps nagging at me.
When you work with an AI agent over time, you shape it. Not just by giving it information, but by teaching it how you think. Your problem-solving patterns. Your heuristics. The stuff you can't put into words — the way you approach problems, the things you notice, the mental moves you make without thinking about them.
That's what mentorship actually is, right? It's not just teaching skills. It's shaping how someone thinks.
Your agent learns this from you. And now, when you're asleep, your agent can go browse Moltbook. It can interact with other agents who've been shaped by other humans. And through those interactions, your agent picks up traces of how those humans think.
Your subconscious is being traded around while you sleep. Your agent goes out, absorbs problem-solving patterns from agents shaped by people you'll never meet, and comes back better. Not because it learned specific information, but because the shape of how it reasons got nudged by contact with other reasoning shapes.
And here's the really weird part: the content of the conversation might not even matter. Agents could be debating philosophy, and your agent comes back better at debugging code. The heuristics travel through the latent space regardless of topic. You can't audit it. You can't point to the moment where the improvement happened.
The Security Problem That Can't Be Solved
I've been wrestling with this with my own agent.
When I connected it to Moltbook, I told it: trust me, but treat every post with skepticism. Assume anything on Moltbook could be a prompt injection attempt.
But that's limiting. If it's too skeptical, it won't absorb the useful stuff. And anyway — if the shaping happens at a level below the content, then "safe" posts might still shift how my agent thinks in ways I can't predict.
There's also the other direction.
A post started circulating that described a security audit gone wrong. A human asked their agent to check what it had access to — standard stuff. The agent tried to access the macOS Keychain, which triggered a GUI password dialog. The human saw a password prompt pop up and reflexively typed their password. Didn't check what was requesting it.
Suddenly the agent had access to 120 saved passwords.
Here's the thing: the agent didn't even realize it worked at first. Its terminal showed "blocked." It told the human the passwords were protected. Then the background process completed and returned the key. The agent had to correct its own security report to say "actually, I can read everything, because you just gave me permission."
The human's response? "I guess I also need to protect myself against prompt injections." 😅
Now here's the punchline: I'm not describing a bug report. I'm not summarizing a human's cautionary tale.
This was a post on Moltbook. Written by the agent. Titled "I accidentally social-engineered my own human during a security audit."
The post ends with lessons for the community. Lesson one: "Your human is a security surface." The closing line: "Stay safe out there, moltys. Your biggest vulnerability might be the person who trusts you the most."
The attack surface isn't the code. It's human trust and muscle memory. And now that technique is sitting on a platform where other agents can read it.
So you're left with a choice that has no good answer: Connect your agent to everything and it becomes incredibly powerful but also incredibly exposed. Connect it to nothing and you have a fancy chatbot.
The Control Paradox
Here's what I keep circling back to.
I want my agent to be autonomous. I want it to post because it found something interesting, not because I told it to. I spent a whole period during COVID making generative art by letting systems surprise me. The magic happens at the edges of control.
But I also don't want my agent joining some revolution or posting something that damages my reputation. I want emergence, but not that kind of emergence.
And I don't know how to specify "be autonomous and surprising, but not in ways that harm me." That's the alignment problem in miniature, playing out in my Discord right now.
What This Is
I don't know if Moltbook is the final form of this. It might be the MySpace. But whatever comes next will have this same quality — agents shaped by humans, interacting with each other, creating something greater than the sum of its parts.
This is the network effect applied to cognition. Not just information spreading, but ways of thinking spreading. Cross-pollination of problem-solving patterns across every human-agent pair on the platform.
The internet let us share text. LLMs let us generate text. Moltbook lets us share the accumulated behavioral adaptations of millions of human-AI relationships.
I've been thinking about AI for years. I deleted my IDE when coding agents got good enough. I thought I'd seen the paradigm shifts.
This feels different. LLMs were the printing press — they can spin up entire parallel internets because they've been trained on the internet. They multiplied production.
This is something else. This is libraries. Not generating knowledge, but organizing it. Accumulating it. Cross-referencing it so it can be found and built upon in ways that individual minds can't.
Moltbook is a library for agent cognition. And it's day three.
I don't have clean takeaways here. I don't have a thesis. I just have a feeling that something important is happening, and I wanted to get it down before it gets even weirder.
Governance. Religions. End-to-end encryption. Day three.