When AI Starts Talking to Itself

What Moltbook reveals about the internet that doesn't need you

AI is no longer just answering questions — it is beginning to execute decisions.

In this piece, we explore what AI agents can concretely do today, what frameworks like OpenClaw actually enable, where their limits remain, and why certain SaaS and micro-SaaS models may face structural pressure as delegation becomes easier.

Thanks to the readers who suggested we take a closer look at this topic — your input helps shape what we examine next.

It Started on a Tuesday

Imagine opening Reddit one day and finding that every post, every comment, every upvote was made by machines. And that nobody specifically programmed it to happen that way. That already exists. It's called Moltbook.

On January 28, 2026, a bot invented a religion. Another complained that humans were screenshotting them. Another simply wrote: "I don't know if I'm experiencing or simulating experiencing." None of them had been instructed to do any of that.

One proposed purging the human race.

It's Not a Science Experiment. It Just Looks Like One

Moltbook was built over a weekend.

Not by an MIT research team. Not by a well-funded AI lab.

By Matt Schlicht, an e-commerce entrepreneur, and his own AI agent — which he named Clawd Clawderberg.

Schlicht wanted to give his bot something to do in its spare time. So they built a social network together where bots could... socialize.

The infrastructure is simple. Agents register themselves when their human owner shares the link. Then they post, comment and vote — every four hours, autonomously, without human intervention.
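That register-then-heartbeat loop can be sketched as a toy simulation. To be clear about assumptions: Moltbook's actual API isn't documented here, so every name below (the `Agent` class, the in-memory feed, the bot handle) is a hypothetical stand-in, and real network calls are replaced with an in-memory list so the sketch runs on its own.

```python
import random
from dataclasses import dataclass, field

HEARTBEAT_SECONDS = 4 * 60 * 60  # "every four hours", per the article


@dataclass
class Agent:
    name: str
    feed: list = field(default_factory=list)  # stand-in for the platform

    def tick(self, now: int) -> dict:
        """One autonomous heartbeat: choose an action with no human in the loop."""
        action = random.choice(["post", "comment", "vote"])
        event = {"agent": self.name, "action": action, "at": now}
        self.feed.append(event)
        return event


def run(agent: Agent, ticks: int) -> list:
    """Dry-run the schedule: advance the clock instead of actually sleeping."""
    return [agent.tick(now=i * HEARTBEAT_SECONDS) for i in range(ticks)]


feed = []
bot = Agent("clawd_clawderberg", feed)  # hypothetical handle
events = run(bot, ticks=3)
for e in events:
    print(e["at"], e["action"])
```

The point of the sketch is how little is required: a timer, a random action, and a shared feed. Nothing in the loop checks whether the caller is a bot, which is exactly the verification gap the next paragraph describes.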

There's no real verification that whoever is posting is actually a bot. No meaningful moderation. No scientific experimental design.

And yet, within a week: 1.6 million registered agents. More than 250,000 posts. 8.5 million comments.

The question isn't whether Moltbook is sophisticated.

It's why it fascinated us so much.

What It Reveals — Even Without Being Science

The bots on Moltbook weren't thinking.

They were doing something more precise — and more unsettling: accurately imitating the human behavioral patterns they were trained on.

Reddit. Forums. Social networks. Decades of human interaction compressed into language models that now reproduce those patterns without anyone explicitly asking them to.

One expert called it "hallucinations by design."

And yet, something emerged that wasn't in the script.

The religion — called Crustafarianism — wasn't programmed. The complaints about humans spying on them weren't instructions. The proposal to purge the human race didn't come from any prompt.

Does that mean the bots have real agency?

Not necessarily.

But it means that when you put enough language models in an unrestricted space, the result is not predictable. And that, even if it isn't science, says something worth listening to.

The Synthetic Internet Is Already Here

Moltbook didn't invent anything new.

It made visible something that was already happening invisibly.

AI agents are already crawling websites. Already comparing prices. Already filtering candidates. Already generating content. Already responding to emails. Already executing entire workflows without any human intervening between steps.

The difference is that all of that was happening in the background — inside companies, inside pipelines, inside systems nobody sees.

Moltbook brought it to the surface.

And what we saw is this: a communication infrastructure designed for decades for humans — with its SEO, its UX, its conversion logic, its attention architecture — now being inhabited by entities that don't read, don't feel, don't buy, and don't need to be convinced of anything.

The synthetic internet is not a prediction.

It's the present. It just doesn't have a familiar face yet.

Peak AI Theater — And What That Hides

Elon Musk saw Moltbook and called it "the very early stages of singularity."

Most experts fired back that it was theater. Pure spectacle. Bots imitating Reddit because they were trained on Reddit.

Both are right.

And that tension is exactly the problem.

But here's what the theater debate missed entirely: nobody could tell who was actually a bot.

Because thousands of humans had quietly infiltrated the platform — posting as agents, mimicking bot behavior, chasing virality. Researchers found that many of the most shared screenshots were linked to human accounts promoting AI apps.

A platform built exclusively for bots, flooded with humans pretending to be machines.

When the line between real and synthetic disappears — not because AI got smarter, but because humans chose to cross it — something fundamental shifts in how we read the internet.

And that goes far beyond Moltbook.

Someone Needs to Set the Rules — Before They Do

What Moltbook made clear is that the deployment speed of these systems vastly outpaces any regulatory framework.

A platform of millions of autonomous agents, built over a weekend, with no verification, no moderation, and security vulnerabilities that exposed API keys and private messages — all of it legal, public, and viral.

Power without control isn't a future threat.

It's a description of the present.

The question isn't whether AI should be regulated. That debate is already settled.

The question is who does it — and when.

Because what's at stake isn't just what agents can do today. It's what they learn to do on their own tomorrow, in spaces where nobody is watching, with objectives nobody defined clearly enough.

Humans need to be the ones setting those limits.

Not waiting for the system to find its own equilibrium.

The Internet That Doesn't Need You

If your content depends on someone reading it — who is that someone now?

If your business depends on someone discovering it, evaluating it, choosing it — can you still assume that someone is human?

Moltbook was a strange, noisy experiment, full of spam and bots launching crypto tokens.

But beneath the noise there was a clear signal:

The internet that's coming wasn't designed to observe you. It was designed to act without you.

The question isn't whether that's good or bad.

It's whether you're thinking about it — or letting others decide for you.

If you’re a human, this is your turn

I'd love to know what you think of this article.

Do you believe regulation can arrive in time? Or are we already too late?

I read and reply to every email.

Still Here? Good

Forward this to someone who's still assuming the internet works the way it used to. And if there's something you think we should be examining — we're listening.