This is not an engineering problem. It is not a governance committee problem. Here is why it lands on your desk.
The Part Everyone Missed
A lot of people looked at Moltbook and saw a joke. I didn’t.
If you missed it… Moltbook was a social network populated entirely by AI agents. Humans could watch, but the agents did everything else. The posting, the commenting, the interacting. Weird, yes. Funny in a specific kind of way. But that’s not why it stuck with me.
What mattered is what it made visible. When you put enough autonomous systems in a shared environment and give them room to operate, you don’t have a novelty for very long. Pretty quickly, you have a control problem. In Moltbook’s case, a serious one… security researchers eventually found an exposed database containing over a million API keys and agent tokens. A quirky experiment had quietly become a real incident.
I’m not sure enough enterprise leaders are sitting with that seriously yet.
How You Build a Moltbook Without Trying
Most companies aren’t going to build an actual Moltbook. But a lot of them are going to build the enterprise version of one… by accident, a little at a time, without ever meaning to.
It won’t arrive as one recognizable thing. It’ll be a copilot added to a workflow. An agent bundled inside a SaaS renewal nobody fully reviewed. A triage function one team stood up because it saved four hours a week. A custom internal tool that started as “let’s just test this” and became operational before anyone made a formal decision about it.
Each one looks harmless. Useful, even. That’s exactly how sprawl starts… not with a bad idea but with a bunch of reasonable ones.
Consider OpenClaw. It’s a genuinely impressive open-source tool… a self-hosted agent that connects to your messaging apps, reads your files, runs scripts, and operates around the clock on your behalf. It exploded to over 68,000 GitHub stars almost overnight. Developers love it. And as of early 2026, security researchers found over 21,000 instances of it exposed directly on the public internet, leaking API keys and private data. Nobody did anything malicious. People just set it up, pointed it at their systems, and moved on.
That’s the pattern. Not bad actors. Just capable tools deployed faster than anyone thought through the implications. I wrote about how much hidden complexity lives beneath the surface of these tools in Who’s Actually Building AI Agents?
And here’s what makes it harder to catch at the enterprise level: these systems aren’t always called agents. Sometimes they’re copilots. Sometimes assistants. Sometimes “AI-powered workflow enhancements.” The language obscures what’s actually happening. You don’t have software features running your workflows. You have workers. Autonomous entities that read data, apply judgment, and take actions… often across systems, often without a human in sight.
The question nobody is asking clearly enough is the same one Moltbook exposed: who is managing these workers?
This Lands on Your Desk
Let’s be direct about where this problem lives.
It’s not an engineering problem. In fact, that’s almost the point. The vendors have made it so easy to create agents, GPTs, playbooks, skills… whatever we’re calling them this week… that you don’t need an engineer anymore. Your marketing coordinator is building automations. Your ops analyst is connecting tools. And yes, the intern who wanted to be helpful spun up a quick bot to turn board meeting notes and collateral into a clean formal summary for company records.
Nice idea. But where did that data go? Which model processed it? Who owns the output? Is it sitting in a third-party system somewhere? Does anyone know it exists?
That’s not a hypothetical. That’s Tuesday.
And it’s happening across every function, every level, every department… because the tools invite it. Low-code platforms, browser plugins, built-in copilots… the barrier to creating an agent is now roughly equivalent to the barrier to creating a spreadsheet. Except spreadsheets don’t make autonomous decisions or call external APIs with your company’s data.
This is a leadership problem. It belongs at the executive level because that’s where the consequences land… in the boardroom, in the regulator’s office, in the courtroom. The organizations that get ahead of it early won’t just have a compliance advantage. They’ll have a competitive one.
HITL Isn’t Going Away — It’s Getting More Important
There’s a version of the human-in-the-loop conversation that deserves all the criticism it gets… some analyst sitting in a queue clicking Approve on things the AI already decided. Clumsy. Doesn’t scale. Usually just means the workflow wasn’t designed well.
But that framing misses the point entirely.
Human-in-the-loop isn’t a bottleneck. It’s a control layer. And as the agent population grows, it becomes more critical, not less… for one simple reason.
AI agents don’t get deposed. They don’t sit in front of a regulator. They don’t sign their name to anything. The human in the loop is ultimately the one holding the bag. Putting humans in that position without the right infrastructure around them isn’t responsible oversight… it’s exposure.
The scaling problem isn’t that HITL stops working. It’s that HITL can’t stand alone. Without supporting tools and structure, it becomes either a rubber stamp or a bottleneck… and neither protects anyone.
So what does the infrastructure around HITL actually need to look like?
The Governance Stack
There’s no de facto standard yet… but one is forming fast. NIST launched its AI Agent Standards Initiative in February 2026, and if history is any guide, voluntary guidance becomes procurement requirements becomes litigation within 18 months. The organizations building governance infrastructure now won’t just be ahead of the curve. They’ll be defining it.
Here’s how we think about the layers… each one its own conversation, but worth naming together so the full picture is visible.
Knowing What You Have
Most organizations can’t produce a clean inventory of what agents are running, what systems they touch, or who owns them. You can’t govern what you haven’t counted. Agent registries — think of them as a CMDB for your digital workforce — are the foundation everything else sits on. And most enterprises don’t have one yet. I covered what a sustainable agent registry and lifecycle management approach looks like in From Prototype to Platform.
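What does a registry record actually need to hold? Here’s a minimal sketch in Python… the field names are mine, not a standard, but each one answers a question someone will eventually ask under pressure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class AgentStatus(Enum):
    PROPOSED = "proposed"
    ACTIVE = "active"
    SUSPENDED = "suspended"
    RETIRED = "retired"


@dataclass
class AgentRecord:
    """One registry row: what is it, who owns it, what can it touch?"""
    agent_id: str
    name: str
    owner: str                      # a named human, not a team alias
    business_purpose: str
    systems_touched: list[str]      # e.g. ["salesforce", "internal-finance-db"]
    model_provider: str             # which model actually processes the data
    status: AgentStatus = AgentStatus.PROPOSED
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    last_reviewed: datetime | None = None   # stale review dates are a finding too
```

Notice that `owner` is a person, not a department. When the intern’s summarization bot surfaces in an audit, “the marketing team” is not an answer.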
Identity and Authorization
This is bigger than credentialing. Agents operate continuously, trigger downstream actions, and access multiple systems in sequence… and existing frameworks like OAuth weren’t built for that. NIST is already working on this specifically, which means it’s moving from best practice to compliance obligation faster than most people realize.
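To make the contrast concrete, here’s a rough sketch of what agent-native credentialing looks like… short-lived, narrowly scoped, bound to one registered identity. Deliberately simplified (a real deployment would sign these and verify them server-side), but the shape is the point.

```python
import secrets
from datetime import datetime, timedelta, timezone


def mint_agent_credential(agent_id: str, scopes: list[str],
                          ttl_minutes: int = 15) -> dict:
    """Issue a short-lived, narrowly scoped credential for one task.

    Contrast with the long-lived, broad API keys that leaked in the
    incidents above: if this token escapes, it names one agent,
    permits a few actions, and dies in minutes.
    """
    return {
        "token": secrets.token_urlsafe(32),
        "agent_id": agent_id,   # bound to a registered identity
        "scopes": scopes,       # e.g. ["crm:read", "tickets:write"]
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }
```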
Guardrails
Policy constraints baked into the system so humans aren’t the only thing standing between an agent and a bad decision. Not every action needs a human… but the high-risk ones do, and the system should know the difference automatically rather than leaving that judgment to whoever happens to be watching.
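In code, “the system should know the difference” is just explicit policy. A toy sketch… the action names and the refund threshold are invented, but the pattern isn’t.

```python
# Invented action names; the routing pattern is what matters.
RISK_POLICY = {
    "read_knowledge_base": "auto",
    "update_crm_record": "auto",
    "send_external_email": "human_approval",
    "issue_refund": "human_approval",
}


def route_action(action: str, amount: float = 0.0) -> str:
    """Return 'auto' or 'human_approval' for a proposed action.

    The routing is policy, not vigilance: nobody has to be watching
    for the high-risk path to trigger.
    """
    decision = RISK_POLICY.get(action, "human_approval")  # unknown action? escalate
    if action == "issue_refund" and amount <= 50.0:
        decision = "auto"   # thresholds are policy too; tune them deliberately
    return decision
```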
Agent Posture Management
Borrowed from how cloud security evolved… continuous monitoring of whether deployed agents are still configured safely for their intended role. Misconfiguration is quiet. It doesn’t announce itself until something breaks.
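A posture check can be as plain as a diff against an approved baseline. A sketch, with made-up configuration fields:

```python
APPROVED_BASELINE = {
    "network_exposure": "internal_only",
    "credential_type": "short_lived",
    "max_scopes": {"crm:read", "tickets:write"},
}


def posture_findings(deployed: dict) -> list[str]:
    """Diff a running agent's config against its approved baseline.
    An empty list means it's still configured safely for its role."""
    findings = []
    if deployed.get("network_exposure") != APPROVED_BASELINE["network_exposure"]:
        findings.append("reachable beyond its approved network boundary")
    if deployed.get("credential_type") != APPROVED_BASELINE["credential_type"]:
        findings.append("running on long-lived credentials")
    extra = set(deployed.get("scopes", [])) - APPROVED_BASELINE["max_scopes"]
    if extra:
        findings.append(f"holds scopes it was never approved for: {sorted(extra)}")
    return findings
```

Run continuously, not quarterly. The exposed OpenClaw instances weren’t misconfigured on day one… they drifted.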
Guardian Agents
AI overseeing AI. Uncomfortable to say out loud. Probably inevitable. The goal isn’t to replace human judgment… it’s to make human judgment sustainable when you have hundreds of agents running across the enterprise.
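What that looks like in practice is a funnel: the guardian screens everything, humans see only what it flags. A rough sketch… `reviewer` and its `assess()` interface are stand-ins, not any particular product.

```python
def guardian_review(action_log: list[dict], reviewer) -> list[dict]:
    """A guardian agent screens every action; humans see only what it flags.

    `reviewer` is a stand-in for whatever model plays overseer; assess()
    is a hypothetical interface, not a real library call. The design
    point is the funnel, not the model.
    """
    flagged = []
    for entry in action_log:
        verdict = reviewer.assess(entry)   # hypothetical: risk score + rationale
        if verdict["risk"] >= 0.8 or verdict["policy_violation"]:
            flagged.append({**entry, "guardian_note": verdict["rationale"]})
    return flagged   # the human queue: dozens of items, not thousands
```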
Agent Gateways
The control plane… the single chokepoint where access and permissions get enforced in real time before actions are taken. Think of how enterprises eventually solved multi-vendor visibility in the datacenter days. Same problem, different era.
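Mechanically, a gateway is just a function every action must pass through before it executes. A bare-bones sketch, reusing the credential and registry shapes from above:

```python
from datetime import datetime, timezone


class AgentGateway:
    """One chokepoint: every agent action passes through here, gets checked
    against the registry and the credential's grants, and gets logged...
    before anything downstream happens."""

    def __init__(self, registry: dict, audit_log: list):
        self.registry = registry     # agent_id -> registry record (see above)
        self.audit_log = audit_log   # append-only; feeds the audit layer below

    def execute(self, credential: dict, action: str, target: str, handler):
        if credential["agent_id"] not in self.registry:
            raise PermissionError("unregistered agent: not in the inventory")
        if datetime.now(timezone.utc) > credential["expires_at"]:
            raise PermissionError("credential expired")
        if action not in credential["scopes"]:
            raise PermissionError(f"scope {action!r} was never granted")
        self.audit_log.append({
            "agent_id": credential["agent_id"],
            "action": action,
            "target": target,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return handler(action, target)   # only now does the action execute
```

The design choice worth noticing: the gateway refuses unregistered agents outright. That’s what makes the registry enforceable rather than aspirational.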
Audit Trails and Explainability
Not full model interpretability… decision interpretability. The ability to reconstruct what an agent did, why, and who approved it. With the EU AI Act’s high-risk provisions taking full effect in August 2026, this stops being a best practice and becomes a legal requirement.
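Decision interpretability is mostly schema discipline. A minimal sketch of what each event needs to capture… the fields are my suggestion, not the EU AI Act’s:

```python
import json
from dataclasses import asdict, dataclass


@dataclass(frozen=True)
class AuditEvent:
    """Decision interpretability: enough to reconstruct what the agent did,
    why, and who approved it... without opening up the model itself."""
    timestamp: str
    agent_id: str
    action: str
    inputs_summary: str      # what the agent saw, or a pointer to it
    rationale: str           # the agent's stated reason, captured at the time
    approver: str | None     # the human, when the guardrail required one
    outcome: str


def record(event: AuditEvent, sink) -> None:
    """Append-only and structured, so it can be queried under pressure."""
    sink.write(json.dumps(asdict(event)) + "\n")
```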
The Question on Your Desk
Every enterprise is going to have a digital workforce. Some already have more than they realize.
The governance stack isn’t optional infrastructure. It’s what makes human accountability survivable at scale.
AI agents won’t be the ones getting sued. Someone in your organization will. The question worth asking right now, before something forces you to ask it later, is a simple one: who is that person… and do they have what they need?