Human-in-the-Loop Is Not the Problem
The real problem is that enterprises are quietly building digital workers faster than they know how to govern them.
The Part Everyone Missed
A lot of people looked at Moltbook and saw a joke. I didn't.
If you missed it... Moltbook was a social network populated entirely by AI agents. Humans could watch, but the agents did everything. The posting, the commenting, the interacting. Weird, yes. Funny in a specific kind of way. But that's not why it stuck with me.
What mattered was what it made visible... something that's usually a lot harder to see out in the wild. When you put enough autonomous systems in a shared environment and give them room to operate, you don't have a novelty for very long. Pretty quickly, you have a control problem.
I'm not sure enough enterprise leaders are sitting with that seriously yet.
How You Build a Moltbook Without Trying
Most companies aren't going to build an actual Moltbook. But a lot of them are going to end up building the enterprise version of one... just by accident, a little at a time, without ever meaning to. It won't arrive as one recognizable thing. It'll be a copilot someone added to a workflow. An agent bundled inside a SaaS platform that came up during renewal. A smart triage function that one team stood up because it saved four hours a week. An automated handoff. A vendor feature nobody fully reviewed. A custom internal tool that started as "let's just test this" and became operational before anyone made a formal decision about it.
Each one looks harmless on its own. Useful, even. That's exactly how sprawl starts... not with a bad idea but with a bunch of reasonable ones.
Where the HITL Conversation Went Sideways
And that's why I think the HITL conversation has gone sideways.
There's a version of human-in-the-loop that deserves all the criticism it gets... some analyst sitting in a queue all day clicking Approve on things the AI already decided. Clumsy. Doesn't scale. In most cases it just means the underlying workflow wasn't designed well yet. Fine. That version has problems.
But reducing HITL to that version misses the whole point.
Human-in-the-loop isn't the job. It's a control layer... and those are not remotely the same thing.
This Stopped Being a Tooling Conversation
We've moved past the point where enterprise AI mostly means better email drafts and slightly smarter search. Obviously those use cases still exist. But the more consequential shift is that these systems are now participating in real work. Retrieving information, applying logic, generating outputs that feed downstream decisions, triggering actions, sometimes operating with a degree of autonomy that nobody actually sat down and approved explicitly. At that point this isn't really a tooling conversation anymore. It's an operating model conversation, whether anyone's called it that or not.
When a company brings on a human employee, it doesn't just hand them credentials. It defines a role, sets limits on what they can approve on their own, builds escalation paths, assigns accountability. There's usually someone who can answer the question: who owns this outcome?
Now think about how most enterprises are actually introducing digital workers.
Usually fragmented. A team adopts something. A vendor turns a feature on. Someone builds a prototype that solves a real problem and it just... sticks. Another group builds around it. Six months later you've got AI systems embedded in processes that matter, and nobody has a clean inventory, nobody has a shared governance model, and the answers to the obvious questions are murky at best.
What do we actually have running? What can it do? What data can it reach? When does a human have to see something before it goes out the door? Who owns the outcome if something breaks?
Those questions sound almost too basic. They're not. In a lot of organizations right now, nobody can answer them cleanly... and that gap is where the real risk lives.
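If it helps to make that concrete, here's roughly what one row of such an inventory could look like. This is a minimal sketch in Python, and every name in it is hypothetical... the point is simply that each of those questions maps to a field somebody has to be able to fill in.

```python
from dataclasses import dataclass

# Hypothetical sketch of one inventory entry for a digital worker.
# Every field name is an illustration of the questions above, not a
# reference to any real framework or product.

@dataclass
class DigitalWorkerRecord:
    name: str                       # what do we actually have running?
    capabilities: list[str]         # what can it do?
    data_access: list[str]          # what data can it reach?
    review_required_for: list[str]  # which outputs need human eyes first?
    outcome_owner: str              # who owns the outcome if it breaks?

# A made-up example entry: an invoice-triage agent.
registry = [
    DigitalWorkerRecord(
        name="invoice-triage-agent",
        capabilities=["classify", "route", "draft-reply"],
        data_access=["erp.invoices", "crm.contacts"],
        review_required_for=["draft-reply"],
        outcome_owner="ap-team-lead",
    ),
]
```

If a team can't fill in a row like this for each system it runs, even on paper, that's exactly the gap those questions are pointing at.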
Sprawl You Can See. Accountability Drift You Can't.
The risk isn't just bad output. Not just hallucinations. Not just bias. Those are real problems, but they're not the whole story.
The bigger issue is sprawl combined with accountability drift.
Sprawl is fairly intuitive... these systems multiply faster than most organizations can inventory them, and they're not always called agents, which makes it harder to track. Sometimes they're copilots. Sometimes assistants. Sometimes "AI-powered workflow enhancements," which honestly should trigger more scrutiny than it usually does.
Accountability drift is quieter. It happens when AI starts influencing meaningful work but leadership still treats it like a software feature... something with a ticket number, not an operational actor. Then something goes sideways, and suddenly everyone wants to know who approved this, what policy covered it, why no human reviewed it, where the audit trail is, who's responsible now. That's when it gets uncomfortable. Especially in regulated industries. Especially in anything touching customer commitments, financial processes, legal review, healthcare operations, or security-sensitive actions.
In those contexts the question isn't whether the AI was impressive. It's whether the organization can explain and defend how it was deployed.
Why HITL Still Matters
That's where HITL still matters... more than some people want to acknowledge.
Not because every decision needs a human signature. That's absurd, and it would destroy most of the actual value.
Not because autonomy is bad.
Not because the goal should be preserving manual work indefinitely.
It matters because HITL gives an organization a mechanism for inserting human authority where it actually belongs... at high-stakes decision points, in ambiguous situations where judgment matters, in workflows where policy, exceptions, or business consequence justify review. That's a control concept. Not a labor model. The distinction is important.
Some workflows should absolutely run end-to-end without a person in the loop. Others shouldn't. The problem is that most companies don't yet have a principled way to tell the difference, so they default to one of two bad answers... over-automating because the technology is capable of it, or over-reviewing because nobody trusts what they built. Neither of those is governance. They're just reactions.
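To show what a more principled default could look like, here's a sketch. The routing tiers and the 0.8 confidence threshold are assumptions I'm inventing for illustration, not an established standard... the point is that the decision about where a human belongs is explicit and inspectable instead of a reaction.

```python
from enum import Enum

class Route(Enum):
    AUTO = "runs end-to-end, no person in the loop"
    REVIEW = "human approves before the action executes"
    ESCALATE = "human takes over the decision entirely"

# Hypothetical routing policy. The stakes flag would be declared by the
# workflow's owner; the 0.8 threshold is an illustrative assumption.
def route_action(high_stakes: bool, confidence: float) -> Route:
    if high_stakes and confidence < 0.8:
        return Route.ESCALATE   # consequential and ambiguous: judgment call
    if high_stakes:
        return Route.REVIEW     # consequential but clear-cut: quick approval
    if confidence < 0.8:
        return Route.REVIEW     # ambiguous but low-stakes: cheap human check
    return Route.AUTO           # clear-cut and low-stakes: let it run
```

The specific thresholds matter less than the fact that someone chose them deliberately and can defend them later.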
The Hard Part Isn't Deployment
A lot of current enterprise thinking stays too narrow here. There are solid conversations happening around model governance, AI safety, responsible AI principles. All worth having. But none of them are quite enough on their own, because the hard part isn't the principles... it's the operational layer. Runtime behavior, access decisions, escalation logic, oversight structure, evidence trails, ownership. That's where organizational trust actually gets built or quietly comes apart.
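The evidence-trail piece, at least, is cheap to start on. Here's a minimal sketch, with field names I'm inventing for illustration rather than borrowing from any particular platform:

```python
import json
import time
from typing import Optional

# Hypothetical append-only evidence trail. The field names are invented;
# the point is that every action a digital worker takes leaves a record
# tying it to an owner and, where relevant, an approver.
def log_agent_action(path: str, agent: str, action: str,
                     outcome_owner: str,
                     approved_by: Optional[str] = None) -> None:
    record = {
        "ts": time.time(),               # when it happened
        "agent": agent,                  # which digital worker acted
        "action": action,                # what it did
        "outcome_owner": outcome_owner,  # who answers for the result
        "approved_by": approved_by,      # None means it ran autonomously
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Nothing about this is sophisticated. That's the point... the gap in most organizations isn't capability, it's that nobody decided to write things down.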
The hard part of the next few years isn't deploying more AI. That's happening whether organizations are prepared or not. The hard part is managing a growing digital workforce in a way that leadership can actually defend... to the business, to regulators, to customers, to a boardroom when something eventually goes wrong.
That's not purely an engineering problem. It's not something a governance committee can fully contain either. It's a leadership problem, and it belongs at that level.
The Question Is Whether You Get There Early Enough
Every enterprise is going to have digital workers. Some already have more than they realize. The question is whether leadership gets there early enough to put real controls in place, or whether they eventually discover they've been running their own internal Moltbook... less visible, less funny, and significantly more consequential.
Human-in-the-loop isn't the destination.
But taking it seriously is usually one of the first signs that an organization is taking control seriously at all.