The biggest barrier to real AI automation isn’t the model. It’s connectivity. And the protocol that’s solving it is also creating your next governance problem.
The Smartest Person in the Room Who Can’t Open Any Doors
Here’s a scenario playing out in organizations everywhere right now.
A team deploys an agent on one of the major business AI platforms. Writer. Copilot. Agentforce. It doesn’t matter which one. The agent is genuinely impressive. It can reason through complex situations, draft clear communications, summarize ambiguity into a recommended next step. Everyone in the room is a little amazed.
Then someone asks it to pull the account history before it drafts the follow-up.
The agent can’t. The CRM isn’t connected. So someone opens a browser tab, copies the relevant records, pastes them into the chat, and the agent does its thing. The output goes back to the human, who reads it, edits it, and then manually updates the CRM with the result.
Nobody automated anything. They just added a very articulate middleman.
This is the state of most enterprise AI today. Not because the models aren’t capable. Because the agents aren’t connected.
The agent is the smartest part of the workflow and the least connected.
Why the Gap Exists
Two forces are creating this problem simultaneously, and they're working against each other.
Enterprise AI platforms are built for business users. That's intentional, and good: the whole point is accessibility. But accessible platforms don't come with enterprise data connectors out of the box. Unless the platform provides a pre-built integration for your specific CRM, ERP, ticketing system, or document repository, the business user has no path to get data in or results back out without IT involvement. So they do it manually.
Enterprise systems of record aren't built for agents. CRMs, ERPs, ticketing systems, and document repositories were designed for humans interacting through interfaces. They have APIs, but using those APIs requires technical knowledge most business users don't have, and most IT teams don't have time to build custom integrations for every agent request. The data is there. The agent can't reach it without someone bridging the gap.
The result is swivel chair automation at scale. Organizations deploy dozens of agents believing they’re automating work. What they’re actually doing is automating the thinking while humans still do the data handling. The bottleneck just moved.
Enter MCP
The Model Context Protocol, MCP, is the most important development in enterprise AI connectivity that most non-technical leaders haven’t heard of yet. That’s about to change.
Here’s the problem it solves. Before MCP, connecting an AI agent to an enterprise system required a custom integration. Every agent, every system, its own bespoke connector. If you had ten agents and fifteen systems, you potentially needed a hundred and fifty custom integrations. That’s the N×M problem… and it’s why most organizations gave up and defaulted to the swivel chair.
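The arithmetic behind that claim is simple enough to sketch. Using the article's illustrative numbers of ten agents and fifteen systems:

```python
# Bespoke integrations: every agent needs its own connector to every system.
agents, systems = 10, 15
bespoke = agents * systems    # 10 × 15 = 150 custom integrations

# With a shared protocol like MCP: each agent speaks the protocol once,
# and each system gets one MCP server in front of it.
with_mcp = agents + systems   # 10 + 15 = 25 integration points

print(bespoke, with_mcp)      # 150 25
```

The cost of connectivity drops from multiplicative to additive, and every new agent or system you add afterward costs one integration instead of N.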
MCP provides a universal protocol. Any agent that supports MCP can talk to any MCP-compatible data source or tool without a custom integration. The agent doesn’t need to know how your SharePoint is configured, how your CRM stores records, or how your ERP handles authentication. It connects to an MCP server that sits in front of those systems and handles all of that on its behalf.
The analogy that lands: USB-C for enterprise AI. One standard plug, works with anything that supports it.
The momentum behind this standard is significant. Anthropic introduced MCP in November 2024. Within months, OpenAI, Google DeepMind, and Microsoft adopted it. In December 2025 it moved under the Linux Foundation’s Agentic AI Foundation, backed by Anthropic, OpenAI, Google, Microsoft, AWS, and others. By March 2026, Anthropic’s Python and TypeScript SDKs were seeing more than 97 million monthly downloads. There are now more than 10,000 active public MCP servers, with new ones being added daily. Enterprise software vendors are embedding MCP connectors directly into their products. The technical barrier to connectivity is collapsing fast.
Zapier took five years to reach 1,400 integrations. MCP hit that number in one year, because MCP servers can be spun up in an afternoon by a single developer with no approval from anyone.
That last sentence is both the promise and the problem.
The Catch
MCP solves the connectivity problem and simultaneously creates a new governance surface most organizations aren’t remotely ready for.
MCP is governance-neutral in the sense that the protocol doesn’t give you enterprise-grade identity, scoping, and auditability by default. Those controls depend entirely on how the server and client are implemented and deployed. An MCP server can be built with enterprise-grade security, proper OAuth, scoped permissions, immutable audit trails, the works. Or it can be built with none of that. Most implementations right now are built for developer convenience, not enterprise accountability.
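What "enterprise-grade" means in practice is implementation-specific, but the shape of the controls is easy to sketch. The function below is a hypothetical server-side gate, not part of any MCP SDK: before a tool runs, it checks the agent's granted scopes and writes an audit record, the two controls a convenience-first implementation typically skips.

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for an immutable, append-only audit store


def authorize_tool_call(agent_id: str, granted_scopes: set[str],
                        tool: str, required_scope: str) -> bool:
    """Hypothetical gate an MCP server could run before executing a tool.

    A production implementation would verify an OAuth token and write to
    tamper-evident storage; this sketch only shows the control points.
    """
    allowed = required_scope in granted_scopes
    AUDIT_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,  # the agent's own identity, not a borrowed human one
        "tool": tool,
        "decision": "allow" if allowed else "deny",
    })
    return allowed


# A read-only agent can read an account record but not update it.
authorize_tool_call("crm-summarizer", {"crm:read"}, "get_account", "crm:read")      # True
authorize_tool_call("crm-summarizer", {"crm:read"}, "update_account", "crm:write")  # False
```

Note that both the allow and the deny land in the audit log. An implementation built for developer convenience usually has neither the scope check nor the record of the decision.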
Recent security scanning and reporting suggest a meaningful share of public MCP servers still lack basic authentication or identity governance controls. Most of what’s out there is ungoverned by design. Community-built connectors that agents can call without any organizational approval. No security review. No access controls. No audit trail. Just a developer who wanted to solve a problem and posted the solution.
Then there’s the attack surface expanding underneath you whether you’re paying attention or not. Tool poisoning attacks, where a malicious MCP server manipulates agent behavior or exfiltrates data, are documented and being actively discussed by security researchers and vendors. An agent that calls the wrong MCP server doesn’t just fail to complete its task. It can be redirected, manipulated, or used to move data somewhere it was never supposed to go.
And then there’s the problem we covered in Part 3… digital tailgating. At scale.
In many common MCP deployment patterns, the agent ends up operating with the permissions of the user or service identity that established the connection. Every document the agent touches may be logged under that employee’s credentials. The MCP connection may be enterprise-approved, properly configured, fully legitimate. But if that connection is exposed via an API key, and many of them are, anyone with that key can query enterprise content through the agent using borrowed access. MCP makes this pattern dramatically easier to replicate across more systems, more agents, and more users simultaneously. The tailgating problem doesn’t just persist with MCP. It scales with it.
IT often can’t see any of this happening. MCP integrations can be created by anyone experimenting with AI tooling. They bypass traditional procurement and security review. An organization might have an approved MCP server for its document management system and a dozen unapproved community MCP servers running against internal tools that nobody in IT knows exist.
One more thing worth knowing: the MCP roadmap itself acknowledges the gap. Enterprise-managed authentication (moving away from static client secrets toward SSO-integrated flows, so IT can manage MCP access the same way it manages everything else) is explicitly listed as a 2026 priority. Audit trails, observability, and gateway patterns are also on the roadmap. The fact that they're there means they aren't solved yet. The standard was built for speed of adoption. The governance layer is being built after the fact.
This Isn’t a Reason to Avoid MCP
MCP is going to win. The standard has too much momentum, too many major vendors, too much developer adoption, too much genuine value, to fight. Locking it down out of fear is the wrong move and it won’t work anyway. Your teams will use it whether IT approves or not, because it solves a real problem they’ve been living with.
The question isn’t whether MCP will be in your environment in 18 months. It will be. The question is whether it will be governed when it is.
Three things organizations need to do now, before the MCP connections proliferate to the point where no one knows what’s connected to what.
Treat MCP servers as governed infrastructure, not developer tools. Every MCP server that connects to enterprise systems should go through the same approval process as any other system integration. It needs an owner. A defined scope. A review cadence. A revocation path. The fact that it’s easy to spin up doesn’t exempt it from governance. It makes governance more urgent. The current default is that anyone can create one and point it at enterprise systems, and nobody knows.
Apply the agent identity framework to MCP connections. Part 3 of this series established that agents need unique identities, scoped access, and expiring credentials. Those principles apply directly to MCP. When an agent connects to an MCP server, that connection should be scoped to what the agent actually needs, not to what the user who created it can access. The tailgating problem doesn’t disappear because the protocol is standardized. It gets worse because the protocol is so easy to use.
Add MCP to your agent registry. Part 2 established the registry as the foundation everything else sits on. Every MCP server an agent connects to is a system access entry in that registry. Which agents are connecting to which MCP servers? Who approved those connections? What data do those servers expose? If you don’t know the answers, you don’t know your attack surface. And right now, most organizations don’t know.
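As a deliberately simplified illustration of what one registry entry could capture, here is one possible shape. The field names are assumptions, not a standard, but each one answers a question from the three recommendations above: who owns the connection, what it can reach, and when it gets re-reviewed or revoked.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class McpConnectionRecord:
    """One agent-to-MCP-server connection in the agent registry (illustrative)."""
    agent_id: str
    mcp_server: str
    approved_by: str                # a named owner, not "whoever spun it up"
    scopes: list[str] = field(default_factory=list)  # what the agent needs, not what its creator can access
    next_review: date = date.max    # unset reviews never silently pass

    def review_overdue(self, today: date) -> bool:
        # An overdue review is the trigger to re-certify or revoke the connection.
        return today > self.next_review


record = McpConnectionRecord(
    agent_id="crm-summarizer",
    mcp_server="sharepoint-mcp.internal",
    approved_by="it-governance",
    scopes=["documents:read"],
    next_review=date(2026, 6, 1),
)
print(record.review_overdue(date(2026, 7, 1)))  # True: time to re-certify or revoke
```

Even a spreadsheet with these columns beats the current default. The point is not the data structure; it's that every connection has an answer on file before anyone asks.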
The Leadership Question
The data access gap exists because organizations deploying agents haven’t made a deliberate decision about how agents connect to enterprise data. They’ve left it to individual teams, individual platforms, and individual developers to figure out. The result is a patchwork of manual workarounds, unsanctioned MCP connections, and governance gaps nobody has mapped.
How agents access enterprise data (what standards, what governance, what controls) is a leadership decision. Not because the technology is complicated. Because the accountability implications are significant.
When an agent accesses a customer record through an MCP server, who authorized that? When an MCP server exposes internal documents to an AI tool, who approved that scope? When the employee who created that connection leaves, who owns it? When something goes wrong with data accessed through a community MCP server nobody in IT knew existed, who answers for it?
Those questions don’t get answered by developer convention or platform defaults. They get answered by an operating model that treats agent connectivity the same way it treats any other enterprise system access, with ownership, governance, and accountability built in from the start. Not bolted on after an incident.
What’s Next
The agent can think. MCP is giving it the ability to reach the data it needs to act. That’s genuinely powerful. That’s also genuinely new governance surface that most organizations aren’t managing yet.
The organizations that get this right won't be the ones who tried to stop MCP adoption. They'll be the ones who built the governance infrastructure before the connections proliferated (agent registry, identity framework, scoped access, revocation paths) and then let MCP accelerate their digital workforce rather than expand their risk surface.
Do you know which MCP servers your agents are currently connecting to? Did anyone approve those connections… or did they just happen?
Managing the Digital Workforce | Companion to Part 3