Most organizations badge their contractors. They track their access. They revoke it when the engagement ends. They’ve been doing this for decades. They don’t do any of it for AI agents.
The Contractor Who Never Left
Here’s a scenario that plays out more often than anyone wants to admit.
A team spins up an AI agent as part of a vendor engagement. Ninety-day proof of concept. The agent gets API credentials, access to a couple of internal systems, and a service account someone provisioned through the normal ticketing process. The POC wraps up. The vendor moves on.
The agent doesn’t.
Eighteen months later it’s still running. Still holding production credentials. Still making API calls against systems that have changed ownership twice since the original project. Nobody offboarded it because nobody thought of it as something that gets offboarded. It wasn’t in the HR system. It wasn’t in the contractor management platform. It was just… infrastructure.
Except it wasn’t infrastructure. It was an autonomous actor with standing access and no oversight.
If this were a human contractor, there’d be a process. Badge gets collected. VPN gets revoked. Access review happens on the last day. We’ve been doing this for decades.
For agents? Most organizations don’t even have a list of what credentials their agents hold, let alone a process for taking them back.
We Already Solved This Problem Once
The physical access control world figured this out a long time ago.
You show up at a building, you verify your identity, you get a badge scoped to the floors you’re actually authorized to access. The badge works during business hours. It logs every door you open. When the engagement ends, the badge gets deactivated. If you try to use it after that, it doesn’t quietly keep working… it flags.
Every one of those steps has a direct analog in the agent world.
Identity verification — confirming the agent is what it claims to be, not a spoofed or modified version.
Scoped access — the agent can reach the systems it needs and nothing else.
Time-bound credentials — access that expires and requires explicit renewal, not standing permissions that persist by default.
Activity logging — a record of every system the agent touched and every action it took.
Revocation on exit — a clean, reliable way to cut access when the agent’s role changes or ends.
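The badge analogy maps cleanly onto a data model. Here is a minimal sketch of what a "badge" for a software agent might look like; all the names are illustrative, not any particular platform's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class AgentCredential:
    """A badge for a software agent: scoped, time-bound, logged, revocable."""
    agent_id: str                    # identity: one credential per agent, never shared
    allowed_systems: frozenset[str]  # scoped access: the "floors" this badge opens
    issued_at: datetime
    ttl: timedelta                   # time-bound: expires unless explicitly renewed
    revoked: bool = False
    access_log: list[tuple[datetime, str, str]] = field(default_factory=list)

    def can_access(self, system: str) -> bool:
        """Every check enforces scope, expiry, and revocation together."""
        now = datetime.now(timezone.utc)
        return (not self.revoked
                and now < self.issued_at + self.ttl
                and system in self.allowed_systems)

    def record(self, system: str, action: str) -> None:
        """Activity logging: every door the agent opens leaves a record."""
        self.access_log.append((datetime.now(timezone.utc), system, action))
```

The point of the sketch is that none of the five properties is exotic on its own; the discipline is in refusing to issue any credential that lacks one of them.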
None of this is novel. It’s operational discipline that enterprises have applied to people and physical systems for years. The gap isn’t in knowing what to do. The gap is in recognizing that agents require exactly the same treatment.
And most don’t get it. Most agents are running on shared service accounts, ambient credentials, and permissions that were scoped for a proof of concept and never tightened for production.
Why Your Existing Identity Stack Doesn’t Cover This
The instinct is to push back here. We have OAuth. We have API keys. We have IAM roles. This is a solved problem.
It isn’t.
Those frameworks were built with a core assumption baked in — a human is present at the moment of authorization. Someone clicks Allow. Someone approves a consent screen. Someone initiates a session with a natural beginning and end.
Agents break that assumption in a few ways that matter.
They operate continuously without a human present. An agent doesn’t log in at 9am and log out at 5. It runs overnight, on weekends, through holidays. Token refresh happens automatically. Nobody’s asking “should this agent still have access to this?” at 2am on a Saturday. The authorization moment happened once, possibly months ago, and everything since has been coasting on that original approval.
They don’t have a natural session boundary. With humans there’s a logout. A timeout. A browser close. Some organic moment where the system re-evaluates whether access should continue. Agents don’t have that. You have to engineer the boundary deliberately… and most implementations don’t.
And then there’s the multi-agent problem, which gets complicated fast. When Agent A produces output that Agent B acts on, whose authority is Agent B operating under? I’ve built systems where a voice AI agent handles an outbound interaction, then passes structured results to a second agent that analyzes the output and generates a normalized disposition. Two agents, two separate services, two credential sets… but operating on the same task, in sequence. In that simple two-agent chain, attribution is still manageable. In a more complex workflow with five or six agents handing off to each other, the question of “who authorized this action?” gets genuinely hard to answer.
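One way to keep that question answerable is to make every handoff carry the full authorization chain with it, so any downstream action can be traced back to the original grant. A minimal sketch, with hypothetical names:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TaskAuthorization:
    """Chain of custody for multi-agent handoffs: every action carries the
    path back to the original authorization, not just the last hop."""
    root_grant: str          # e.g. the ticket or approval that started the task
    chain: tuple[str, ...]   # agent ids, in handoff order

    def delegate(self, to_agent: str) -> "TaskAuthorization":
        """Handing off appends to the chain; the root grant never changes."""
        return TaskAuthorization(self.root_grant, self.chain + (to_agent,))

    def attribution(self) -> str:
        return f"{self.root_grant} -> " + " -> ".join(self.chain)


# the voice agent hands off to the analysis agent; both trace to one grant
auth = TaskAuthorization("ticket-4821", ("voice-agent",))
auth2 = auth.delegate("analysis-agent")
```

With five or six agents the chain gets longer, but "who authorized this action?" stays a lookup rather than a forensic exercise.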
And then there’s the problem most organizations don’t see coming at all… digital tailgating.
There’s a concept in physical security that everyone who’s worked in a corporate building understands. Someone with a valid badge holds the door open — sometimes intentionally, sometimes just being polite — and an unauthorized person walks through behind them. The access control system logs one badge scan. Two people entered.
That’s exactly what happens when a business user connects an enterprise-approved agent to an internal system like SharePoint. The employee authenticates through Azure AD. The agent follows. Every document the agent touches is logged under the employee’s credentials. So far, this all looks legitimate — because at the authentication layer, it is.
Here’s where it breaks. Several enterprise AI platforms let business users expose agents via a simple API key or bearer token. Suddenly the agent’s access — backed by the employee’s full SharePoint permissions — is available to anyone with that key. A contractor. An automated workflow. A Slack message someone forwarded. The key has no expiration. No rotation policy. No connection to the identity governance framework the organization spent years building.
Your SharePoint permissions didn’t fail. Your badge system didn’t fail. Your tailgating policy failed. And most organizations don’t have one for their digital workforce.
It gets worse. When that employee leaves and IT revokes their Azure AD credentials, the SharePoint access disappears. But if that agent’s API key has been embedded in production workflows… those workflows break in ways nobody can immediately explain, because nobody knew the agent existed, let alone that it was operating under a departing employee’s credentials.
This isn’t a hypothetical edge case. It’s the default behavior of several enterprise AI platforms that have done everything right at the authentication layer and left a gap at the delegation layer. The IAM integration controls who can create the agent. The API key controls who can use it. Most organizations govern the first and have no visibility into the second.
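One mitigation at that delegation layer is to keep every agent key tethered to the identity that issued it: the key expires on its own, and it dies the moment the delegating employee is deprovisioned. A sketch under those assumptions; the directory lookup here is a stand-in for a real IdP query:

```python
from datetime import datetime, timedelta, timezone

# hypothetical stand-in for the identity provider; in practice this would be
# a live query against the directory (e.g. the employee's account status)
ACTIVE_EMPLOYEES = {"alice@example.com"}


class DelegatedKey:
    """An agent API key that stays bound to the identity that delegated it.
    If the delegator leaves, or the key ages out, access dies with it."""

    def __init__(self, key_id: str, delegated_by: str, lifetime: timedelta):
        self.key_id = key_id
        self.delegated_by = delegated_by
        self.expires_at = datetime.now(timezone.utc) + lifetime

    def is_valid(self) -> bool:
        return (datetime.now(timezone.utc) < self.expires_at
                and self.delegated_by in ACTIVE_EMPLOYEES)
```

This is the digital equivalent of deactivating every badge a departing employee ever signed a visitor in on.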
The gap isn’t going unnoticed, either. NIST’s AI Agent Standards Initiative launched in February 2026 with agent identity and authorization as a specific focus area. The concept paper on AI Agent Identity and Authorization is out for comment right now. This is moving from best practice to compliance expectation faster than most organizations are preparing for.
What Getting This Right Actually Looks Like
I’ve built the minimum viable version of this. Separate credentials per agent, separate endpoints, separate blast radius, every interaction logged. It works. It’s also held together with environment variables and deployment configuration. At the scale of a controlled implementation that’s fine. At enterprise scale across dozens of agents and hundreds of integrations… it’s not a governance posture. It’s a bet that nothing goes wrong.
Getting it right isn’t about buying a platform. It’s about establishing a baseline that most organizations are missing entirely.
Each agent needs its own identity. Not a shared service account that six different agents authenticate through. When something goes wrong — and eventually something will — you need to know which agent did it. Shared credentials make that question unanswerable.
Access needs to reflect what the agent actually does, not what was easiest to provision. This sounds obvious. In practice, most agents are running with permissions inherited from whatever service account was available when someone needed to ship the POC. Nobody went back and tightened the scope.
Credentials should expire. Standing access is the default because it’s frictionless. But an agent that was authorized to access a system for a specific project shouldn’t still have that access six months after the project ended. Renewal should require justification… even if that justification is automated.
In multi-agent workflows, there needs to be a traceable line from the original authorization to every downstream action. This is the chain-of-custody problem. Straightforward in a two-agent sequence. Requires deliberate design at scale.
And revocation needs to actually work. Not “we’ll rotate the API key next quarter.” Immediate, complete, and without quietly breaking three other systems that were depending on the same credential.
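Revocation that actually works depends on one precondition: a registry that knows, per agent, what that agent is connected to. A minimal sketch of that precondition, with illustrative names:

```python
class AgentRegistry:
    """A per-agent connection registry: revocation becomes a lookup,
    not a war room."""

    def __init__(self):
        self._connections: dict[str, set[str]] = {}  # agent_id -> systems it touches
        self._revoked: set[str] = set()

    def register(self, agent_id: str, system: str) -> None:
        """Record every connection at provisioning time, not after an incident."""
        self._connections.setdefault(agent_id, set()).add(system)

    def revoke(self, agent_id: str) -> set[str]:
        """Cut access and return the systems that need to know,
        i.e. the dependent workflows you would otherwise break silently."""
        self._revoked.add(agent_id)
        return self._connections.get(agent_id, set())

    def is_active(self, agent_id: str) -> bool:
        return agent_id in self._connections and agent_id not in self._revoked
```

The hard part is not the data structure; it is the organizational discipline of never provisioning an agent connection that bypasses it.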
The Revocation Test
In Part 2, the diagnostic question was whether you could produce a complete AI inventory on short notice. Here’s the Part 3 version.
If one of your AI agents were compromised right now… credentials leaked, behavior drifting outside its intended scope… could you revoke its access within the hour? Without a war room to figure out what it’s connected to? Without breaking dependent workflows in the process?
For most organizations, being honest about that answer is uncomfortable. The agent’s credentials are shared with other services. Nobody’s sure what systems it touches. The person who provisioned the access left the company. The documentation, if it exists, is in a wiki page that hasn’t been updated since the original sprint.
That’s not a security posture. That’s an incident waiting for a trigger.
The fix isn’t complicated in concept. Unique identities. Scoped access. Expiring credentials. Clean revocation. Activity logs. We’ve been doing this for human workers and physical access for decades. The discipline exists. It just hasn’t been extended to the digital workforce yet.
And the window for doing it proactively… before a regulator, an auditor, or an incident forces the conversation… is closing.
What’s Next
Identity tells you who the agent is and what it’s allowed to touch. The next question is harder. Within those bounds, which actions should an agent take autonomously… and which ones need a human to approve before anything happens?
That’s where we go next.
Managing the Digital Workforce | Part 3