AI Governance · AI Security · CISO

AI Governance Cosplay (And Why Your Identity Layer Is the Real Risk)

New research exposes the gap between AI security confidence and reality. The blast radius isn't model risk; it's the identity layer.

May 15, 2026 · 6 min read · Robert Wood


I live on a horse farm in Maryland. You know what we don't do? Open the gate, let the horses out (especially if they're hungry and ready to move fast) and then go looking for the fencing. That's how you spend a Saturday morning chasing a 1,200-pound animal down the road in your robe while your neighbors livestream it.

That's not all that dissimilar to how a lot of AI rollouts are happening though.

Every single week I have multiple conversations with friends, peers, customers, and prospects about the pressure from executive leadership to adopt AI today. Vendors turning on AI features, sales teams building new widgets, copilots getting used on company docs, and so on. The rollout is happening and the guardrails haven't even been planned out yet.

Delinea's 2026 Identity Security Report (an awesome read by the way...no affiliation, just kudos) dropped a few stats last week that jumped out at me: 87% of organizations say their identity security posture is ready to support AI-driven automation. Nearly half, 46% specifically, admit their identity governance around AI systems is deficient. And 90% report being pressured to loosen identity controls to enable AI initiatives.

Read those three numbers together. They don't reconcile. There's a meme reference here that I'll avoid for the sake of professionalism...but the math isn't mathing.

This is the cybersecurity equivalent of signing up for a Spartan Ultra after watching one YouTube video about burpees. The vibes are immaculate. The training and preparation are not.

The Confidence Paradox Has a Name. Several, Actually.

Delinea calls this the "AI security confidence paradox." Psychologists have better names for it.

  • Overconfidence bias: the well-documented gap between how good we think we are and how good we actually are.
  • Illusion of control: believing we have more influence over outcomes than we do.
  • Illusory superiority: the statistical impossibility of most organizations rating themselves "above average" at something, which is exactly what's happening when 87% of respondents claim AI readiness.

Pick your flavor. They're all in this dataset. None of them are flattering. There's probably also something to be said for the Dunning-Kruger effect here, but the point is hopefully well made.

The uncomfortable truth is that the same humans who acknowledge they don't have the controls in place to govern AI agents are also the ones telling pollsters for reports like this that they're ready. I don't think it's flagrant dishonesty. I believe it's a cognitive bias doing exactly what cognitive biases do: protecting us from the discomfort of admitting we're moving faster than our ability to stay safe. Or it's us assuming we'll be able to catch up and make things better faster than we actually can.

The Boring Part of the Stack Is Where the Blast Radius Lives

Organizations are racing to deploy agentic AI, and the identity layer, the primary thing that determines what these agents can actually do in your environment, is being treated as a speed bump.

The report's other findings tell a consistent story:

  • 80% of organizations can't always explain why a non-human identity performed a privileged action.
  • Fewer than 1 in 3 validate non-human identity or AI agent activity in real time.
  • Non-human identities outnumber human ones by as much as 82 to 1.

Read that last one again. For every human in your environment with a thoughtfully provisioned account, a quarterly access review, and a manager who occasionally remembers they exist, there are 82 machine identities running around with standing privileges and zero adult supervision.

That's not a fence problem. That's an open pasture with a freeway running through it.

We spent more than two decades hardening human identity. SSO. MFA. Just-in-time provisioning. Role-based access. Quarterly recertifications. Phishing-resistant authentication and zero trust. The slow, unglamorous work of figuring out who should have access to what. We're about to undo major parts of it in 18 months because someone in a board meeting asked when we'll have an AI strategy or be deploying the latest release from OpenAI or Anthropic.

AI Governance Frameworks Are Solving the Wrong Problem

This is where the irony deepens. I actually like compliance and what it can do for us regarding communication, assurance, and prioritization. I don't love everything about it though...to be clear. The AI governance frameworks everyone is rushing to adopt (NIST AI RMF, ISO 42001, the EU AI Act) are pointed heavily at model risk and AI-builder use cases. Bias. Hallucination. Training data lineage. Explainability.

This is important work. Genuinely. I'm not dismissing it. But it only covers a few use cases around AI, and those unfortunately aren't representative of many organizations' adoption stories.

The actual blast radius from an AI agent in your environment doesn't come from the model hallucinating. It comes from the boring part of the stack:

  • Who issued the credential?
  • What scope does it have?
  • When did it last rotate?
  • Is anyone watching it move?
  • What happens if it gets compromised?
  • Can you revoke it in under five minutes?
  • Do you have audit trails that would survive a regulator's questions?

This is identity governance. It's access management. It's privileged session monitoring. It's the unsexy operational scaffolding that every mature security program has been building for years, and that we're now being asked to bypass because the AI train has left the station.
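
To make a couple of those questions concrete, here is a minimal sketch of a rotation check, assuming an AWS environment and the boto3 SDK. The 90-day threshold and the focus on IAM user access keys are illustrative assumptions on my part, not guidance from the report.

```python
# Minimal sketch, assuming an AWS environment and the boto3 SDK.
# The 90-day threshold is an illustrative assumption, not a recommendation.
from datetime import datetime, timedelta, timezone

import boto3

ROTATION_THRESHOLD = timedelta(days=90)  # assumed rotation policy

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

# Walk every IAM user and flag active access keys that have outlived the threshold.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age = now - key["CreateDate"]
            if key["Status"] == "Active" and age > ROTATION_THRESHOLD:
                print(
                    f"{user['UserName']}: key {key['AccessKeyId']} is "
                    f"{age.days} days old and still active"
                )
```

Nothing here is sophisticated, and that's the point: if a twenty-line script can surface keys nobody has rotated in a year, the gap isn't tooling, it's ownership.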

So in some ways, you don't have an AI governance problem. You have an agent governance problem wearing an AI governance costume. And the confidence you have in your readiness is, statistically, the thing most likely to hurt you.

What to Actually Do About It

If you're a CISO reading this and feeling personally attacked, sorry...but good! That's the appropriate response. Here's where to start:

  1. Inventory your non-human identities for real. Not the audit you did two years ago. Not the spreadsheet from the cloud migration. A current, validated inventory of every service account, every API key, every agent credential, and every machine identity with access to a production system (see the first sketch after this list for what a governed record might look like). If you can't produce this in 30 days, you don't have an AI problem yet; you have a foundational identity problem that AI is about to make catastrophic.

  2. Apply human-grade governance to machine identities. Quarterly access reviews. Owner assignment. Expiration dates. Justification for standing privileges. The same scaffolding that you put around human accounts a decade ago. Yes, this is operationally painful. That's the point.

  3. Build real-time validation, not periodic attestation. The "annual audit" model breaks immediately when you have agents making thousands of authorization decisions per minute. You need detection and response oriented toward non-human identity (NHI) behavior, not quarterly checkbox exercises; the second sketch after this list shows the shape of that check.

  4. Resist the pressure. When the business says "loosen the controls so we can ship the AI initiative," the answer is not yes. The answer is "let me show you a path that ships the initiative and doesn't create a breach we'll be explaining to regulators in two years." That path exists. It just requires saying no to the first, guardrail-free version of the ask.

  5. Update your governance frameworks. Your NIST AI RMF implementation should have identity controls as a first-class concern, not an afterthought. If your AI governance committee is staffed entirely by data scientists and lawyers, you should make sure your team gets a seat at that table.
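
For items 1 and 2, here's a minimal sketch of what a governed machine-identity inventory record could look like. The field names, the quarterly review window, and the specific governance checks are my own illustrative assumptions, not requirements from the Delinea report or any framework.

```python
# Minimal sketch of a governed machine-identity inventory record.
# Field names and the review window are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

REVIEW_WINDOW = timedelta(days=90)  # assumed quarterly review cadence

@dataclass
class MachineIdentity:
    name: str                      # e.g. a service account or agent credential
    owner: Optional[str]           # an accountable human, not a team alias
    justification: Optional[str]   # why it holds standing privileges
    expires_on: Optional[date]     # hard expiration, not "until someone notices"
    last_reviewed: Optional[date]  # most recent access review

    def findings(self, today: date) -> list[str]:
        """Return the governance gaps a reviewer should see for this identity."""
        gaps = []
        if not self.owner:
            gaps.append("no owner assigned")
        if not self.justification:
            gaps.append("standing privilege with no justification")
        if self.expires_on is None:
            gaps.append("no expiration date")
        elif self.expires_on < today:
            gaps.append("expired but possibly still active")
        if self.last_reviewed is None or today - self.last_reviewed > REVIEW_WINDOW:
            gaps.append("overdue for access review")
        return gaps
```

If most of your service accounts can't populate those five fields, that's your inventory finding right there.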
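
And for item 3, a sketch of what per-event validation might look like: compare each observed NHI action against a baseline of expected behavior and alert on anything out of profile. The event fields, baseline structure, and example identity are assumptions for illustration; in practice this logic sits on top of whatever audit or event stream you already collect.

```python
# Minimal sketch of real-time NHI validation: every event is checked against
# a per-identity baseline of expected behavior. The baseline format and the
# "invoice-agent" example are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Baseline:
    allowed_actions: frozenset[str]    # actions this identity is expected to perform
    allowed_resources: frozenset[str]  # resource prefixes it is expected to touch

BASELINES = {
    "invoice-agent": Baseline(
        allowed_actions=frozenset({"s3:GetObject", "s3:PutObject"}),
        allowed_resources=frozenset({"arn:aws:s3:::invoices/"}),
    ),
}

def validate(identity: str, action: str, resource: str) -> list[str]:
    """Return alerts for an observed event; an empty list means in profile."""
    baseline = BASELINES.get(identity)
    if baseline is None:
        return [f"{identity}: no baseline on file, unmanaged identity?"]
    alerts = []
    if action not in baseline.allowed_actions:
        alerts.append(f"{identity}: unexpected action {action}")
    if not any(resource.startswith(p) for p in baseline.allowed_resources):
        alerts.append(f"{identity}: unexpected resource {resource}")
    return alerts

# Example: this agent snapshotting a database should page someone.
print(validate("invoice-agent", "rds:CreateDBSnapshot", "arn:aws:rds:::prod-db"))
```

The specifics will vary, but the design choice matters: the decision happens per event, not per quarter.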

Build the fence first. Then open the gate.


Source: Delinea's 2026 Identity Security Report: Uncovering the Hidden Risks of the AI Race - https://delinea.com/resources/ai-and-identity-security-report-pdf
