The Identity Problem at the Center of Everything
Agentic AI is reaching production faster than the governance meant to control it, creating a vacuum visible across enterprise, security, and advisory markets.
Three different markets. Three different conversations. One underlying problem that practitioners keep arriving at from different directions: nobody has figured out how to govern what AI agents are allowed to be, do, or become — and that gap is now showing up on balance sheets, in audit reports, and in consulting pipelines simultaneously.
The thread running through this week
When 93% of enterprises are piloting agentic AI and 72% of them have no defined process for securing those agents’ identities, you don’t have a tooling problem. You have a governance vacuum that’s large enough to drive a consulting practice through — and that’s exactly what’s happening.
This week, the signals across enterprise AI deployment, identity security, and the advisory market all point to the same inflection: agentic systems are in production faster than the control planes that should govern them. The people who understood that earliest are now the ones getting called.
Enterprise AI & Agentic Systems
What’s actually shipping
The agentic AI story in Q1 2026 is not about capability — the models are capable enough. It’s about operational surface area. Salesforce Agentforce, Microsoft Copilot Studio, and Google Vertex AI agents are all in live enterprise deployments. What practitioners are discovering is that multi-agent orchestration frameworks like LangGraph and CrewAI spin up ephemeral identities, consume OAuth tokens, and call APIs in ways that existing observability stacks were never designed to capture.
The latency, cost, and reliability challenges are real but solvable. The harder problem is that agent workflows are accreting permissions organically — a tool call here, a service account there — and nobody owns the cleanup. Agents get deprovisioned when the pilot ends, assuming the pilot ever ends. Many never do.
OpenAI’s Agents SDK and Anthropic’s Model Context Protocol are both pushing toward standardized tool integration patterns, and MCP in particular is gaining traction as a way to give agents structured access to enterprise systems. The security community is already stress-testing what happens when an MCP server is misconfigured or when prompt injection reaches a tool-use layer with write access. The answer is not encouraging.
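The failure mode the security community is probing can be sketched in a few lines. This is an illustrative, deny-by-default tool-call gate in plain Python, not the MCP SDK; all tool and agent names are hypothetical:

```python
# Sketch: gate agent tool calls so write-capable tools require an explicit
# allow-list entry, while read-only tools pass by default. Without a gate
# like this, a prompt-injected request reaches the write path unchecked.

READ_ONLY_TOOLS = {"search_docs", "get_ticket"}           # safe by default
WRITE_ALLOWLIST = {"update_ticket": {"agent-triage-01"}}  # tool -> permitted agents

def authorize_tool_call(agent_id: str, tool: str) -> bool:
    """Return True only if this agent may invoke this tool."""
    if tool in READ_ONLY_TOOLS:
        return True
    # Write-capable tools are deny-by-default, even if the model asks nicely.
    return agent_id in WRITE_ALLOWLIST.get(tool, set())

# A prompt-injected call to a write tool from an unlisted agent is refused.
assert authorize_tool_call("agent-triage-01", "update_ticket") is True
assert authorize_tool_call("agent-research-07", "update_ticket") is False
assert authorize_tool_call("agent-research-07", "search_docs") is True
```

The point of the sketch: authorization must live outside the model's context window, because anything inside it is reachable by injection.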
The governance gap is the product gap
Enterprises that are getting agentic deployments right are treating agent identity as a first-class engineering concern from day one — scoping credentials tightly, building explicit revocation into the workflow lifecycle, and logging tool calls with the same rigor as privileged human access. That’s a minority. The majority are discovering these problems in incident reviews.
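The pattern the minority is following — tight scoping, explicit revocation, privileged-access-grade logging — can be condensed into a minimal sketch. This is an assumed shape, not any vendor's API; all class and scope names are illustrative:

```python
import time
import uuid

class AgentCredential:
    """Illustrative short-lived, revocable agent credential with an audit trail."""

    def __init__(self, agent_id: str, scopes: set, ttl_seconds: int):
        self.token = str(uuid.uuid4())
        self.agent_id = agent_id
        self.scopes = frozenset(scopes)            # scoped tightly at issuance
        self.expires_at = time.time() + ttl_seconds
        self.revoked = False
        self.audit_log = []                        # every tool call, allowed or not

    def revoke(self):
        """Explicit revocation, wired into the workflow's teardown step."""
        self.revoked = True

    def call_tool(self, tool: str, required_scope: str) -> bool:
        """Log every tool call; allow it only while valid and in scope."""
        allowed = (not self.revoked
                   and time.time() < self.expires_at
                   and required_scope in self.scopes)
        self.audit_log.append((time.time(), f"{tool}:{'ok' if allowed else 'denied'}"))
        return allowed

cred = AgentCredential("agent-pilot-01", {"crm:read"}, ttl_seconds=900)
assert cred.call_tool("fetch_account", "crm:read") is True
assert cred.call_tool("delete_account", "crm:write") is False  # out of scope
cred.revoke()                                                  # pilot ends
assert cred.call_tool("fetch_account", "crm:read") is False
assert len(cred.audit_log) == 3
```

The revocation call is the part most pilots skip: it has to be part of the workflow lifecycle, not an item on a cleanup ticket.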
Identity Security, NHI & IAM
The numbers that should stop a CISO mid-sentence
CyberArk’s 2025 Identity Security Threat Landscape Report puts the machine-to-human identity ratio at 82:1 across the average enterprise — and that number is climbing as agentic frameworks accelerate. More pointed: NHI-related breaches now account for a plurality of identity incidents, outpacing compromised human credentials for the first time in the report’s history. That’s a structural shift, not a blip.
Okta is responding with substance. Their engineering team published detail this week on extending Identity Threat Protection to machine identities — specifically, building a continuous authentication graph that correlates OAuth token usage patterns across API calls to detect lateral movement from compromised machine credentials. The Midnight Blizzard OAuth abuse patterns are clearly the forcing function here. Forty percent of ITP customers are now generating alerts from machine identity anomalies, up from near-zero 18 months ago. Okta is treating NHI as a product surface, not a slide in a roadmap deck.
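The core idea behind correlating token usage — a per-token baseline of normal behavior, with deviations flagged as possible lateral movement — reduces to a toy model. This is a sketch of the concept only, not Okta's implementation; token and resource names are made up:

```python
from collections import defaultdict

# Toy model: learn which resources each OAuth token normally calls,
# then flag calls outside that baseline as candidate lateral movement.
baseline = defaultdict(set)

def observe(token: str, resource: str):
    """Record a resource access during the learning window."""
    baseline[token].add(resource)

def is_anomalous(token: str, resource: str) -> bool:
    """A resource this token has never touched before is suspicious."""
    return resource not in baseline[token]

# Learning phase: this service token only ever reads the CRM API.
for _ in range(3):
    observe("tok-svc-billing", "crm.read")

# A compromised token suddenly reaching the admin API stands out.
assert is_anomalous("tok-svc-billing", "admin.users.write") is True
assert is_anomalous("tok-svc-billing", "crm.read") is False
```

A production system would weight by time, client, and IP rather than a bare set membership test, but the shape of the signal is the same.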
The regulatory skeleton is forming
NIST finalized supplemental guidance to SP 800-63 this week establishing that agent credentials must be bound to a traceable principal chain — a human identity owner — with explicit session duration limits. This isn’t a mandate yet. It’s the shape of what auditors will demand in 12–18 months, particularly in financial services and critical infrastructure where NIST frameworks get contractually incorporated. Build to it now or scramble later.
The one to watch closely: SailPoint’s April analyst day. Job postings over the past 30 days reference a “machine identity lifecycle” product team that didn’t exist before. SailPoint has the IGA installed base and the post-IPO platform pressure to move into NHI governance in a way that would reframe the whole competitive map — putting them directly against Oasis Security and CyberArk’s emerging AI identity posture management positioning.
AI Consulting & Advisory Market
The market is bifurcating — and fast
Accenture posted $1.8B in AI-related bookings in a single quarter. That number is real, but the composition matters: it’s implementation and managed services. The strategy work is commoditizing because Microsoft, Google, and AWS are all bundling co-funded advisory into cloud commit deals. You can’t charge $400/hour for “AI transformation strategy” when the hyperscaler is giving it away to protect a seven-figure infrastructure commitment.
What’s not commoditizing: AI risk governance. Boards are now requiring formal AI risk assessments before approving multi-agent workflows for production. CISOs — not CIOs — are owning those budgets and driving the mandates. Firms that can sit at the intersection of NIST AI RMF, EU AI Act compliance, and actual LLM red-teaming are closing $200K–$500K engagements in six to eight weeks without competitive RFPs.
The EU AI Act’s August 2026 deadline for high-risk system compliance is the near-term catalyst. Enterprises that have been watching the regulatory clock have about 90 days before that deadline becomes an emergency. The advisory surge around GDPR in 2018 is the right historical analog — and the firms that captured that wave were the ones who had already built the technical muscle, not just the compliance vocabulary. The same split is happening now between advisors who can actually assess model behavior and those who can only produce a gap analysis document.
What to watch
- SailPoint’s April analyst day for a formal NHI governance product announcement — if they ship a machine identity lifecycle capability into their IGA workflow, it forces every enterprise running SailPoint to have a conversation about consolidating NHI tooling rather than buying point solutions.
- MCP security posture as the protocol gains enterprise adoption — the attack surface created by misconfigured MCP servers and tool-use prompt injection is underexamined, and the first significant incident involving an MCP-connected agent with write access to a production system will move the entire market.
- Where EU AI Act advisory spend flows in Q2 — whether it goes to legal/compliance firms repackaging existing risk frameworks or to security-native technical advisors who can actually assess agentic system behavior will tell you a lot about how enterprises are defining “AI risk” as a discipline.
This week’s sources
- CyberArk 2025 Identity Security Threat Landscape Report — cyberark.com
- Okta Engineering: Extending Identity Threat Protection to Machine Identities — okta.com
- NIST SP 800-63 Supplemental Guidance: Automated and Delegated Credentials — csrc.nist.gov
- Oasis Security: State of Non-Human Identity 2025 — oasis.security
- Accenture Q2 FY2026 Earnings – AI Bookings Commentary — newsroom.accenture.com
- NIST AI Risk Management Framework – Adoption Guidance for Enterprises — nist.gov
- EU AI Act Implementation Timeline – High-Risk System Deadlines — digital-strategy.ec.europa.eu
- Microsoft AI Partner Advisory Co-Funding Programs FY2026 — partner.microsoft.com