RSAC 2026: The Week the Industry Admitted It Has No Idea How to Govern What It Just Built
RSAC 2026 drew 44,000 attendees and a flood of AI security product launches, but the real story was what the data revealed: 500,000 machines backdoored through LiteLLM, AI-generated code producing 35 CVEs in a single month, AI agent traffic up 7,851%, and a federal judge calling the Pentagon's AI ban "Orwellian." The governance gap is widening fast.
Safe AI Academy · March 29, 2026 · 13 min read
This year's RSAC was a genuine inflection point, and not because of the 44,000 attendees or the record number of product launches. It was because of what happened between the keynotes.
While vendors were announcing AI security products that "barely existed twelve months ago" (the Cybersecurity Excellence Awards organizers' words, not mine), a supply chain attack was cascading through half a million machines. AI-generated code was producing CVEs at an exponential clip. A federal judge was calling the Pentagon's treatment of an AI company "Orwellian." And Sam Altman, CEO of the world's best-funded AI lab, quietly stepped away from safety oversight.
The thing is, RSAC 2026 was not a story about solutions. It was a story about the distance between what the industry is building and what the industry can govern. And that distance got a lot wider this week.
The LiteLLM Catastrophe: When Security Tools Become the Attack Surface
Let me start with the story that should terrify every engineering team running AI in production.
TeamPCP, the same threat actor behind the Trivy compromise on March 19, executed a cascading supply chain attack that moved from Trivy to Checkmarx KICS to LiteLLM in less than a week. The LiteLLM Python package, a popular LLM proxy with 3.4 million daily downloads, was backdoored in versions 1.82.7 and 1.82.8 with a credential harvester, a Kubernetes lateral movement toolkit, and a persistent systemd backdoor. Version 1.82.8 added a file that executed at Python interpreter startup, meaning the malware ran every time any Python process launched on an infected machine.
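How do you check whether a machine is carrying that kind of persistence? Here is a minimal sketch, assuming the startup hook was a .pth file or a sitecustomize module, the two standard mechanisms for running code at every Python interpreter launch; the incident reports do not name the exact file, so treat this as illustrative. Most hits will be legitimate tooling, but the point is to know exactly what runs at startup.

```python
# Minimal sketch: list Python startup hooks that could act like the
# LiteLLM persistence file. Assumes a .pth-style hook or sitecustomize
# module (an assumption -- the exact mechanism was not published).
# Review hits manually; many .pth files are legitimate.
import site
from pathlib import Path

def find_startup_hooks():
    hits = []
    for sp in site.getsitepackages() + [site.getusersitepackages()]:
        sp = Path(sp)
        if not sp.is_dir():
            continue
        # Lines in a .pth file that start with "import " execute at
        # every interpreter launch.
        for pth in sp.glob("*.pth"):
            for line in pth.read_text(errors="ignore").splitlines():
                if line.startswith("import "):
                    hits.append((pth, line.strip()))
        # A sitecustomize module is imported automatically if present.
        for mod in sp.glob("sitecustomize.py"):
            hits.append((mod, "<sitecustomize module>"))
    return hits

if __name__ == "__main__":
    for path, detail in find_startup_hooks():
        print(f"{path}: {detail}")
```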
The scale is staggering. Vx-Underground estimated 500,000 machines infected and 300GB of data exfiltrated. Wiz found LiteLLM running in 36% of monitored cloud environments. Microsoft GraphRAG, Google ADK, and Checkmarx were among the downstream victims. The attacker compromised LiteLLM co-founder and CEO Krish Dholakia's GitHub account and injected the payload during the wheel build process. PyPI quarantined the malicious versions within roughly three hours, but with 3.4 million daily downloads, the exposure window was enormous.
Here is the part that keeps me up at night: the attack included a self-propagating worm capability. Stolen npm tokens were automatically weaponized to infect packages maintained by the victims themselves. Security tools were not just the target; they were the propagation vector.
I will be honest, I have been writing about supply chain risks for weeks now. GlassWorm, ClawHavoc, the Cline compromise. But TeamPCP represents something qualitatively different. This was not poisoning a skill marketplace or hiding Unicode payloads in VS Code extensions. This was a surgical, cascading attack that weaponized the security scanning tools themselves. The tools we run to detect compromises became the compromise. Let me put it this way: it is like discovering that your smoke detectors have been quietly pumping carbon monoxide into the house.
For compliance teams: if your software bill of materials does not include your AI development toolchain, and if your supply chain risk controls do not cover transitive dependencies in LLM proxy libraries, you have a gap that 500,000 machines just fell through.
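If you want a concrete starting point, here is a minimal sketch of a CI gate that fails the build when a known-backdoored LiteLLM release is installed. The version strings come from the incident above; everything else is illustrative, and in practice KNOWN_BAD would be fed by your SBOM or vulnerability-intelligence tooling.

```python
# Minimal sketch: fail a CI step if a known-backdoored LiteLLM release
# is installed. Bad versions are the ones named in the incident reports;
# extend KNOWN_BAD as your SBOM tooling flags other packages.
from importlib.metadata import version, PackageNotFoundError

KNOWN_BAD = {
    "litellm": {"1.82.7", "1.82.8"},
}

def audit() -> list[str]:
    findings = []
    for pkg, bad_versions in KNOWN_BAD.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            continue  # package not present, nothing to flag
        if installed in bad_versions:
            findings.append(f"{pkg}=={installed} is a known-backdoored release")
    return findings

if __name__ == "__main__":
    problems = audit()
    for p in problems:
        print("FAIL:", p)
    raise SystemExit(1 if problems else 0)
```

Pair it with pip's hash-checking mode (`--require-hashes`) so a tampered wheel fails to install in the first place rather than getting caught after the fact.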
The AI Code CVE Explosion: Your Copilot Is Writing Vulnerabilities
While the supply chain was burning from the outside, a quieter crisis was building from the inside.
The hard numbers: AI-generated code was tied to 35 CVEs in March alone, part of a running total of 74 and counting. Think about what that means. We are not talking about AI coding assistants occasionally introducing a bug. We are watching a measurable, accelerating production of security vulnerabilities by the tools that developers trust to make them more productive. And the curve is exponential: the March count is nearly six times January's.
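To make the bug class concrete, here is a hypothetical example of the kind of flaw these reports describe, not code from any actual CVE: an assistant-style query that interpolates user input straight into SQL, next to the parameterized version a reviewer should insist on.

```python
# Hypothetical illustration of the bug class, not from any actual CVE.
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # The pattern assistants often emit: user input interpolated into SQL.
    # username = "x' OR '1'='1" returns every row in the table.
    return conn.execute(
        f"SELECT id, email FROM users WHERE name = '{username}'"
    ).fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```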
This connects directly to something Kevin Mandia said at RSAC's closing session. His team at Armadin (freshly funded at $189.9 million) tested autonomous AI agents against Fortune 150 applications and found RCE vulnerabilities in 100% of them. One hundred percent. His warning: the next two to three years will be "insane." When the founder of Mandiant tells you to brace for impact, you listen.
And the traffic data backs it up. HUMAN Security's 2026 State of AI Traffic Report found that automated traffic is growing 8x faster than human traffic, with AI agent traffic specifically up 7,851% year over year. The gap between benign and malicious automation? Only 0.5%. That means the difference between a legitimate AI agent and a malicious one is statistically invisible in your traffic logs.
Meanwhile, Vorlon's 2026 CISO Report surveyed 500 CISOs and found that 99.4% experienced a SaaS or AI ecosystem security incident in 2025. One in three had an AI agent incident. And 83% cannot distinguish between human and non-human behavior in their environments. At the end of the day, you cannot govern what you cannot see, and most organizations cannot see the difference between a human user and an AI agent operating in their systems.
The Identity Crisis: Who Is Your AI Agent, and Who Let It In?
This brings me to what I think is the most consequential product announcement of RSAC week, and it was not from a security vendor.
The way I see it, Microsoft's agent identity announcement is the company acknowledging something that compliance teams have been struggling with for over a year: AI agents are actors in your environment, and if they do not have identities, they do not have accountability. You cannot apply least-privilege to something you cannot identify. You cannot audit something that does not have an audit trail. And you definitely cannot terminate a misbehaving agent if you do not know it exists.
This is why I found 1Password's Unified Access platform launch and Cisco's agentic IAM announcement in Duo so significant. Both are attempting to solve the same problem: agents need to be registered, mapped to human owners, given task-based permissions, and monitored continuously. This is not a new concept for human users. We have been doing identity governance for decades. But extending it to autonomous agents that can spawn sub-agents, call external tools, and make decisions without human approval? That is a fundamentally different problem, and the tooling is just now catching up.
From a control framework perspective, if you are building AI governance controls right now, agent identity management needs to be a top-level domain, not a sub-control buried under "access management." Every agent needs an identity, every identity needs an owner, every owner needs visibility into what their agents are doing, and every organization needs a kill switch. That is not aspirational. After this week, it is table stakes.
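What does that look like as code rather than policy language? A minimal sketch, with entirely hypothetical names, not any vendor's actual API:

```python
# Minimal sketch of the control described above: every agent gets an
# identity, an accountable human owner, task-scoped permissions, and a
# kill switch. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                      # an accountable human, not a team alias
    allowed_tasks: set[str] = field(default_factory=set)
    active: bool = True

class AgentRegistry:
    def __init__(self):
        self._agents: dict[str, AgentIdentity] = {}

    def register(self, agent: AgentIdentity) -> None:
        if not agent.owner:
            raise ValueError("every agent identity needs a human owner")
        self._agents[agent.agent_id] = agent

    def authorize(self, agent_id: str, task: str) -> bool:
        agent = self._agents.get(agent_id)
        # Deny by default: unregistered or killed agents get nothing.
        return bool(agent and agent.active and task in agent.allowed_tasks)

    def kill(self, agent_id: str) -> None:
        # The kill switch: flip one bit, and authorize() starts refusing.
        if agent_id in self._agents:
            self._agents[agent_id].active = False
```

The design point is deny-by-default: an agent that was never registered, or whose kill switch has been flipped, is authorized to do exactly nothing.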
The RSAC Product Avalanche: Building the Fire Station While the City Burns
I do not want to leave the impression that the industry is doing nothing. The product response at RSAC 2026 was unprecedented in both volume and specificity. But the gap between what launched and what organizations can actually absorb deserves honest assessment.
In a single week, we saw eight new agentic AI security products from major vendors: Snyk Agent Security with MCP server governance, SentinelOne Prompt AI with MCP governance, Palo Alto Prisma AIRS 3.0 with an AI Agent Gateway, Arctic Wolf's Aurora Agentic SOC, AccuKnox AI-Security 2.0, Acalvio 360 Deception for AI-driven attacks, Forcepoint plus F5 for runtime API protection, and Wiz AI Application Protection Platform. CrowdStrike launched Charlotte AI AgentWorks and AIDR for agentic SOC operations. Cisco open-sourced DefenseClaw, a framework for securing MCP tool-calling flows with a Skills Scanner, MCP Scanner, A2A Scanner, and AI Bill of Materials generator.
And perhaps most interestingly, Accenture launched Cyber.AI powered by Claude with an Agent Shield component for real-time agent governance. They deployed it internally across 1,600 applications and 500,000-plus APIs, reducing scan turnaround from 3-5 days to under 1 hour and expanding security testing coverage from roughly 10% to over 80%. Those are real numbers from real deployment, not a marketing deck. And Amazon quietly revealed Project Metis, a system of competing Red-Team and Blue-Team AI agents that run autonomously inside AWS infrastructure 24/7, compressing security workflows from weeks to 4 hours. CrowdStrike's stock fell 7% on the news.
Bruce Schneier's RSAC keynote tied it all together with a thesis I have been arguing for months: data integrity is the foundation for AI trust. Not guardrails, not filters, not compliance checkboxes. Data integrity. If the data feeding your AI is compromised, poisoned, or unreliable, no amount of runtime protection saves you. As someone who has spent years obsessing over knowledge base quality and control data accuracy, I wanted to stand up and applaud.
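As a sketch of what that means in practice, here is one way to enforce integrity at the point where data enters an AI pipeline: verify each knowledge-base document against a SHA-256 manifest produced at publication time, and refuse anything that does not match. The manifest format here is my own illustration; in production you would also sign the manifest itself.

```python
# Minimal sketch of the data-integrity idea: only documents whose
# SHA-256 digest matches a trusted manifest are allowed to reach the
# model. Manifest format is hypothetical: {"doc.md": "<hex digest>", ...}
import hashlib
import json
from pathlib import Path

def verify_corpus(manifest_path: str) -> list[Path]:
    manifest = json.loads(Path(manifest_path).read_text())
    trusted = []
    for name, expected in manifest.items():
        doc = Path(name)
        actual = hashlib.sha256(doc.read_bytes()).hexdigest()
        if actual == expected:
            trusted.append(doc)
        else:
            print(f"tampered or stale, skipping: {doc}")
    return trusted  # only verified documents get ingested
```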
Let me bring this back to the kitchen. Except this week, the kitchen analogy needs an update.
We built a kitchen with AI-powered appliances that can cook autonomously. The appliances are producing food faster than anyone expected, but some of that food is poisoned (74 CVEs and counting). The ingredient supplier got hacked, and the contamination spread to 500,000 other kitchens before anyone noticed. The inspector showed up, looked at the appliances, and could not tell which ones were working for us and which ones were working for someone else (83% of CISOs cannot distinguish human from agent behavior). Meanwhile, the health department tried to shut down the most safety-conscious kitchen on the block, and a judge had to step in and call it what it was.
Oh, and the head chef at the biggest kitchen in town? He just said he does not have time to focus on food safety anymore. He is busy raising money for a bigger kitchen.
At the end of the day, RSAC 2026 was the conference where the industry collectively acknowledged a truth that has been building for months: we are deploying AI systems faster than we can govern them, and the governance gap is not a temporary condition. It is a structural feature of how this technology is being built and sold.
That does not mean we give up. It means we get honest about what controls actually need to look like. Agent identity is not optional. Supply chain integrity for AI toolchains is not a nice-to-have. AI-generated code review is not paranoia; it is a measurable necessity. And vendor safety posture evaluation needs to account for the reality that safety leadership can change overnight, for political, financial, or organizational reasons.
Takeaway for this week: "we will figure it out later" is not a strategy anymore. The data says later already happened.