No Trust Podcast: Zero Trust for the Rise of the Machines — with George Finney
PODCAST
John Spiegel
8/26/2025 · 5 min read


In this episode of No Trust, part of the Zero Trust Forum, hosts Jaye Tillson and John Spiegel welcome back George Finney — CISO, author of Project Zero Trust and Rise of the Machines, and newly inducted member of the Cybersecurity Hall of Fame.
George’s latest work tackles the intersection of Zero Trust and AI agents, exploring how security principles apply when your “users” aren’t human at all. From identity governance for autonomous agents to the pitfalls of “trusting” AI, this conversation is a wake-up call for security leaders navigating the AI era.
Highlights from the Conversation
From CISO to Author to Hall of Fame
George’s Project Zero Trust became required reading in courses at Harvard and Carnegie Mellon, and his follow-up Rise of the Machines blends Zero Trust fundamentals with AI-focused scenarios — peppered with pop culture references that make it accessible far beyond the IT security audience.
The AI Agent Problem
Autonomous agents are powerful but dangerous. A compromised account could give an attacker control over persistent AI agents, accelerating malicious activity. And today it is hard to tell from the logs whether an action came from a human user or from their AI counterpart.
“The only way to secure AI is with Zero Trust.” — George Finney
Identity Beyond Humans
George urges leaders to treat AI agents like high-risk service accounts: minimize their privileges, limit their lifespan, and constrain them to specific functions. AI blurs the line between human and machine identities — governance must catch up.
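To make that concrete, here is a minimal Python sketch of the idea (our illustration, not something from the book): an agent credential that is narrowly scoped, short-lived, and denied by default. The class, scope names, and TTL are hypothetical.

```python
# Hypothetical sketch: provision an AI agent the way you would a high-risk
# service account -- minimal privileges, short lifetime, named functions only.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentCredential:
    agent_id: str
    allowed_actions: frozenset   # the only functions this agent may call
    expires_at: datetime         # short-lived; forces periodic re-issuance

    def permits(self, action: str) -> bool:
        """Deny by default: the action must be named and the credential unexpired."""
        return action in self.allowed_actions and datetime.now(timezone.utc) < self.expires_at

def issue_agent_credential(agent_id: str, actions: set, ttl_minutes: int = 15) -> AgentCredential:
    # Scope to specific business tasks and keep the lifetime short, so a
    # compromised owner account can't ride a long-lived agent token.
    return AgentCredential(
        agent_id=agent_id,
        allowed_actions=frozenset(actions),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

# Usage: the scheduling agent can read calendars and nothing else.
cred = issue_agent_credential("calendar-agent", {"calendar.read"})
assert cred.permits("calendar.read")
assert not cred.permits("mail.send")   # out of scope, denied
```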
The Control Plane Is Gone
Traditional secure systems separate the control plane from the data plane. AI merges them, making data poisoning a direct path to operational compromise. Without Zero Trust guardrails, AI can be tricked into unsafe actions by malicious inputs.
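One way to picture that guardrail (a hedged sketch of ours, not George's implementation): treat whatever the model proposes as a request, never a command, and let policy that lives outside the model decide whether it runs. The action names and dispatcher below are hypothetical.

```python
# Hypothetical guardrail: the model's output is a *request*, not a command.
# Policy that lives outside the model (the real control plane) decides.
ALLOWED_ACTIONS = {"summarize_document", "draft_reply"}   # pre-approved, low risk
REQUIRES_HUMAN = {"send_email", "delete_record"}          # human in the loop

def dispatch(action: str, args: dict) -> str:
    # Stand-in for whatever actually performs the work.
    return f"executed {action} with {args}"

def execute_agent_action(action: str, args: dict, approved_by_human: bool = False) -> str:
    """Deny by default: a poisoned document can make the model propose anything,
    but only pre-approved or human-approved actions ever run."""
    if action in ALLOWED_ACTIONS:
        return dispatch(action, args)
    if action in REQUIRES_HUMAN and approved_by_human:
        return dispatch(action, args)
    raise PermissionError(f"action '{action}' refused without review")

print(execute_agent_action("summarize_document", {"doc_id": "123"}))
# execute_agent_action("delete_record", {"id": 7})  -> PermissionError unless approved
```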
From Department of No to Enabler
George rejects fear-based security. He advocates for pre-mortems — imagining and mitigating “what could go wrong” before deploying AI — so security teams can enable innovation without undue risk.
AGI Hype and Healthy Skepticism
Citing history from Deep Blue to today’s LLMs, George warns against inflated AGI timelines used to drive stock prices. He suggests small language models, realistic expectations, and a critical eye toward vendor agendas.
Zero Trust in an AI World
If Zero Trust were redesigned with AGI in mind, George sees potential for AI-driven, technology-agnostic controls that could remove unsafe trust relationships, increase visibility, and make Zero Trust more approachable — even for large, complex environments.
Leadership Advice
His first move in a new leadership role? Invest in people. He trained both security and IT staff in Zero Trust principles, because “people are the only link” — they build, configure, and operate everything else.
Why You Should Listen
If you want to understand how Zero Trust adapts to a world of AI agents, and how to balance innovation with control, this episode delivers both strategic vision and practical steps. You’ll walk away knowing why governance, identity, and skepticism are your best tools in the age of AI.
🎧 Listen to the full episode here: No Trust Podcast – George Finney Episode
Full Transcript (Cleaned & Readable)
Jaye Tillson: Welcome to another episode of No Trust. Today we’re stripping away the buzzwords to talk about securing AI agents with Zero Trust. Our guest is George Finney — friend of the show, CISO, and author of Project Zero Trust and Rise of the Machines. George, for new listeners, tell us who you are.
George Finney: Thanks for having me. I’ve been a CISO for over 20 years, currently at the University of Texas system, overseeing 14 universities. Project Zero Trust was inducted into the Cybersecurity Hall of Fame and is now taught at Harvard and Carnegie Mellon. My latest book, Rise of the Machines, is about Zero Trust and AI, with plenty of pop culture references to keep it fun.
John Spiegel: Let’s talk about AI agents. How do Zero Trust principles apply when the “user” isn’t human?
George Finney: AI agents keep me up at night. They can be abused just like compromised service accounts — except faster. Today, logs don’t easily show whether it’s me or my AI agent acting. If I’ve given an agent a long-lived token and my account is compromised, that agent can run amok. We need Zero Trust controls: know exactly what the agent can access, keep it appropriate, and tie it to business needs.
Jaye Tillson: Applications have evolved from three-tier to cloud to AI everywhere. How should security think differently?
George Finney: Chapter 8 of my book uses the example of calendar apps like Calendly — convenient, but they get your credentials and can act on your behalf. If breached, they can send emails, exfiltrate data, and facilitate scams. That’s an existential threat, not just “risk.” AI agents embedded in core apps like Salesforce multiply that threat. If the business wants AI, great — but use that as leverage to do Zero Trust first.
Jaye Tillson: Many think “identity” only means humans.
George Finney: Identity is the cornerstone of Zero Trust — for employees, partners, customers, service accounts, and now AI agents. We should treat agents like privileged accounts: one-time access, scoped to specific tasks. And we can’t inherently trust AI, because unlike past tech, it merges the control and data planes — making it vulnerable to data poisoning.
John Spiegel: How do you avoid being the “department of no” while securing AI?
George Finney: Use pre-mortems. Before deploying AI, ask “what could go wrong?” and build safeguards in from the start. Fear shuts people down — pre-mortems keep you proactive. And learn from misuse cases, like when an AI assistant deleted a production database. Governance matters: an AI governance council should review and approve workflows, especially for critical decisions.
Jaye Tillson: Should AI actions have tighter session limits and re-auth requirements?
George Finney: Yes, but AI lacks the easy re-auth triggers we have for humans. We might schedule when agents can run, require a user to start them, or limit their scope. Legislation is already pushing for a “human in the loop” for critical actions, especially in healthcare and other high-stakes areas.
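[Editor's note: as a rough sketch of what "schedule when agents can run" and "require a user to start them" could look like in code. This is our illustration with hypothetical names, not something discussed in the episode.]

```python
# Hypothetical sketch: an agent run must fall inside an approved time window
# and be started by a named human, standing in for the re-auth we can't do.
from datetime import datetime, time, timezone

RUN_WINDOW = (time(8, 0), time(18, 0))   # agent may only run 08:00-18:00 UTC

def may_start_agent(initiated_by, now=None) -> bool:
    """A human must initiate the run, and the clock must be inside the window."""
    now = now or datetime.now(timezone.utc)
    in_window = RUN_WINDOW[0] <= now.time() <= RUN_WINDOW[1]
    return bool(initiated_by) and in_window

# No human behind the run: refused, regardless of the time.
assert not may_start_agent(initiated_by=None)
# A human-initiated run outside the approved window is also refused.
assert not may_start_agent("jaye@example.com",
                           datetime(2025, 8, 26, 3, 0, tzinfo=timezone.utc))
assert may_start_agent("jaye@example.com",
                       datetime(2025, 8, 26, 9, 30, tzinfo=timezone.utc))
```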
Jaye Tillson: Are we risking a future where AI “truth” drifts away from reality?
George Finney: LLMs are essentially autocomplete on steroids. They can hallucinate, and if those outputs feed back into training, the system drifts. We’ve seen AI winters before. AGI is likely decades away — near-term hype is often investor-driven. Be skeptical, and consider alternatives like small language models.
John Spiegel: If you could design Zero Trust with AGI in mind, what changes?
George Finney: I’d aim for an AI-driven, technology-agnostic Zero Trust system with full visibility and automatic removal of unsafe trust relationships. Someone will crack this — and it’ll make Zero Trust far more approachable.
Jaye Tillson: What’s your advice for leaders?
George Finney: Invest in your people. In my first year, I trained both security and IT staff in Zero Trust principles. People aren’t the weakest link — they’re the only link.
John Spiegel: Fun question — what’s your comfort food after a day of AI and Zero Trust?
George Finney: Really good sushi, especially omakase with unexpected combinations, like pear on nigiri. The novelty helps me reset.
Jaye Tillson: Favorite sci-fi vision of AI’s future?
George Finney: 2001: A Space Odyssey — HAL 9000 had its own agenda and didn’t care about you. That’s closer to where AI might go.