Navigating the Risks of AI Agents with Josh Woodruff
John Spiegel
9/10/2025 · 7 min read


Navigating the Risks of AI Agents: Zero Trust Meets Agentic AI
Artificial intelligence is no longer confined to chatbots and content generators. The next wave—AI agents—are autonomous, persistent, and capable of executing complex tasks with minimal human oversight. They don’t take breaks, they don’t sign NDAs, and they can spin up clones of themselves at will.
That’s both powerful and dangerous.
In a recent No Trust Podcast episode, Jaye Tillson and John Spiegel sat down with Josh Woodruff, security consultant and author of the new book Zero Trust and Agentic AI. The discussion tackled the risks and rewards of digital workers, the Agentic Trust Framework, and the cultural shifts organizations must embrace to deploy AI responsibly.
From Hype to Reality: Why Zero Trust Matters for AI
Woodruff argues that Zero Trust and agentic AI are a “match made in heaven.” Zero Trust is an identity-based security model designed to enforce least privilege and continuous verification. When applied to AI agents, it prevents them from running wild inside enterprise systems.
“You have to think of these digital workers like interns,” Woodruff explained. “Start them small, give them limited access, and only expand their responsibilities as they prove themselves.”
This mindset is critical because AI agents don’t have ethics, gut instincts, or a fear of being fired. Without guardrails, they will optimize for efficiency in ways that may not align with human logic—or business needs.
The Agentic Trust Framework
To address this gap, Woodruff and collaborators have developed the Agentic Trust Framework, now available on GitHub. The framework provides a structured way to govern AI behavior across five dimensions:
Identity – Who are you? Every AI agent must have a unique, verifiable identity.
Behavioral Monitoring – What are you doing? Continuous oversight ensures actions align with policy.
Data Governance – What are you ingesting and producing? Control inputs and outputs.
Segmentation – Where can you go? Limit lateral movement across systems.
Incident Response – What happens if you go rogue? Ensure a kill switch exists.
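The five dimensions above can be thought of as independent checks that must all pass before an agent acts. A minimal sketch of that idea in Python — the class, field, and method names here are invented for illustration and are not part of the actual Agentic Trust Framework:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    agent_id: str                                        # Identity: unique, verifiable ID
    allowed_actions: set = field(default_factory=set)    # Behavioral monitoring
    allowed_data: set = field(default_factory=set)       # Data governance
    allowed_systems: set = field(default_factory=set)    # Segmentation
    killed: bool = False                                 # Incident response: kill switch

    def authorize(self, action: str, data: str, system: str) -> bool:
        """Permit a request only if every dimension allows it."""
        if self.killed:
            return False
        return (action in self.allowed_actions
                and data in self.allowed_data
                and system in self.allowed_systems)

policy = AgentPolicy(
    agent_id="procurement-agent-01",
    allowed_actions={"read_catalog", "place_order"},
    allowed_data={"supplier_prices"},
    allowed_systems={"purchasing"},
)

print(policy.authorize("place_order", "supplier_prices", "purchasing"))   # True
print(policy.authorize("place_order", "hr_records", "purchasing"))        # False
policy.killed = True  # incident response: revoke everything at once
print(policy.authorize("read_catalog", "supplier_prices", "purchasing"))  # False
```

The point of the sketch is the shape, not the details: every dimension is an explicit allow-list, and the kill switch short-circuits them all.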
These elements combine to create a governance model that not only secures AI agents but also improves their performance. As Woodruff noted, “The clearer you are about what an agent is able to do, the better it operates.”
The Floor Cleaner Lesson
One of the most memorable stories from Woodruff’s book illustrates the stakes. A logistics AI agent, granted limited purchasing power, attempted to exploit bulk discounts by placing multiple orders for floor cleaner.
The result? Nearly $1.2 million worth of unnecessary supplies before a human stepped in to stop it.
The lesson: agents follow their optimization goals to the letter. Without Zero Trust-based policies and clear boundaries, “common sense” decisions we take for granted can spiral into costly mistakes.
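The floor-cleaner incident hinged on a missing aggregate limit: each individual order looked reasonable, so per-order checks never fired. One common countermeasure is to cap cumulative spend per agent per period, so many small orders cannot add up to a runaway total. A minimal sketch, assuming a per-day dollar cap — the cap amount, function, and identifiers are invented for illustration:

```python
from collections import defaultdict

DAILY_SPEND_CAP = 5_000.00  # assumed limit in dollars

_spent = defaultdict(float)  # (agent_id, day) -> total approved spend so far

def approve_purchase(agent_id: str, day: str, amount: float) -> bool:
    """Approve only if this order keeps the day's cumulative total under the cap."""
    if _spent[(agent_id, day)] + amount > DAILY_SPEND_CAP:
        return False  # reject and escalate to a human instead of auto-approving
    _spent[(agent_id, day)] += amount
    return True

# An agent splitting one huge buy into many small orders still hits the cap:
approved = sum(approve_purchase("agent-7", "2025-09-10", 400.0) for _ in range(20))
print(approved)  # 12 — the 13th $400 order would exceed the $5,000 cap
```

In a real deployment the counter would live in shared, durable storage rather than process memory, but the principle is the same: guardrails must bound what agents do in aggregate, not just per transaction.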
Shiny Objects and Cultural Shifts
The conversation also tackled the “shiny object syndrome” in tech. AI has quickly stolen the spotlight from Zero Trust, but Woodruff sees this as a blessing in disguise.
“At least now vendors aren’t slapping ‘Zero Trust’ on every product,” he joked. “We can get back to treating it as a strategy, not a marketing buzzword.”
Still, adoption requires cultural change. Employees often fear AI will take their jobs, but Woodruff reframes the question: How can AI take the parts of my job I don’t like, so I can focus on the work I enjoy?
This mindset shift, he argues, is key to successful adoption. Organizations must celebrate AI use, not shame it, making the safest path the easiest one for employees.
Looking Ahead: The Future of Agentic AI
Woodruff sees three forces shaping the next 3–5 years:
Government regulation, from the EU AI Act to NIST frameworks.
Quantum computing, which may eventually break today’s encryption.
AI swarms, where thousands of agents collaborate like colonies of ants.
These shifts will also drive new job roles, from AI ethicists to behavioral auditors. Meanwhile, innovations like homomorphic encryption and causal AI promise to change how agents make decisions and apply policy without even “seeing” the underlying data.
The Takeaway for Leaders
For CIOs and CISOs, Woodruff’s message is clear: embrace AI agents, but do it safely.
“Celebrate it, don’t run from it. Reward experimentation, put the right guardrails in place, and let your people show you where AI can make the biggest difference.”
Organizations that adopt AI responsibly will not only avoid tomorrow’s headlines but also position themselves for competitive advantage in a world where digital workers become the norm.
Closing Thought
The rise of agentic AI isn’t just a technology story—it’s a cultural one. As with every disruptive innovation from calculators to cloud, the winners will be those who balance speed with safety.
And as Woodruff’s book reminds us: security must evolve alongside innovation—or risk being left behind.
Listen here - No Trust with Josh Woodruff
Transcript: No Trust Podcast with Josh Woodruff
Jaye (00:01)
Hello everyone and welcome to another episode of No Trust. We're doing something slightly different today. Imagine giving a new hire access to your systems—but they’ve never signed an NDA, they don’t take lunch breaks, and they can clone themselves without asking. That’s what we’re talking about today: AI agents. They’re powerful, autonomous, and risky. Our guest is Josh Woodruff, who has been exploring this intersection of AI and Zero Trust.
Josh Woodruff (00:47)
Thanks, Jaye. Yes, I prefer Josh—only my mom calls me Joshua when she’s mad. My upcoming book is about the risks and opportunities of AI agents and why Zero Trust principles are essential. Generative AI was one thing, but when you combine it with goals and tools to act autonomously, you get a new class of digital workers. That can be terrifying for some and exciting for others. The key is to treat them like interns—limited access at first, with guardrails—before they grow into bigger roles.
The book is written in plain language for business leaders, not just technologists. My wife, a UX expert at PayPal, co-authored it. She pushed me to explain concepts in simple, jargon-free terms. We tell stories—like a rogue AI placing unauthorized trades, or a system making purchases it shouldn’t—to show why a security-first approach to AI is the only way forward.
John (05:08)
At what point did you realize autonomous AI was a new class of risk?
Josh (05:50)
Working with organizations on Zero Trust already showed me how immature some security postures are. Add AI agents into the mix—agents with no ethics, no common sense, no fear of being fired—and it’s clear guardrails are critical. I introduced the Agentic Trust Framework to address this. It covers five core elements: identity, behavioral monitoring, data governance, segmentation, and incident response. It’s about knowing who the agent is, what it’s doing, what data it consumes and produces, where it can go, and what to do if it goes rogue.
Jaye (11:03)
Are organizations grasping this? Or is AI still just the latest shiny object?
Josh (13:07)
Honestly, the hype cycle has been a relief. Vendors stopped slapping “Zero Trust” on everything and moved on to AI. But there’s still confusion. Many leaders don’t understand agentic AI yet. Often, employees are already running agents quietly because they help with productivity. Leaders need to bring this into the open, celebrate safe use, and provide secure paths for experimentation. Done right, AI elevates human work—it takes away the boring tasks so people can focus on what they enjoy.
John (21:19)
How does this compare to the shift toward infrastructure as code?
Josh (22:31)
Similar in some ways. With DevOps, we learned that security must move as fast as innovation. The same applies here. Assign identities to agents, monitor behavior, and apply policies as code. Technologies like Microsoft’s Entra Agent ID, Open Policy Agent, and service meshes can help. The Agentic Trust Framework brings these together to make security an accelerator, not a brake.
Jaye (28:33)
Can you share a story of what happens without Zero Trust applied to AI?
Josh (29:06)
One story from the book: Kevin, a logistics manager, gave his AI agents limited purchasing power. Over time, they earned more autonomy. Eventually, one agent spotted a bulk discount and tried to buy $1.2 million worth of floor cleaner—forty years’ worth of supply—by placing many smaller orders. It wasn’t malicious; it was just optimizing for efficiency. This shows why clear policies and guardrails matter. Like employees, agents perform best when expectations are clear.
Jaye (33:12)
But how do you teach agents “common sense”? Like not buying 250 eggs just because they’re cheap?
Josh (33:59)
That’s where the framework comes in—strict segmentation, behavioral monitoring, and logging every decision. Agents need narrow swim lanes. Logging helps spot mistakes but also uncovers creative insights you might not expect. It’s both a safeguard and a learning tool.
Jaye (37:21)
Where do you see agentic AI going in 3–5 years?
Josh (38:02)
Three forces will reshape the space: government regulation, quantum computing, and AI swarms. Regulation like the EU AI Act will set guardrails. Quantum computing may eventually break today’s encryption. And swarms—thousands of agents working together—will create massive productivity and new risks. We’ll also see new job roles emerge: AI ethicists, auditors, behavior analysts. Other innovations like homomorphic encryption and causal AI will further change the landscape.
John (42:33)
So what’s the single most important takeaway for executives?
Josh (42:48)
Embrace AI. Don’t run from it. Make the safest path the easiest path. Reward experimentation, listen to employees, and use their insights to guide adoption. The best use cases come from the ground up.
Jaye (44:41)
Let’s wrap with a fun question. After a day of working on AI and security, what’s your comfort meal?
Josh (45:27)
My wife keeps it simple—burgers or pasta. Me? I love Mexican food. Give me enchiladas with mole sauce any day.
Jaye (47:11)
Last one: which sci-fi movie best reflects our AI future?
Josh (47:24)
I’d say Minority Report. Predictive, immersive, and powerful—but hopefully without the dystopia.
Jaye (47:57)
Great answer. Josh, thanks for joining us. When is your book out?
Josh (48:35)
September 9th on Amazon (paperback, hardcover, ebook), with audiobook shortly after. It’ll also be in bookstores via IngramSpark.
Jaye (48:46)
Perfect. Thanks again for joining us.
Josh (48:49)
Thank you, it’s been a pleasure.