Misc

When paradigms are shifting: InfoSec in the age of AI

Over the last few weeks, I have had a very productive exchange with Christoph Klaassen on the impact of AI on security governance and compliance. In this post, we summarize our thoughts.

When the Perimeter Dissolves: InfoSec in the Age of Agentic AI

There’s an old saying among hackers coined by Dr. Eugene Spafford: “The only truly secure system is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards – and even then I have my doubts.”1

It was a joke, a wry nod to the impossibility of perfect security. But here’s the thing: the joke doesn’t land anymore. Because in the world we’re building right now, the systems don’t stay powered off. They reason. They plan. They act. And they do it faster than any human security team can keep up.

Welcome to the age of agentic AI. If you work in Information Security Management and/or Governance, Risk & Compliance, this is the inflection point you may have been sensing in your gut for months.

The Ground Has Shifted Beneath Our Feet

Let’s be direct about what’s happening. The static, chat-based large language models that defined the early generative AI era are already structurally obsolete for enterprise use. In their place, a new generation of autonomous, goal-oriented AI systems is taking root. These agents don’t just generate text, but also access tools, chain decisions together, maintain memory across interactions, and take real-world action without human intervention at every step. At the 2025 Forrester Security & Risk Summit, analyst Allie Mellen put it plainly: everything in security will change because of AI over the next decade.

Why Traditional GRC Can’t Keep Up

Here’s the uncomfortable truth that many of us in the GRC space need to confront: our well-established enterprise governance frameworks were built for a world of human-speed processes, periodic audits, and relatively stable system architectures (anyone remembers when Cloud- and K8s-based systems took off as more or less uncontrolled IT environments, and GRC hurdled to keep up with new operating paradigms?). Annual risk assessments. Quarterly compliance reviews. Control matrices are updated every fiscal year. The traditional GRC lifecycle operates on weeks to months.

Now consider what we’re asking those frameworks to govern: AI agents that can be updated, retrained, or reconfigured in hours. Integration architectures that shift weekly as new models are swapped in or new tool connections are established. To make things even more complicated, systems that influence or even execute business-critical processes, where the very decision-making logic is a probabilistic model that even its creators can’t fully explain. And this, while many InfoSec Risk Management professionals already struggle with mathematically sound risk management practices, i.e., quantitative risk management.

These aren’t growing pains. These are structural mismatches between the speed of AI adoption and the cadence of traditional information security governance. Traditional GRC is fundamentally incapable of governing a world of autonomous agents, serving as the central nervous system of enterprises, controlling value chains, and supporting processes. Do you already see the paradigm shift here? Envision real-time, automated governance for every action an AI agent takes. Not monthly. Not quarterly. Continuously.

We see a shift from periodic, checkbox-driven compliance toward automated, scalable, and measurable systems of continuous assurance and adaptation. The vocabulary alone tells you how deep this change runs. We’re not talking about updating a spreadsheet. We’re talking about re-engineering the entire discipline.

The Attack Surface You Didn’t Know You Had

For those of us with roots in penetration testing and red teaming, the agentic era introduces attack surfaces that feel almost alien compared to traditional infrastructure assessments.

The OWASP GenAI Security Project released its Top 10 for Agentic Applications in December 2025, built by over 100 security researchers.2. The risks they identified are not theoretical. They’re drawn from the lived experience, and they are a great collection of things that can go wrong when AI moves from passive text generation to active decision-making.

Consider indirect prompt injection: an attacker doesn’t need to breach your network or compromise a server. They need to plant malicious instructions in a document, an email, or a web page that your AI agent will process. The agent reads the content, follows the hidden instructions, and takes action, such as exfiltrating data, escalating privileges, or executing unauthorized transactions.

Then there’s memory poisoning. Unlike traditional applications with deterministic state, consider agentic systems that maintain persistent memory across interactions. Corrupt that memory, and you corrupt every future decision the agent makes. It’s like planting a cognitive bias in a human analyst, except the bias propagates at machine speed across every interaction the agent has. How well prepared is your organization to detect and defend against this threat? As the OWASP team noted in their release, once AI began taking actions, the nature of security changed forever.3

The Compliance Paradox

Here’s where InfoSec professionals face a genuine paradox. On one side, regulatory pressure is intensifying. The EU AI Act is moving through phased enforcement, with high-risk AI system requirements becoming enforceable in August 2026. DORA, NIS2, and other sector-specific regulations are adding additional requirements. The compliance landscape has never been more demanding. On the other side, the systems we need to govern are becoming less predictable, less explainable, and more autonomous with every passing quarter. Models change. Interconnections multiply. Agent behaviors emerge that weren’t designed or anticipated.

The paradox is this: we’re being asked to certify compliance for systems whose behavior we cannot fully predict. We’re expected to assure our stakeholders (colleagues, management, customers, suppliers, regulators, the general public) that our AI-integrated processes are secure, controlled, and governed. But the very nature of these systems resists the kind of deterministic assurance that traditional compliance demands.

This isn’t a problem you can solve by hiring more auditors or adding another control layer. It requires a fundamental rethinking of what “GRC” means in a world of probabilistic, adaptive systems.

What the Hacker Mindset Teaches Us

This is where the hacker culture that many of us carry in our DNA becomes not just relevant, but essential. The hacker mindset has always been about one thing: understanding how systems actually behave, not how they’re supposed to behave. It’s the difference between reading the manual and poking the machine. Between trusting the architecture diagram and also trying to break it. Between assuming the control works and (hopefully?!) proving it. In the agentic AI era, that mindset is needed more than ever.

Red teaming AI systems isn’t optional anymore. It’s becoming a regulatory expectation. The EU AI Act requires adversarial testing for high-risk systems. And we’re quite certain: they (the regulators) won’t take “it should be secure” as an answer. Here, the hacker’s natural curiosity comes to the rescue. It’s the core question of asking yourself, “but what happens if I do this?” that will still define effective InfoSec in the coming years.

The agents are the new perimeter. And, as with every perimeter before them, they need people who think like attackers to make them defensible, both from a technical and a compliance perspective.

Four Shifts Every InfoSec Professional Must Make

So where does this leave us? Definitely not helpless, but decidedly in motion. Here’s what the coming months and years demand from each of us:

  1. From periodic to continuous: annual risk assessments and quarterly reviews cannot govern systems that change daily. Monitoring, assurance, and governance must become real-time, automated, and event-driven. If your compliance process still revolves around a calendar, it’s already outdated.
  2. From deterministic to probabilistic assurance: we need to accept that governing AI systems means managing ranges of acceptable behavior, not guaranteeing specific outputs. This requires new metrics, new thresholds, and a much more mature conversation about risk tolerance (hopefully, not by still using risk heatmaps ;)).
  3. From siloed expertise to integrated teams: AI security can’t live in a separate box from GRC, and neither can live apart from the engineering teams building and deploying these systems. Making risk, compliance, and IT speak the same language isn’t just a nice-to-have. It’s the only way to see the full picture.
  4. From reactive compliance to proactive architecture: the best time to embed security into an agentic system is at design time (shocking news, we know!). Zero trust principles must extend beyond identity and access to encompass agent actions themselves, verifying not just who is requesting access, but what the agent intends to do and whether that action falls within policy.

The Road Ahead

There’s a tendency in our industry to frame every major shift as a crisis. Yes, the challenges are real and global: the skills gap is acute, the regulatory landscape is shifting beneath our feet, and the attack surface is expanding faster than most organizations can map it.

But there’s another way to read this moment. For those of us who chose InfoSec because we’re drawn to complexity, because we thrive at the intersection of technology and trust, because we believe that security enables rather than constrains: this is possibly the most exciting time in our field’s history! The rules are being rewritten (again). The frameworks are being rebuilt. The very definition of what it means to “secure” a system is evolving before our eyes. The people who will shape that evolution are the ones who lean into the change, who bring the rigor of governance, the creativity of the hacker, and the courage to admit that some of our old answers no longer fit the new questions.

The agents are here. They’re busy. And they need us! Not the version of us that clings to last year’s playbook, but the version that’s already reaching for the next one.

Let’s build it together.

Sources & Further Reading


More ERNW research on AI security:

Upcoming TROOPERS26 trainings in the space:


  1. Quotable Spaf↩︎
  2. OWASP Top 10 for LLM Applications 2025 — [genai.owasp.org]↩︎
  3. OWASP Top 10 for Agentic Applications Security Project Blog Post↩︎

Leave a Reply

Your email address will not be published. Required fields are marked *