TL;DR
- AI is shifting from defensive tool to active operator in cybersecurity
- Models can now identify and exploit vulnerabilities, not just flag them
- The barrier to entry for “attacker-like behaviour” is rapidly lowering
- Organisations risk a period where discovery outpaces remediation
- AI should be treated less like a tool and more like an actor inside systems
AI is no longer just a defensive cybersecurity tool
Everyone is still talking about AI in cybersecurity as if it is primarily a defensive tool, focused on detecting threats, improving monitoring and automating response processes.
That framing made sense for a while, but it already feels out of date. Agentic AI is changing the picture: models that can act autonomously inside an environment start to resemble a new kind of insider threat.
AI can now exploit vulnerabilities, not just identify them
This new generation of models is not just identifying vulnerabilities in systems — it can exploit them.
Anthropic’s Mythos model is a good example of this shift. It can find a vulnerability, build a working exploit and execute it with minimal human input, which is a very different capability from simply flagging a potential issue for a developer to investigate.
That is not AI “supporting” security teams in the traditional sense. It is AI performing tasks that have historically been associated with attackers.
Why this changes the cybersecurity risk profile
What feels more significant is not just that this capability exists, but how accessible it is becoming.
In the past, exploiting vulnerabilities required deep technical expertise, typically held by experienced security researchers or engineers who understood both the systems and the attack surface in depth.
If these models can be used effectively by people without deep security backgrounds, then the ability to behave like an attacker is no longer limited to a small group of specialists.
That changes the risk profile quite materially.
AI inside organisations: from tool to internal actor
At the same time, organisations are increasingly embedding AI directly into their internal environments, giving it access to codebases, internal tools, infrastructure and, in some cases, production systems.
This is entirely logical, because that is where the value of these tools is realised.
But when you combine that level of access with the capabilities being demonstrated by models like Mythos, it becomes harder to think of AI purely as a productivity tool.
Instead, it starts to look more like something operating inside the system with its own capability to explore, identify weaknesses and potentially act on them.
“It starts to look more like something operating inside the system… with its own capability to explore, identify weaknesses and potentially act on them.”
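To make that concrete, here is a minimal sketch of what deny-by-default access for an agent can look like: every action is routed through a single choke point that only permits explicitly registered tools. The tool names and the `call_tool` function are hypothetical, chosen for illustration rather than taken from any particular framework's API.

```python
import subprocess

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

def run_tests() -> int:
    return subprocess.call(["pytest", "-q"])

# Deny by default: only tools registered here can ever be invoked.
TOOLS = {"read_file": read_file, "run_tests": run_tests}

def call_tool(name: str, **kwargs):
    """Single choke point for every agent action."""
    if name not in TOOLS:
        raise PermissionError(f"agent attempted undeclared tool: {name}")
    return TOOLS[name](**kwargs)

# call_tool("read_file", path="README.md")  -> permitted
# call_tool("deploy")                       -> raises PermissionError
```

The design choice here is that anything not explicitly granted is denied, which keeps the agent's reach visible in one place rather than scattered across the environment.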
The gap between discovery and remediation
The usual response to this is that these tools will ultimately benefit defenders, and that is probably true in the long term.
However, there is likely to be a period where discovery outpaces remediation.
If models can identify vulnerabilities faster than teams can realistically fix them, then organisations may find themselves with a growing backlog of known issues, which increases rather than reduces exposure in the short term.
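A rough illustration of that dynamic: if discovery consistently outpaces remediation, the backlog of known issues grows week on week. Both rates below are assumptions chosen for the sketch, not measured figures.

```python
# Illustrative only: both weekly rates are assumed, not real data.
discovered_per_week = 40   # hypothetical AI-assisted discovery rate
fixed_per_week = 25        # hypothetical team remediation capacity

backlog = 0
for week in range(12):
    backlog += discovered_per_week - fixed_per_week

print(f"Known but unremediated issues after 12 weeks: {backlog}")
# 180: exposure grows even though every issue was found by the defence.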
The most likely risks are not malicious
It is also worth noting that the most likely risk scenario is not necessarily deliberate misuse.
More often, it is likely to be something much less dramatic.
A model is given slightly too much access, is used in a context that was not fully thought through, or behaves in a way that is technically correct but operationally problematic.
These situations are not malicious, but they can still create real issues.
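A simple guard against the "slightly too much access" scenario is to make agent credentials narrow and short-lived by default. The token shape below is hypothetical, not any specific secrets manager's API; the point is that over-broad access is usually a default, and defaults can be narrowed.

```python
from datetime import datetime, timedelta, timezone

def issue_agent_token(task: str) -> dict:
    """Hypothetical scoped credential: read-only and short-lived by default."""
    return {
        "subject": f"agent:{task}",
        "scopes": ["repo:read"],  # no write or deploy scope unless requested
        "expires_at": (datetime.now(timezone.utc)
                       + timedelta(hours=1)).isoformat(),
    }

print(issue_agent_token("dependency-audit"))
```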
This is a step change, not a gradual shift
What stands out most is the pace at which this capability has emerged.
Only a few months ago, models struggled to reliably exploit vulnerabilities at all.
Now we are seeing systems that can identify, chain and exploit them with relatively little input.
That is not a gradual improvement; it is a step change.
And governance frameworks rarely evolve at that speed.
Rethinking AI: tool vs operator
It feels like we are at a point where the way we think about AI needs to shift.
Rather than viewing it purely as a tool that helps people work more efficiently, it may be more accurate to think of it as something closer to an actor within the system, with its own capabilities and potential impact.
That distinction matters, because it changes how these systems need to be controlled, monitored and governed.
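In practice, treating AI as an actor can start with something as simple as giving its actions the same audit trail a human operator's would have. A minimal sketch, assuming a hypothetical "ai-agent" identity label and an in-process logger:

```python
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

def audited(action):
    """Every agent-initiated action leaves a record, as a human's would."""
    @functools.wraps(action)
    def wrapper(*args, **kwargs):
        audit_log.info(json.dumps({
            "actor": "ai-agent",          # hypothetical identity label
            "action": action.__name__,
            "args": repr(args) + repr(kwargs),
            "at": datetime.now(timezone.utc).isoformat(),
        }))
        return action(*args, **kwargs)
    return wrapper

@audited
def open_pull_request(title: str) -> str:
    return f"PR opened: {title}"   # stand-in for a real tool call

open_pull_request("Bump lodash to 4.17.21")
```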
The question is no longer whether this capability will become more widespread, because it almost certainly will.
The more relevant question is whether organisations are prepared to manage systems that can operate in ways that look increasingly similar to the behaviour of highly capable attackers.
What do you think? Are these models still being treated primarily as tools, or is there a shift towards thinking of them as something closer to operators within the system?
Short FAQs
Is AI still just a defensive tool in cybersecurity?
AI is no longer limited to detecting threats or supporting response processes. New models can identify and exploit vulnerabilities, shifting from passive tools to systems capable of acting within the attack surface.
Can AI act like a cybersecurity attacker?
Short answer: yes.
New models can find vulnerabilities, build exploits and execute them with minimal human input, performing tasks traditionally associated with attackers.
What makes agentic AI different from traditional security tools?
Agentic AI can operate autonomously within systems, exploring environments, identifying weaknesses and taking action without step-by-step human direction. That makes it fundamentally different from tools that only analyse or alert.
Why does AI fundamentally change the cybersecurity risk model?
AI lowers the expertise required to exploit vulnerabilities and dramatically increases the speed of discovery. This changes the risk model from controlled exposure to rapidly expanding and harder-to-manage attack surfaces.
Where does AI create the most immediate risk in cybersecurity?
The most immediate risk is the gap between discovery and remediation. If AI identifies vulnerabilities faster than teams can fix them, organisations are left with a growing backlog of known issues and increased exposure.
If you are experimenting with or scaling AI and would like to understand your risks, we’re always open to a chat. Reach out using the form below and we’ll connect you with our AI team.
