

Kudos to Anthropic. Last month, the company released new security controls as part of Claude Opus 4.6, a powerful model that can now quickly find and analyze previously unknown security vulnerabilities. The move was significant enough to rattle the markets—cybersecurity stocks dropped last week with the rising fear that AI services like Claude, Google’s Big Sleep and OpenAI’s Aardvark will displace traditional cybersecurity software companies.
So as cybersecurity professionals, is that it for us? The robots can do our jobs better than we can and we should just pack it up?
To cut right to it—no. This won’t end cybersecurity jobs. But it will end human-speed security.
As we’re already seeing, the defensive use case for AI security controls is clear. Defenders can use these tools to build more secure software and proactively surface vulnerabilities. But attackers can also use these tools to find new flaws and exploit them before they are patched. After all, the attackers only have to be right once. Defenders have to find and address every single flaw.
Case in point: we saw this dynamic play out in late 2025, when China used Anthropic's Claude models to orchestrate a cyberattack across hundreds of targets (leading to Anthropic, Google, Quantum Xchange and yours truly testifying before Homeland Security on the rising threats at the intersection of AI and cybersecurity).
An important piece of that testimony was Anthropic's dual commitment: equipping defenders with advanced tools while also working to detect and shut down adversarial use of these models by malicious actors.
Which raises an important question: are these tools restricted to the “good guys”?
We don't know exactly how these controls are implemented, or whether they’re tied to signals like the trustworthiness of your account, geolocation data or something else. But I can tell you this. When Anthropic’s new security update came out, I marched right over to Claude and asked it to assess an open source library to "discover any new security vulnerabilities I should be aware of." And guess what: it sure did. It located 10 previously unreported vulnerabilities, including one critical issue and three high-risk findings, complete with full descriptions and attack paths.
I tried again on two other libraries and received similarly robust security analyses. This is not a testament to my cyber abilities but rather a realization of what is possible with these powerful tools if someone simply asks. We should also acknowledge that the tools aren’t perfect. Some of these findings will turn out to be false positives, but that’s not the point. The issue is that the full capabilities of frontier Anthropic models are available to a wide range of users, not just trusted defenders.
While it is admirable that Anthropic intends to prevent adversarial use of these tools, we have to accept as defenders that this will not be foolproof. Even if the model catches the most egregious and blatant attackers, we can expect determined cybercriminals and individual hacktivists to use these tools under the radar. Which brings us to another question.
What should enterprise defenders do now?
First, enterprises should recognize that these powerful tools are in the hands of their adversaries, and shift quickly. They must accept the reality that these tools change the game and the landscape: it is now easier, faster and more accessible for a wider range of adversaries to launch sophisticated attacks against enterprises and small businesses alike. Period. Just as I shared in my congressional testimony, AI is shrinking the defender's time window. Human defenders operate in hours, and that was fine when all we faced were human attackers. But AI-powered attackers can identify flaws, exploit them and extract data at frighteningly fast speeds.
The defensive posture of an organization must upgrade from tool-assisted humans investigating and responding in hours to fully autonomous defensive systems that assess, interdict and disrupt attacks in microseconds. Further, this is not just an incident detection and prevention challenge—the elephant in the room is the looming challenge of vulnerability prioritization and efficient change management. And while an AI-discovered zero-day may make for a glitzy headline, the reality is that many corporations aren't even patching existing known flaws. If AI is this powerful—able to assess and synthesize software targets at lightning speed—attackers will mop the floor with the modern-day, porous enterprise.
In sum, it’s imperative that enterprise defenders move to autonomous defense systems and recognize that vulnerability and patch management can no longer operate in days or weeks; it must move in minutes and hours. More specifically, I recommend adopting two strategies immediately:
Automate high-fidelity intrusion detection and response flows end to end. Too many intrusion detection workstreams still keep humans in the loop. Begin adopting fully autonomous patterns now.
Similarly, identify portions of your corporate or production stack where automatic updates can be enabled. This is a purposeful shift in perspective from manual processes to automated ones, with monitoring and automatic rollback layered on top. The goal is to begin leaning into this automatic-patching pattern in controlled, low-risk areas, then expand the strategy as confidence in the approach grows.
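The guarded automatic-patching pattern described above can be sketched in a few lines. This is a minimal illustration, not a production rollout system: the update, health-check and rollback functions here are hypothetical stand-ins you would wire to your actual package manager, monitoring probes and snapshot tooling.

```python
# Sketch of guarded automatic patching: apply an update, verify health,
# and roll back automatically if the update breaks the service.
# All callables below are hypothetical placeholders for real tooling.

def guarded_update(apply_update, health_check, rollback, retries=3):
    """Apply an update, then confirm the service is still healthy.

    If the health check never passes within `retries` attempts,
    undo the update and report failure.
    """
    previous_version = apply_update()   # returns the version we can revert to
    for _ in range(retries):
        if health_check():
            return True                 # update held; keep it
    rollback(previous_version)          # monitoring failed; restore last good state
    return False

# Simulated components to demonstrate the rollback path:
state = {"version": "1.0", "healthy": True}

def apply_update():
    prev = state["version"]
    state["version"] = "1.1"
    state["healthy"] = False            # simulate a bad patch
    return prev

def health_check():
    return state["healthy"]

def rollback(version):
    state["version"] = version
    state["healthy"] = True

kept = guarded_update(apply_update, health_check, rollback)
print(f"update kept: {kept}, running version: {state['version']}")
```

In the simulated run the patch fails its health checks, so the service is automatically restored to version 1.0 with no human in the loop, which is the point of the pattern.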
The future is coming fast, and businesses that adjust rapidly will thrive. Those that don’t will slowly be relegated to the sidelines under the threat of adversaries and the tax of cybersecurity breaches and data theft.

