The security industry has been talking about AI for years, but the last few weeks made the theoretical very real — on both sides of the equation.
The Defensive Side: AI as Vulnerability Hunter #
Anthropic recently disclosed that Claude Opus 4.6 discovered over 500 high-severity vulnerabilities in a two-week period, including 22 previously unknown flaws in Firefox — with working exploit code. Google’s Vulnerability Rewards Program paid out $17 million in 2025 and is now formally integrating AI-assisted discovery into its bounty programs.
The implications are significant. AI-assisted vulnerability research is compressing timelines that used to take human researchers weeks or months into days. That’s genuinely good news for defenders — more bugs found before exploitation means more opportunities to patch.
But there’s a catch.
The Offensive Side: AI in the Malware Itself #
Mandiant reported that threat actors are now embedding large language models directly into malware, giving it adaptive capabilities. Meanwhile, the “Slopoly” ransomware family, generated using AI tools, entered active operations. This isn’t AI-assisted coding. It’s AI-powered malware that can adapt its behavior on the fly.
An autonomous AI agent also successfully compromised McKinsey’s internal AI platform in two hours during a red team exercise. The attack demonstrated that AI agents don’t just find vulnerabilities — they can chain exploitation steps together with a speed and persistence that human attackers can’t match.
The Speed Gap Is the Real Problem #
Here’s what worries me most: AI is finding vulnerabilities faster than organizations can patch them. March 2026 alone brought 83 CVEs from Microsoft, multiple Chrome zero-days, critical Veeam flaws, and actively exploited n8n bugs. And that’s just the high-profile disclosures.
If AI-powered discovery continues to accelerate — and there’s no reason to think it won’t — the gap between disclosure and exploitation is going to shrink. For organizations that already struggle to keep up with patch cycles, this compression is a force multiplier for attackers.
What You Should Be Doing #
Prioritize vulnerability management ruthlessly. You can’t patch everything simultaneously. You need a risk-based approach that focuses on what’s actively exploited and what’s most exposed in your environment.
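A risk-based approach like this can be reduced to a simple scoring rule. The sketch below is illustrative only: the weights, the `Vuln` fields, and the idea of boosting known-exploited flaws (e.g. anything on CISA’s KEV list) over raw CVSS are assumptions for the example, not a standard formula.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float               # base severity score, 0.0-10.0
    actively_exploited: bool  # e.g. listed in CISA's KEV catalog
    internet_facing: bool     # the affected asset is exposed in your environment

def priority_score(v: Vuln) -> float:
    """Rank patches by risk, not just severity.

    The weights are illustrative: known exploitation and exposure
    deliberately outweigh raw CVSS, so an actively exploited
    medium-severity bug on an internet-facing host outranks a
    critical bug that nobody is exploiting on an internal system.
    """
    score = v.cvss
    if v.actively_exploited:
        score += 10.0   # exploitation in the wild trumps severity alone
    if v.internet_facing:
        score += 5.0    # reachable assets get patched first
    return score

def patch_order(vulns: list[Vuln]) -> list[Vuln]:
    """Return vulnerabilities sorted highest-risk first."""
    return sorted(vulns, key=priority_score, reverse=True)
```

In practice the inputs would come from your scanner and a threat-intel feed; the point is that the sort key is risk in *your* environment, not the CVSS number alone.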
Understand your attack surface. AI-powered attackers will map your environment faster than you expect. If you don’t have a clear inventory of your assets, configurations, and exposed services, you’re already behind.
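One concrete way to act on this is to diff what is actually listening against what your inventory says should be listening. A minimal sketch, assuming the observed data comes from an external scanner (e.g. parsed nmap output); the data shapes and function name here are hypothetical:

```python
def unexpected_exposure(
    inventory: dict[str, set[tuple[int, str]]],
    observed: dict[str, set[tuple[int, str]]],
) -> dict[str, set[tuple[int, str]]]:
    """Flag exposed services that your asset inventory doesn't account for.

    inventory: host -> set of (port, service) pairs you expect to be exposed
    observed:  host -> set of (port, service) pairs actually found listening
    Returns only the deltas: services listening that nobody approved.
    """
    findings = {}
    for host, services in observed.items():
        # Anything observed but absent from the approved inventory is a gap
        unknown = services - inventory.get(host, set())
        if unknown:
            findings[host] = unknown
    return findings
```

Running a diff like this on a schedule turns “clear inventory” from a document into a check: every finding is either an inventory update or an exposure to close.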
Don’t panic about AI — prepare for it. The organizations at greatest risk aren’t the ones facing AI-powered attacks. They’re the ones that haven’t invested in fundamentals: patching, access control, network segmentation, and incident response planning. AI doesn’t change the basics — it raises the stakes.
Evaluate your AI tooling exposure. If your organization uses AI platforms, agents, or LLM-integrated tools, those systems are now attack surfaces. The OpenClaw vulnerabilities and McKinsey incident demonstrated that AI systems can be targets, not just tools.
The Takeaway #
AI is a capability amplifier. It amplifies the defender’s ability to find bugs and the attacker’s ability to exploit them. The question for every organization is: which side of that amplification are you benefiting from more?
If the answer isn’t clear, it’s time for an honest assessment of where you stand.