AI is Eating Your Scanner
For the better part of two decades, the vulnerability scanner was the centerpiece of any serious security program. The ability to reliably discover what was exposed and match it to a CVE database felt like genuine differentiation. Vendors competed on coverage, speed, and accuracy. Security teams built workflows around their scanner of choice.
That era is ending.
AI hasn't just improved scanning; it has eroded the moat. Detection, the core function that once justified six-figure contracts, is rapidly becoming table stakes. The question for CISOs isn't whether their scanner finds the CVEs. They all do. The question is what happens next.
Detection is a solved problem (or close enough)
The mechanics of vulnerability detection are fundamentally pattern matching tasks: crawling assets, fingerprinting software versions, matching against known vulnerability databases. These are exactly the tasks AI handles well. Open-source models can now perform credible scanning at a fraction of the cost of legacy enterprise tools. Cloud-native pipelines are auto-discovering assets and correlating exposure data without human configuration. The barrier to building a scanner has collapsed.
If you want a concrete illustration of how far this has gone, consider what Valerio Baudo recently published. Every night at 3AM, a cron job on his server runs an eight-stage AI pipeline that queries NVD and CISA KEV for recently active CVEs, pulls the patch commit from GitHub, spins up vulnerable and patched Docker containers, and writes and validates working TypeScript exploit checks against both. By morning, there are pull requests waiting. The whole operation runs on a $200/month Claude subscription. Thirty-one exploit checks in the first four days.
This isn't a well-resourced security vendor. It's one person and an agentic workflow running while he sleeps.
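To make the shape of that workflow concrete, here is a minimal sketch of such a pipeline as a sequence of enrichment stages. The stage names, data shapes, and logic are hypothetical stand-ins; the real system calls out to NVD, CISA KEV, GitHub, Docker, and an LLM, all of which are stubbed as pure functions here.

```typescript
// Hypothetical sketch of an agentic CVE pipeline: each stage takes a finding
// and returns an enriched copy. External calls (NVD, GitHub, Docker, LLM)
// are stubbed out so the control flow is visible.

interface Finding {
  cveId: string;
  patchCommit?: string;
  checkValidated: boolean;
}

type Stage = (f: Finding) => Finding;

// Two representative stages; the real pipeline has eight.
const stages: Stage[] = [
  // Locate the patch commit for the CVE (stubbed — would query GitHub).
  (f) => ({ ...f, patchCommit: `fix-${f.cveId}` }),
  // Validate the generated check against vulnerable and patched containers
  // (stubbed — would build and run Docker images).
  (f) => ({ ...f, checkValidated: true }),
];

function runPipeline(cveId: string): Finding {
  return stages.reduce<Finding>((f, stage) => stage(f), {
    cveId,
    checkValidated: false,
  });
}

const result = runPipeline("CVE-2024-0001");
console.log(result.checkValidated); // true once every stage has run
```

The point of the shape: each stage is independently testable, and the final artifact (a validated check plus its evidence) is exactly what a pull request needs to carry.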
We're already seeing the downstream effects across the market: pricing pressure on detection-only vendors, consolidation among mid-market players, and buyers increasingly skeptical about what exactly they're paying for. When a once-premium capability becomes a commodity, the contracts don't renew.
The real gap was never detection
Finding vulnerabilities was never actually the hard part. Any reasonably mature organization already has more findings than it can act on. The backlog is the problem. The inability to translate a raw finding into a concrete remediation action, at scale, in context, without burning out your engineers, is the problem.
That's where triage and prioritization earn their place in the stack. Knowing a vulnerability exists is worthless without understanding whether that service is internet-facing, what data it touches, and whether a compensating control already exists. That last mile, from "this is the most important thing to fix" to "this is fixed and validated," is where most programs still bleed time and toil. There are several ways to address a finding, and knowing your environment is the key to efficiency: a patch, a configuration change, an IPS rule, hardening an endpoint. Knowing what exists and how it is configured reveals the shortest path to protecting against a future incident.
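That decision, which remediation path is cheapest given the environment, can be sketched as a small decision function. The context fields and rules below are illustrative assumptions, not any vendor's actual logic:

```typescript
// Hypothetical sketch: pick the shortest remediation path for a finding
// based on environment context. Action names and rules are illustrative.

type Action = "patch" | "config-change" | "ips-rule" | "endpoint-hardening";

interface AssetContext {
  patchAvailable: boolean;
  canRestartService: boolean; // patching usually needs a restart window
  behindIps: boolean;
  vulnerableFeatureInUse: boolean;
}

function shortestPath(ctx: AssetContext): Action {
  // If the vulnerable feature isn't in use, disabling it beats patching.
  if (!ctx.vulnerableFeatureInUse) return "config-change";
  // An IPS rule buys time when a patch is missing or needs a window.
  if (ctx.behindIps && (!ctx.patchAvailable || !ctx.canRestartService)) {
    return "ips-rule";
  }
  if (ctx.patchAvailable && ctx.canRestartService) return "patch";
  return "endpoint-hardening";
}

// A patch exists, but the service can't be restarted right now: the IPS
// rule is the shortest path to protection.
console.log(
  shortestPath({
    patchAvailable: true,
    canRestartService: false,
    behindIps: true,
    vulnerableFeatureInUse: true,
  })
); // "ips-rule"
```

Without the context fields, every one of these findings collapses to "apply the patch," which is exactly the guidance that stalls in a backlog.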
Where value is concentrating
The vendors who will win this cycle aren't the ones with the most comprehensive scan coverage. They're the ones who own the remediation workflow. That means guided fix paths that go beyond "apply the patch," accounting for dependency trees, deployment environments, and acceptable risk tradeoffs. It means closed-loop validation that confirms a remediation actually worked rather than leaving that to a re-scan cycle weeks later. And it means integration into the developer workflow, not just the security dashboard, so that findings become tickets with diffs rather than alerts.
This is a fundamentally different product than a scanner. It's a system that moves work from the finding queue to the resolved column. That's where buyers will concentrate their budgets, because that's where their operational pain actually lives.
Baudo's pipeline gestures at this future too. The final stage doesn't just produce a finding, it opens a PR with validation evidence and reproduction commands ready for review. The human is a reviewer, not a builder. The question for enterprise security vendors is whether they can deliver that same closed loop at organizational scale, with the context and integrations that a solo cron job can't provide.
What this means if you're a CISO today
When you're renewing contracts or evaluating new tooling, don't ask whether a tool finds more vulnerabilities. Assume parity on detection. Ask what it does after a finding is discovered.
Two capabilities will separate the winners from the noise. The first is prioritization. Even as AI accelerates remediation, no organization will fix everything. The teams that come out ahead will be the ones closing the vulnerabilities most likely to end in a security incident, not just the ones with the highest CVSS scores. Context-aware prioritization, grounded in your actual environment and threat landscape, is not a solved problem and is not going to be commoditized anytime soon.
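The difference between CVSS-ranked and context-ranked queues is easy to show in miniature. In the sketch below, the weights and fields are hypothetical, but the effect is the argument above: a medium-severity, internet-facing, actively exploited flaw outranks a critical-severity one that sits behind a compensating control.

```typescript
// Illustrative context-aware prioritization: CVSS is one input, and
// environmental context re-orders the queue. Weights are hypothetical.

interface VulnFinding {
  id: string;
  cvss: number;              // 0–10 base score
  internetFacing: boolean;
  sensitiveData: boolean;
  compensatingControl: boolean;
  knownExploited: boolean;   // e.g. listed in CISA KEV
}

function priorityScore(v: VulnFinding): number {
  let score = v.cvss;
  if (v.internetFacing) score *= 1.5;
  if (v.sensitiveData) score *= 1.3;
  if (v.knownExploited) score *= 2.0;
  if (v.compensatingControl) score *= 0.5;
  return score;
}

const queue: VulnFinding[] = [
  // Critical CVSS, but internal and already mitigated: 9.8 * 0.5 = 4.9
  { id: "internal-crit", cvss: 9.8, internetFacing: false, sensitiveData: false, compensatingControl: true, knownExploited: false },
  // Medium CVSS, but exposed, data-bearing, exploited: 6.5 * 1.5 * 1.3 * 2.0 ≈ 25.35
  { id: "edge-medium", cvss: 6.5, internetFacing: true, sensitiveData: true, compensatingControl: false, knownExploited: true },
];

queue.sort((a, b) => priorityScore(b) - priorityScore(a));
console.log(queue[0].id); // "edge-medium": context outranks raw CVSS
```

Any real implementation would draw these fields from asset inventory and threat intelligence rather than hand-set flags; the hard, uncommoditized part is keeping that context accurate.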
The second is the remediation loop itself. How much manual effort does your team still own once a finding is prioritized? What does fix guidance actually look like, and does it account for your stack, your deployment environment, your risk tolerance? Is there validation that confirms the fix worked, or does your team have to trust and re-scan weeks later?
The scanner isn't going away. But it's becoming infrastructure, like a firewall. Necessary, undifferentiated, and no longer worth paying a premium for. The premium is shifting to whoever helps you figure out what to fix first, and then actually get it fixed.