
Separating Signal From Noise: Why AI-Powered Pentest Tools Can Create Problems

Published: December 15, 2025

by Gunter Ollmann

AI-powered offensive security tools are becoming increasingly common, and anyone, even someone with limited technical knowledge, can use them. The tools themselves are highly capable, able to find vulnerabilities at a speed and scale that was never before possible.

But there is a downside: the “noise” they generate. Just as security operations staff have spent years in front of screens with hundreds of false alarms going off, the same is now happening to those responsible for vulnerability management.

New AI-driven tools inundate users with alerts without context or necessary prioritization, making it nearly impossible for analysts to separate signal from noise.

While these tools excel at discovery, the practical result is a surge in costly, unactionable alerts. This leads to alert fatigue, allows real adversaries to slip through and ultimately erodes confidence in the security function.

Zero Days Get The Publicity But Legacy Vulnerabilities Are An Issue

In the Cobalt State of Pentest Report 2025, we analyzed tests from over 2,700 organizations across a ten-year period. Incredibly, we found that, on average, organizations fix less than half of all exploitable vulnerabilities identified in the tests.

Overall, firms remediate less than half (48%) of all their pentest findings, though this number improves significantly (to 69%) for findings labeled ‘serious.’ And while zero days get the headlines, legacy bugs are just as much of an issue: 40% of vulnerabilities exploited last year dated from 2020 or earlier.

It’s not a “blame game”; the challenges are formidable. Patches are often not available, can be ineffective and are difficult to apply at scale. Security teams are stretched to the limit, often facing a “wish list” of 500 items to address each day but the capacity for only 50.

The persistent challenge is not simply finding weaknesses, but deciding which ones truly matter, and which can realistically be addressed with the resources available.

AI Tools Often Find CWEs, Not CVEs

While AI tools are very good at finding issues, much of their output doesn’t map cleanly to published CVEs (Common Vulnerabilities and Exposures), which are confirmed, exploitable flaws.

Instead, they often generate potential CWEs (Common Weakness Enumerations) or software ‘bugs’ that may or may not translate into an actual security risk.

We often talk of finding a needle in a haystack, but the outcome here is haystacks of needles. Security teams face hundreds or thousands of flagged issues, many of which may never pose a real threat.

This isn’t just frustrating; it distorts key metrics like mean-time-to-fix, making teams appear to be falling behind even when they’re working at capacity.
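The gap between CWE-class flags and confirmed, exploitable CVEs is, at bottom, a prioritization problem. As a minimal sketch of that idea (the `Finding` fields, the scoring weights and the `triage` helper here are illustrative assumptions, not any real tool’s schema), a triage step might rank confirmed and actively exploited issues above raw weakness flags, then cap the queue at the team’s actual capacity:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    """One flagged issue from an automated scan (illustrative shape only)."""
    title: str
    cve_id: Optional[str] = None     # confirmed CVE identifier, if any
    cwe_id: Optional[str] = None     # weakness class flagged by the tool
    severity: float = 0.0            # e.g. a CVSS-style base score, 0-10
    known_exploited: bool = False    # e.g. seen exploited in the wild

def triage(findings, capacity):
    """Rank findings so confirmed, actively exploited CVEs outrank raw
    CWE-class flags, then keep only what the team can realistically fix."""
    def score(f):
        s = f.severity
        if f.cve_id:
            s += 10   # a confirmed vulnerability beats a weakness class
        if f.known_exploited:
            s += 20   # actively exploited issues jump the queue
        return s
    return sorted(findings, key=score, reverse=True)[:capacity]

findings = [
    Finding("Possible XSS sink", cwe_id="CWE-79", severity=6.1),
    Finding("Outdated library", cve_id="CVE-2021-0001", severity=7.5),
    Finding("SQL injection", cve_id="CVE-2020-1234", severity=9.8,
            known_exploited=True),
]
print([f.title for f in triage(findings, capacity=2)])
# The legacy, actively exploited CVE lands first; the raw CWE flag is cut.
```

The weights are arbitrary; the point is the shape of the decision: exploitability and confirmation outrank raw volume, and capacity is an explicit input rather than an afterthought.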

Humans Remain Critical

The human dimension is critical. Technology, particularly AI, excels at speed and scale, but it cannot yet replace the years of experience and judgment of a human analyst.

Experienced pentesters provide the critical layer of context, and can validate whether a weakness is truly exploitable, measure its potential impact and advise on remediation within the context of a company’s IT stack and risk tolerance.

They can use their instinct to distinguish between theoretical bugs buried in code that may never surface and the exploitable flaws that attackers can weaponize tomorrow.


Pentesters also bring a unique mindset—they view systems and networks like attackers. They understand their tradecraft and motivations. This means they can identify the weaknesses that attackers would target first and those that they likely leave alone. This intuition is the signal that cuts through the noise, ensuring precious security resources are focused on the issues that actually matter.

Attackers Use These Tools Too

Let’s also remember that adversaries often use the same tools that defenders do. Bug hunters already employ AI to zero in on specific vulnerability classes, sometimes beating defenders to the punch.

State actors and cybercriminal groups do the same, refining AI outputs until they reveal lucrative exploitation paths. But ultimately, attacks are carried out by humans, and they too will not rely on AI alone.

The solution is not a matter of “AI or human,” but rather a crucial balance between the two. Automation alone brings huge benefits of scale, discovery and speed, but it must be coupled with human expertise in validating, prioritizing and closing the vulnerabilities that matter most.

Because in the end, the challenge in cybersecurity has never been simply to find more vulnerabilities. The challenge is to fix the right ones, before adversaries exploit them.


https://www.forbes.com/councils/forbestechcouncil/2025/11/25/separating-signal-from-noise-why-ai-powered-pentest-tools-can-create-problems/