In January 2026, one of the most widely deployed software projects in the world made a decision that sent shockwaves through the security community: cURL, the ubiquitous data transfer library installed on over 20 billion devices, shut down its bug bounty program on HackerOne. The reason wasn’t a lack of vulnerabilities or budget constraints — it was something far more troubling. The program had been overwhelmed by a tidal wave of AI-generated junk submissions, rendering it effectively useless.
This wasn’t a quiet sunset. Daniel Stenberg, the creator of cURL and its longtime maintainer, publicly documented the chaos in the program’s final days. In a 16-hour period, the project received 20 vulnerability submissions. Not a single one was legitimate. Every report was AI-generated noise — convincing-sounding technical jargon wrapped around non-existent security flaws. The confirmed-vulnerability rate, which had historically hovered above 15%, had plummeted below 5% starting in 2025. The signal-to-noise ratio had become untenable.
The Rise and Fall of cURL’s Bug Bounty
Over its lifetime, cURL’s bug bounty program was a genuine success story. The project identified and fixed approximately 87 confirmed vulnerabilities through the program, paying out over $100,000 in rewards to researchers who helped secure a critical piece of internet infrastructure. For years, it represented exactly what bug bounties should be: a productive collaboration between maintainers and the security community, making software safer for everyone.
But the arrival of large language models changed the equation dramatically. By late 2024 and into 2025, the program began experiencing a new phenomenon: the “AI slop” crisis. LLMs had become sophisticated enough to generate technical security reports that looked legitimate on the surface — complete with CVE-style identifiers, technical terminology, and formatted vulnerability descriptions. The problem was that these reports described vulnerabilities that didn’t actually exist.
Stenberg didn’t stand by quietly. He publicly threatened to ban anyone submitting AI-generated reports and implemented stricter submission guidelines. It didn’t help. The flood continued, consuming maintainer time and energy that should have been spent on legitimate security work. On February 1, 2026, cURL officially moved to GitHub-only vulnerability reporting with no monetary incentive attached. The bug bounty era for cURL was over.
What This Means for the Industry
The cURL shutdown isn’t an isolated incident — it’s a warning signal for the entire bug bounty ecosystem. Open-door programs that accept submissions from anyone without vetting are increasingly vulnerable to abuse at scale. When the cost of generating a fake vulnerability report approaches zero and the potential reward remains positive, basic economics predict exactly what happened to cURL.
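The incentive mismatch can be made concrete with a back-of-the-envelope expected-value calculation. The numbers below are illustrative assumptions, not cURL's or HackerOne's actual figures, but they show why near-zero generation cost plus any positive acceptance probability produces a flood:

```python
# Illustrative spam-economics sketch. All figures are assumptions chosen
# to show the shape of the problem, not real bounty-program data.

def expected_profit(cost_per_report: float,
                    acceptance_prob: float,
                    bounty: float,
                    reports: int) -> float:
    """Expected profit from submitting `reports` reports, each costing
    `cost_per_report` to produce, paying `bounty` with probability
    `acceptance_prob`."""
    return reports * (acceptance_prob * bounty - cost_per_report)

# A spammer pays almost nothing per LLM-generated report, so even a
# 1% acceptance rate on a modest bounty yields a positive expectation.
spam_ev = expected_profit(cost_per_report=0.05, acceptance_prob=0.01,
                          bounty=500.0, reports=1000)

# A genuine researcher invests real hours per finding, with a high
# acceptance rate but far fewer submissions.
legit_ev = expected_profit(cost_per_report=2000.0, acceptance_prob=0.8,
                           bounty=3000.0, reports=1)

print(f"spam expected profit:  ${spam_ev:,.2f}")
print(f"legit expected profit: ${legit_ev:,.2f}")
```

Under these assumed numbers the mass-spam strategy comes out thousands of dollars ahead of the single honest submission, which is the "basic economics" at work: as long as the platform pays any positive bounty and does not price in the cost of triage, spam scales and real research does not.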
The industry’s response needs to be multi-faceted. Platforms require better automated triage systems that can distinguish between AI-generated noise and legitimate research. Researcher vetting and reputation systems need to become standard features, not optional add-ons. Programs need tools to identify patterns of abuse and block bad actors before they consume maintainer bandwidth.
This is where vetted, private bug bounty programs demonstrate their value. Platforms that implement KYC verification, maintain researcher reputation scores, and prioritize quality over quantity are naturally resistant to the AI slop problem. When researchers have verified identities and track records, the economics of spam change dramatically. Programs can focus on signal instead of drowning in noise.
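One way to picture how verified identities and track records change the economics is a reputation-weighted triage queue. The sketch below is hypothetical (the `Researcher` model, the smoothing formula, and the 0.1 discount for unverified accounts are all invented for illustration, not any platform's actual scoring), but it shows the basic mechanism: throwaway accounts with a history of bogus reports sink to the back of the queue, while vetted researchers with good precision get reviewed first.

```python
# Hypothetical reputation-weighted triage sketch. The model, formula,
# and constants are illustrative assumptions, not a real platform's API.
from dataclasses import dataclass

@dataclass
class Researcher:
    name: str
    verified: bool = False      # e.g. passed KYC verification
    valid_reports: int = 0
    invalid_reports: int = 0

    @property
    def reputation(self) -> float:
        # Laplace-smoothed precision: starts at 0.5 for a new account,
        # rises with confirmed findings, falls with rejected ones.
        total = self.valid_reports + self.invalid_reports
        return (self.valid_reports + 1) / (total + 2)

def triage_priority(r: Researcher) -> float:
    """Higher score means triaged sooner; unverified accounts are
    heavily discounted so spam from throwaways waits at the back."""
    return r.reputation if r.verified else r.reputation * 0.1

veteran = Researcher("vetted-researcher", verified=True,
                     valid_reports=9, invalid_reports=1)
spammer = Researcher("throwaway", verified=False,
                     valid_reports=0, invalid_reports=18)

queue = sorted([spammer, veteran], key=triage_priority, reverse=True)
print([r.name for r in queue])  # vetted researcher is triaged first
```

Because every rejected report permanently lowers a verified account's future priority (and payout odds), mass-submitting AI noise now carries a real cost, which is exactly the property the open, anonymous model lacks.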
The Path Forward: Evolution, Not Extinction
The lesson from cURL’s shutdown isn’t that bug bounties are dead — it’s that the fully open, no-verification model is dying. The future belongs to programs that balance accessibility with accountability. Researchers should still be able to earn rewards for legitimate security work, but platforms need mechanisms to ensure that’s what they’re actually receiving.
Private programs with vetted researcher pools, reputation-weighted submission systems, and human-assisted triage represent the natural evolution of the bug bounty model. These aren’t closed gardens that exclude new talent — they’re quality filters that protect maintainers’ time while still rewarding skilled researchers. As AI capabilities continue to advance, this evolution from open-door to vetted-access will only accelerate.
The cURL shutdown is a milestone moment for the industry, but it doesn’t have to be a tragedy. If platforms and programs learn the right lessons — invest in better tooling, implement stronger vetting, and design systems that resist AI-generated noise — bug bounties can continue fulfilling their original promise: making software more secure through collaboration between builders and breakers. The question is whether the industry will adapt quickly enough, or whether we’ll see more high-profile programs follow cURL’s example and shut their doors entirely.