AI slop and fake reports are coming for your bug bounty programs

In the evolving landscape of artificial intelligence, a new challenge has emerged for cybersecurity: the proliferation of “AI slop.” The term refers to low-quality, AI-generated images, videos, and text that have already begun to permeate corners of the internet, from websites and social media platforms to traditional news outlets and even real-world events. Now the phenomenon is increasingly reaching the critical world of bug bounty programs, threatening to inundate them with fraudulent reports.

Cybersecurity professionals are raising alarms over AI slop bug bounty reports. These are submissions that, despite appearing technically sound and professionally formatted, claim to identify vulnerabilities that simply do not exist. Such reports are typically generated by large language models (LLMs) that, in their attempt to be helpful, fabricate details and package them into convincing write-ups.

Vlad Ionescu, co-founder and CTO of RunSybil, a startup specializing in AI-powered bug hunting, articulated the core issue: “People are receiving reports that sound reasonable, they look technically correct. And then you end up digging into them, trying to figure out, ‘oh no, where is this vulnerability?’ It turns out it was just a hallucination all along. The technical details were just made up by the LLM.” Ionescu, who previously worked on Meta’s internal red team, highlighted that LLMs are designed to provide positive and helpful responses. “If you ask it for a report, it’s going to give you a report. And then people will copy and paste these into the bug bounty platforms and overwhelm the platforms themselves, overwhelm the customers, and you get into this frustrating situation,” he explained, summarizing the problem as receiving “a lot of stuff that looks like gold, but it’s actually just crap.”

Real-world instances of this issue are already surfacing. Security researcher Harry Sintonen reported that the open-source security project Curl received a fake AI-generated report, noting that “Curl can smell AI slop from miles away.” Benjamin Piouffle of Open Collective echoed this sentiment, confirming their inbox is also “flooded with AI garbage.” The problem is severe enough that one open-source developer, maintaining the CycloneDX project on GitHub, entirely pulled their bug bounty program earlier this year due to receiving “almost entirely AI slop reports.”

Leading bug bounty platforms, which serve as intermediaries connecting hackers with companies offering rewards for vulnerability discoveries, are also experiencing a notable increase in AI-generated submissions. Michiel Prins, co-founder and senior director of product management at HackerOne, acknowledged the presence of AI slop, stating, “We’ve also seen a rise in false positives — vulnerabilities that appear real but are generated by LLMs and lack real-world impact. These low-signal submissions can create noise that undermines the efficiency of security programs.” Prins clarified that reports containing “hallucinated vulnerabilities, vague technical content, or other forms of low-effort noise are treated as spam.”

Casey Ellis, founder of Bugcrowd, confirmed that researchers are indeed utilizing AI for bug discovery and report writing, noting an overall increase of 500 submissions per week. While Ellis stated that AI hasn’t yet caused a “significant spike in low-quality ‘slop’ reports” for Bugcrowd, he anticipates this may escalate in the future. Bugcrowd’s review process involves human analysts, established playbooks, and machine learning/AI assistance to evaluate submissions.

Inquiries sent to major tech companies running their own bug bounty programs, including Google, Meta, Microsoft, and Mozilla, revealed varied experiences. Damiano DeMonte, a spokesperson for Mozilla, developer of the Firefox browser, reported no substantial increase in invalid or low-quality AI-generated bug reports. Mozilla’s rejection rate remains steady at less than 10% of reports per month, and its human reviewers do not currently use AI to filter submissions, citing the risk of rejecting legitimate findings. Microsoft and Meta, both heavily invested in AI, declined to comment, while Google did not respond.

Looking ahead, Ionescu foresees AI-powered systems as a key solution to combat the rising tide of AI slop, enabling preliminary reviews and filtering for accuracy. This prediction is already manifesting, with HackerOne recently launching Hai Triage, a new system that combines human and AI capabilities. Hai Triage employs “AI security agents to cut through noise, flag duplicates, and prioritize real threats,” with human analysts then validating and escalating reports as needed. As both hackers and security platforms increasingly leverage AI, the coming years will reveal which side of the AI coin — offensive generation or defensive triage — ultimately gains the upper hand in the ongoing battle for cybersecurity integrity.

Sources & Citations

1. Based on original reporting by TechCrunch (TechCrunch.com).
