Analysis of 1,900+ repositories shows human reviewers still outperform automation on security-critical issues
FOR IMMEDIATE RELEASE | March 25, 2026
Media Contact: Email: info@securecodingpractices.com | Phone: +1 (518) 813-2007 | www.securecodingpractices.com
SAN FRANCISCO, March 25, 2026 – A new peer-reviewed study from Atlassian researchers has found that AI-powered code review tools resolve security issues at a significantly lower rate than human reviewers, even as industry analysts predict a wave of vulnerabilities from AI-generated code. The findings challenge assumptions about automation’s role in application security and highlight growing skills gaps across development teams.
The Atlassian RovoDev study, conducted across 1,900+ repositories and accepted to IEEE/ACM ICSE 2026, found that AI code review achieves a 38.70% issue resolution rate compared to 44.45% for human-written comments. While AI tools reduced human commenting by 35.6% and accelerated pull request cycles by 30.8%, researchers documented a persistent “review gap” where automation misses context-dependent vulnerabilities that human reviewers catch.
“These numbers confirm what many engineering leaders are beginning to suspect: AI tools are excellent accelerators, but they don’t replace developer judgment,” said a spokesperson for Secure Coding Practices. “The things automation misses (business logic flaws, architecture-level risks, novel attack vectors) are exactly the things that lead to real breaches. As teams adopt AI coding assistants faster than ever, that gap becomes a critical blind spot.”
The findings arrive alongside new Gartner research projecting that by 2027, 30% of all application vulnerabilities will stem from “vibe coding” (developers using AI assistants to generate code they don’t fully understand). Additional Gartner data shows that 43% of organizations operate at the lowest level of AppSec maturity (Level 1), and that the average organization scores just 2.2 out of 10.
IBM research attributes 82% of security breaches to skills gaps rather than tooling failures. Meanwhile, learning science data from the Learning Pyramid framework demonstrates that hands-on, practice-based training achieves 75% knowledge retention, compared to just 5-20% for lecture-based formats.
“Organizations have spent a decade buying better scanners and generating more alerts, yet 43% remain stuck at the lowest maturity level,” the spokesperson added. “Proper prioritization can reduce alert noise by 75%, as Gartner notes, but prioritization requires judgment. You can’t prioritize what you don’t understand. The 15x retention advantage of hands-on training isn’t just interesting data; it’s the only scalable path to closing the AppSec maturity gap.”
The analysis synthesizes data from multiple sources, including the Atlassian RovoDev study (arXiv:2601.01129, January 2026), Gartner’s “Application Security Strategy 2026” report, IBM security research (December 2025), and established learning science research. All source materials are publicly available and cited with original publication dates, all within the last 90 days.
Secure Coding Practices provides hands-on, practical bootcamps designed to teach developers how to embed security directly into their development process. Created by developers for developers, their programs focus on identifying and fixing real-world vulnerabilities like those in the OWASP Top 10, delivering immediate, actionable skills that apply to any codebase.
Full study and methodology available at: www.securecodingpractices.com/exploit-losses-bypass-code-reviews
