Anthropic Launches Claude Code Review, Sparks Fee Backlash

Anthropic launched Code Review for Claude Code on March 9, 2026, introducing a multi-agent AI system designed to automatically scan pull requests for bugs and logic errors. The feature, now available in research preview for Team and Enterprise customers, dispatches teams of AI agents to analyze code for complex issues that human reviewers often miss. However, the launch has sparked criticism from developers over its pricing structure, with each review costing an estimated $15 to $25 depending on code complexity.

How the Code Review System Works

When a pull request is opened, Code Review dispatches a team of agents that search for bugs in parallel, verify candidate bugs to filter out false positives, and rank confirmed bugs by severity. Results are posted as a single high-signal overview comment plus inline comments for specific bugs. Reviews scale adaptively with the PR: large or complex changes get more agents and a deeper read, while trivial ones get a lightweight pass.
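The three-stage pipeline described above (parallel bug-finding, false-positive filtering, severity ranking) can be sketched in Python. All names here (`find_bugs`, `verify`, `Finding`) are illustrative stand-ins; Anthropic has not published its internal implementation, and a real system would call a model at each step rather than return canned results.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

# Hypothetical sketch of the review pipeline described in the article.
# These names do not reflect Anthropic's actual implementation.

@dataclass
class Finding:
    file: str
    line: int
    severity: int       # higher = more severe
    description: str

def find_bugs(agent_id: int, diff: str) -> list[Finding]:
    """Stand-in for one bug-hunting agent analyzing the PR diff."""
    # A real agent would query a model here; we return canned results.
    return [Finding("app.py", 10 * agent_id, agent_id,
                    f"candidate issue from agent {agent_id}")]

def verify(finding: Finding) -> bool:
    """Stand-in for the verification pass that filters false positives."""
    return finding.severity >= 2  # pretend low-severity hits are noise

def review(diff: str, num_agents: int) -> list[Finding]:
    # 1. Dispatch agents in parallel; PR size/complexity would set num_agents.
    with ThreadPoolExecutor(max_workers=num_agents) as pool:
        batches = pool.map(lambda i: find_bugs(i, diff),
                           range(1, num_agents + 1))
    candidates = [f for batch in batches for f in batch]
    # 2. Verify candidates to filter out false positives.
    confirmed = [f for f in candidates if verify(f)]
    # 3. Rank by severity for the overview comment.
    return sorted(confirmed, key=lambda f: f.severity, reverse=True)

findings = review("diff --git a/app.py ...", num_agents=3)
for f in findings:
    print(f.severity, f.description)
```

The adaptive sizing the article mentions would correspond to choosing `num_agents` from the diff size before calling `review`.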

The AI focuses on logical errors rather than style issues to provide immediately actionable feedback, according to Cat Wu, Anthropic’s head of product. Based on testing, the average review takes around 20 minutes.

Internal Performance Metrics

Anthropic used Code Review internally for months before the public launch, with notable results. Before Code Review, 16% of PRs received substantive review comments; that number has since increased to 54%. On large PRs with over 1,000 lines changed, 84% receive findings, averaging 7.5 issues per PR, while on small PRs under 50 lines, that drops to 31% with an average of 0.5 issues.

The tool’s launch comes as code output per Anthropic engineer has grown 200% in the last year, creating review bottlenecks that the company aims to address.

Developer Backlash Over Costs

While the technology demonstrates impressive capabilities, the pricing structure has generated significant pushback from the developer community. At $15 to $25 per review, the token-based costs can quickly accumulate for teams processing numerous pull requests daily. This concern is particularly acute for organizations already dealing with increased volumes of AI-generated code that require review.

The pricing reflects the computational intensity of the multi-agent system. Unlike simpler linting tools, Code Review employs multiple AI agents working in parallel to conduct deep analysis. Anthropic acknowledges it is a more thorough, and more expensive, option than its existing Claude Code GitHub Action, which remains open source and available.

Broader Context and Market Implications

Claude Code’s run-rate revenue has surpassed $2.5 billion since launch, according to the company, demonstrating strong enterprise adoption despite pricing concerns. Anthropic’s enterprise business has seen subscriptions quadruple since the start of the year.

The feature arrives on what may be the most consequential day in the company’s history, as Anthropic simultaneously filed lawsuits against the Trump administration over a Pentagon blacklisting, while Microsoft announced a new partnership embedding Claude into its Microsoft 365 Copilot platform.

The tool is targeted at companies like Uber, Salesforce, and Accenture that already use Claude Code. Developer leads can enable Code Review to run by default for every engineer on their team.

The “Vibe Coding” Problem

Code Review addresses challenges created by what the industry calls “vibe coding,” where AI tools take instructions given in plain language and quickly generate large amounts of code. While these tools have sped up development, they have also introduced new bugs, security risks, and poorly understood code.

Anthropic reports hearing the same concern from customers every week: developers are stretched thin, and many PRs get skims rather than deep reads. The company positions Code Review as addressing this gap, though the cost structure suggests it’s designed primarily for high-stakes enterprise environments where bugs carry significant consequences.

Key Facts

  • Code Review launched March 9, 2026, in research preview for Team and Enterprise customers
  • Average cost ranges from $15 to $25 per review depending on code complexity
  • Internal usage increased substantive review comments from 16% to 54% of pull requests
  • Code output per Anthropic engineer has grown 200% in the last year
  • Claude Code has surpassed $2.5 billion in run-rate revenue
  • Reviews take an average of 20 minutes to complete

Sources

  1. Anthropic launches code review tool to check flood of AI-generated code | TechCrunch
  2. Code Review for Claude Code | Claude
  3. Anthropic Launches AI-powered Code Review For Claude Code – Dataconomy
  4. Anthropic rolls out Code Review for Claude Code as it sues over Pentagon blacklist and partners with Microsoft | VentureBeat