95% of Claude Code Written by AI
by Mike Krieger on June 5, 2025
Anthropic's experience with Claude Code shows how engineering workflows, team composition, and bottlenecks change when AI generates the vast majority of the code.
Situation
- Context: Anthropic's Claude Code team builds the company's Claude-powered coding tool
- Self-improving system: The team uses Claude to build Claude Code in what Mike Krieger calls a "very self-improving kind of way"
- Scale: Approximately 95% of Claude Code's TypeScript codebase is generated by Claude itself
- Unique position: The team serves as "patient zero" for this new development paradigm
- Technical context: Claude Code is Anthropic's largest TypeScript project, while most of the company uses Python
Actions
Workflow Transformation
- Changed review processes: Moved away from traditional line-by-line code reviews
- AI-powered reviews: Using Claude to review Claude-generated code (a hedged sketch of such a review step follows this list)
- Human role shift: Engineers focus on "acceptance testing" rather than detailed code reviews
- Pull request volume: Over 70% of pull requests are Claude-generated
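The talk does not detail how these automated reviews are wired up, but the sketch below shows what a Claude review pass over a pull-request diff could look like, using the public Anthropic Messages API from the `@anthropic-ai/sdk` TypeScript package. The `reviewDiff` helper, the prompt, and the model alias are illustrative assumptions, not Anthropic's internal tooling.

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Reads ANTHROPIC_API_KEY from the environment.
const client = new Anthropic();

// Hypothetical helper: ask Claude to review a unified diff and return its notes.
async function reviewDiff(diff: string): Promise<string> {
  const response = await client.messages.create({
    model: "claude-3-5-sonnet-latest", // placeholder model alias
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content:
          "Review this pull request diff for bugs, missing tests, and risky changes. " +
          "Reply with a short list of findings.\n\n" +
          diff,
      },
    ],
  });
  // The Messages API returns a list of content blocks; keep only the text ones.
  return response.content
    .map((block) => (block.type === "text" ? block.text : ""))
    .join("");
}
```

In a setup like this, the review output would typically be posted back to the pull request as a comment, leaving the human engineer to focus on acceptance testing the change rather than reading every line.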
Infrastructure Adaptation
- Merge queue redesign: Had to completely rearchitect their merge queue to handle the increased volume of code and pull requests (see the sketch after this list)
- Bottleneck identification: Recognized that traditional infrastructure couldn't handle the AI-accelerated development pace
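Krieger does not describe the new design, but one common way to absorb a much higher pull-request volume is a batched, CI-gated merge queue: queued PRs are tested together, and only re-tested individually when a combined run fails. The sketch below is an assumption-laden illustration (the `runCi` and `merge` hooks are hypothetical), not Anthropic's actual implementation.

```typescript
// Minimal sketch of a CI-gated merge queue: PRs are tested in batches so that
// CI cost per merge stays roughly constant as pull-request volume grows.

interface PullRequest {
  id: number;
  branch: string;
}

type CiRunner = (branches: string[]) => Promise<boolean>; // hypothetical CI hook
type Merger = (pr: PullRequest) => Promise<void>;         // hypothetical VCS hook

export class MergeQueue {
  private queue: PullRequest[] = [];

  constructor(
    private runCi: CiRunner,
    private merge: Merger,
    private batchSize = 8, // larger batches absorb higher PR volume
  ) {}

  enqueue(pr: PullRequest): void {
    this.queue.push(pr);
  }

  // Drain the queue: test a batch together; if the combined build fails,
  // fall back to testing PRs one at a time to isolate the culprit.
  async drain(): Promise<void> {
    while (this.queue.length > 0) {
      const batch = this.queue.splice(0, this.batchSize);
      const ok = await this.runCi(batch.map((pr) => pr.branch));
      if (ok) {
        for (const pr of batch) await this.merge(pr);
      } else {
        for (const pr of batch) {
          if (await this.runCi([pr.branch])) await this.merge(pr);
          // PRs that fail in isolation go back to their authors.
        }
      }
    }
  }
}
```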
Democratizing Development
- Lowered barriers to entry: Non-TypeScript developers can now contribute to the codebase
- Cross-team contributions: Engineers from other teams can solve their own problems in the codebase
- Language-agnostic development: People unfamiliar with TypeScript can go from problem to pull request in an hour
Results
- Development acceleration: Dramatically increased code production and pull request volume
- Infrastructure strain: "Completely blew out the expectations" of existing systems
- Democratized contributions: Engineers without TypeScript knowledge can now contribute
- Shifting bottlenecks: Focus moved from code writing to higher-level concerns like product strategy and infrastructure
- New workflow model: Created a template for how AI-assisted development might work across the industry
Key Lessons
- Bottlenecks shift upward: When AI handles code generation, bottlenecks move to infrastructure, strategy, and alignment
- Review processes must evolve: Traditional code review becomes impractical when AI generates most code
- Infrastructure needs redesign: Systems built for human-paced development can't handle AI-accelerated workflows
- Democratized development: AI coding assistants can remove programming-language barriers, allowing non-specialists to contribute
- Self-improving cycle: Teams using AI to build AI tools create powerful feedback loops that accelerate development
- Human focus shifts: Engineers spend less time writing code and more time on acceptance testing and higher-level concerns
- Prepare for volume: Organizations adopting AI coding assistants should anticipate massive increases in code production and prepare their infrastructure accordingly