
Be Kind and Candid

by Alexander Imbirikos on December 14, 2025

Alexander Imbirikos leads Codex, OpenAI's coding agent, with a vision that extends well beyond writing code. He sees Codex as the beginning of a true software engineering teammate, one that will eventually participate across the entire development lifecycle.

Imbirikos believes AI tools should make humans feel maximally accelerated rather than confused about their role. When building Codex, his team focuses on creating experiences where humans feel empowered, not replaced. This philosophy led them to shift from their initial cloud-based approach, which was ahead of where most users actually worked, to meeting developers where they are—in their IDEs and terminals—while gradually building toward more autonomous capabilities.

The most transformative aspect of Codex isn't its ability to write code but its potential to become proactive. Imbirikos notes that today's AI products are limited because users must explicitly prompt them, creating a bottleneck. "If you think of how many times people could actually get benefit from a really intelligent entity, it's thousands of times per day," he explains. His vision is for Codex to become "helpful by default," without requiring constant prompting.

For engineering teams, this means the future bottleneck isn't writing code but validating it. Imbirikos observes that while AI makes writing code faster, it creates a new challenge: "Reviewing AI code is often a less fun part of the job for many software engineers." This insight drives his team to focus on features that build confidence in AI-written code and improve validation capabilities.

Imbirikos sees coding as fundamental to any AI agent's capabilities. "For models to do stuff, they are much more effective when they can use a computer," he explains, "and the best way for models to use computers is simply to write code." This perspective suggests that even non-technical users may eventually benefit from coding agents working behind the scenes.

When asked about AGI timelines, Imbirikos offers a practical view: the limiting factor isn't only model capability but "literally human typing speed or human multitasking speed." This suggests teams should focus on building systems where agents can validate their own work and be useful by default, rather than requiring constant human oversight.

For those concerned about the future of software engineering, Imbirikos advises focusing on systems thinking rather than implementation details. While AI will handle more coding tasks, humans will still need to reason about what makes effective software systems and teams. The most valuable skills will be understanding customer problems deeply and configuring AI tools to solve them effectively.