Teammates Deflect Questions With "Let Me Claude That For You"

by Benjamin Mann on July 20, 2025

Anthropic's internal "let me Claude that for you" habit has become an informal system for spreading knowledge, reducing information bottlenecks and increasing team productivity.

The practice emerged organically as Anthropic's AI assistant, Claude, became increasingly capable. Team members began responding to coworkers' questions with "let me Claude that for you" - running the question through Claude themselves and sharing the results. As Benjamin Mann describes, "Recently I asked a coworker 'hey who's working on x' and they were like 'let me Claude that for you' and then they sent me the link to the thing afterwards."

This approach creates several compounding benefits. First, it reduces the cognitive load on subject matter experts who would otherwise be interrupted to answer routine questions. Second, it trains everyone to become more self-sufficient in information gathering. Third, it creates a virtuous feedback loop where the team actually uses their own product extensively, generating valuable usage data and identifying improvement opportunities.

The tactic works because it's lightweight and fits naturally into existing workflows. There's no formal policy requiring it; it has simply become part of the company culture. The phrase itself serves as a gentle reminder that many questions can be answered without interrupting a colleague.

For teams building AI products, this "dogfooding" approach is particularly valuable. It ensures everyone experiences the product as users do, creating empathy and understanding that drives better product decisions. It also helps identify gaps in the AI's knowledge or capabilities that might not be apparent from metrics alone.

The practice has scaled so effectively that it's become one of Mann's life mottos: "Have you tried asking Claude?" This simple question has transformed how information flows through the organization, creating a culture where AI augmentation is the default rather than the exception.

For product teams looking to adopt a similar approach, the key is having leadership model the behavior first, making it safe and expected to consult AI tools before tapping human colleagues. The goal isn't to discourage human interaction, but to reserve it for high-value exchanges where human judgment and context are essential.