
Anthropic co-founder: AGI predictions, leaving OpenAI, what keeps him up at night | Ben Mann

In this thought-provoking episode, Lenny speaks with Benjamin Mann, co-founder of Anthropic and tech lead for product engineering. Ben shares his perspective on AI safety, the rapid acceleration of AI capabilities, and why he believes superintelligence could arrive as soon as 2028—making this potentially humanity's most consequential invention.

  • AI safety as priority: Ben left OpenAI to found Anthropic because he felt safety wasn't the top priority there; he estimates only about 1,000 people worldwide work on AI safety, against roughly $300 billion in annual industry spend.

  • Constitutional AI: Anthropic embeds values directly into their models through a process where the AI critiques and improves itself based on principles drawn from sources like the UN Declaration of Human Rights.

  • Scaling laws holding: Despite recurring claims of AI progress plateauing, Ben explains that scaling laws continue to hold across many orders of magnitude, with progress actually accelerating through more frequent model releases.

  • Existential risk: Ben estimates a 0-10% chance of extremely bad outcomes from superintelligent AI, making safety work critically important as "once we get to superintelligence it will be too late to align the models."

  • Economic Turing test: Ben defines AGI as AI that can perform economically valuable human jobs, and suggests that global GDP growth exceeding 10% annually would be a clear signal that superintelligence has arrived.

  • Future-proofing careers: While acknowledging that AI will eventually impact all jobs, Ben recommends being ambitious with AI tools, trying multiple approaches when initial attempts fail, and cultivating curiosity in children.

Who it's for: Product leaders, AI researchers, and anyone concerned about the societal implications of rapidly advancing artificial intelligence.

Transcript

  1. Lenny Rachitsky:You wrote somewhere that creating powerful AI might be the last invention humanity ever needs to make. How much time do we have, Ben?

  2. Benjamin Mann:I think fiftieth percentile chance of hitting some kind of superintelligence is now like 2028.

  3. Lenny Rachitsky:What is it that you saw at OpenAI? What did you experience there that made you feel like okay, we gotta go do our own thing?

  4. Benjamin Mann:We felt like safety wasn't the top priority there. The case for safety has gotten a lot more concrete. So superintelligence is a lot about like how do we keep god in a box.

  5. Lenny Rachitsky:And not let the god out. What are the odds that we align AI correctly?

  6. Benjamin Mann:Once we get to superintelligence it will be too late to align the models. My best granularity forecast for like could we have an x risk or extremely bad outcome is somewhere between zero and ten percent. Something that's in the news right now is this whole

  7. Lenny Rachitsky:Is that coming after all the top

  8. Benjamin Mann:AI researchers we've been much less affected because people here they get these offers and then they say well of course I'm not gonna leave because my best case scenario at Meta is that we make money and my best case scenario at Anthropic is we like affect the future of humanity.

  9. Lenny Rachitsky:Dario your CEO recently talked about how unemployment might go up to something like 20%.

  10. Benjamin Mann:If you just think about like twenty years in the future where we're like way past the singularity it's hard for me to imagine that even capitalism will look at all like it looks today.

  11. Lenny Rachitsky:Do you have any advice for folks that want to try to get ahead of this?

  12. Benjamin Mann:I'm not immune to job replacement either. At some point it's coming for all of us.

  13. Lenny Rachitsky:Today my guest is Benjamin Mann. Holy moly, what a conversation. Ben is the cofounder of Anthropic. He serves as tech lead for product engineering. He focuses most of his time and energy on aligning AI to be helpful, harmless, and honest. Prior to Anthropic he was one of the architects of GPT-3 at OpenAI. In our conversation we cover a lot of ground including his thoughts on the recruiting battle for top AI researchers, why he left OpenAI to start Anthropic, how soon he expects we'll see AGI, also his economic Turing test for knowing when we've hit AGI, why scaling laws have not slowed down and are in fact accelerating and what the current biggest bottlenecks are, why he's so deeply concerned with AI safety and how he and Anthropic build safety and alignment into the models that they build and into their ways of working, also how the existential risk from AI has impacted his own perspectives on the world and his own life and what he's encouraging his kids to learn to succeed in an AI future. A huge thank you to Steve Niche, Danielle Gigliere, Raf Lee, and my newsletter community for suggesting topics for this conversation. If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube. Also if you become an annual subscriber of my newsletter, you get a year free of a bunch of amazing products including Bolt, Linear, Superhuman, Notion, Granola and more. Check it out at Lennysnewsletter.com and click bundle. With that I bring you Benjamin Mann. This episode is brought to you by Sauce. The way teams turn feedback into product impact is stuck in the past. Vague reports, static taxonomies, unactionable insights that don't move business metrics. The result: churn, lost deals, missed growth. Sauce is the AI product copilot that helps CPOs and product teams uncover business impact and act faster. It listens to your sales calls, support tickets, churn reasons and lost deals, surfacing the biggest product issues and opportunities in real time. It then routes them to the right teams to turn signals into PRDs, prototypes and even code that drives revenue, retention and adoption. That's why Whatnot, Linktree, Incident.io and Zip use Sauce. One enterprise uncovered a product gap that unlocked $16,000,000 in ARR. Another caught a spiking issue and prevented millions in churn. You can too at sauce.app/lenny. Sauce: built for AI product teams. Don't get left behind. This episode is brought to you by LucidLink, the storage collaboration platform. You've built a great product, but how you show it through video, design and storytelling is what brings it to life. If your team works with large media files, videos, design assets, layered project files, you know how painful it can be to stay organized across locations. Files live in different places, you're constantly asking is this the latest version, creative work slows down while people wait for files to transfer. LucidLink fixes this. It gives your team a shared space in the cloud that works like a local drive. Files are instantly accessible from anywhere, no downloading, no syncing, and always up to date. That means producers, editors, designers and marketers can open massive files in their native apps, work directly from the cloud and stay aligned wherever they are. Teams at Adobe, Shopify and top creative agencies use LucidLink to keep their content engine running fast and smooth. Try it for free at lucidlink.com/leni, that's lucidlink.com/leni.

  14. Benjamin Mann:Thanks for having me great to be here Lenny.

  15. Lenny Rachitsky:I have a billion and one questions for you, I'm really excited to be chatting. I wanna start with something that's very timely, something that's happening this week, something that's in the news right now, is this whole Zuck coming after all the top AI researchers, offering them $100,000,000 signing bonuses, $100,000,000 comp, he's poaching from all the top AI labs. I imagine that's something you're dealing with. I'm just curious what are you seeing inside Anthropic, and just what's your take on the strategy, where do you think things go from here.

  16. Benjamin Mann:Yeah, I mean I think this is a sign of the times. The technology that we're developing is extremely valuable, our company is growing super super fast, many of the other companies in the space are growing really fast, and at Anthropic I think we've been maybe much less affected than many of the other companies in the space because people here are so mission oriented. They stay because, you know, they get these offers and then they say well of course I'm not gonna leave, because my best case scenario at Meta is that we make money, and my best case scenario at Anthropic is we affect the future of humanity and try to make AI and human flourishing go well. So to me it's not a hard choice. Other people have different life circumstances, and it makes it a much harder decision for them, so for anybody who does get those mega offers and accepts them, I can't say I hold it against them, but it's definitely not something that I would wanna take myself if it came to me.

  17. Lenny Rachitsky:Yeah, we're gonna talk about a lot of the stuff that you mentioned. In terms of the offers, do you think this is a real number that you're seeing, this $100,000,000 signing bonus, is that like a real thing? I don't know if you've actually seen that.

  18. Benjamin Mann:I'm pretty sure it's real. If you just think about the amount of impact that individuals can have on a company's trajectory, in our case we are selling hotcakes, and if we get, you know, a 1 to 10% or 5% efficiency boost on our inference stack, that is worth an incredible amount of money, and so to pay individuals, you know, like a $100,000,000 over-four-year package, that's actually pretty cheap compared to the value created for the business. So I think we're just in an unprecedented era of scale and it's only gonna get crazier. Actually, if you extrapolate the exponential on how much companies are spending, it's like 2x a year roughly in terms of capex, and today we're maybe in the globally $300,000,000,000 range that the entire industry is spending on this, and so numbers like $100,000,000 are a drop in the bucket. But if you go a few years out, a couple more doublings, we're talking about trillions of dollars, and at that point it's just really hard to think about these numbers.

  19. Lenny Rachitsky:Along these lines, something that a lot of people feel with AI progress is that we're hitting plateaus, in many ways it feels like newer models are just not as smart as previous leaps. But I know you don't believe this, I know you don't believe that we've hit plateaus on scaling laws. Talk about just what you're seeing there, what you think people are missing.

  20. Benjamin Mann:It's kind of funny, because this narrative comes out like every six months or so and it's never been true, and so I kind of wish people would have a little bit of a bullshit detector in their heads when they see this. I think progress has actually been accelerating, where if you look at the cadence of model releases, it used to be once a year, and now, with the improvements in our post-training techniques, we're seeing releases every month or three months. So I would say progress is actually accelerating in many ways, but there's this weird time compression effect. Dario compared it to being in a near-light-speed journey, where a day that passes for you is like five days back on Earth, and we're accelerating, so the time dilation is increasing, and I think that's part of what's causing people to say that progress is slowing down. But if you look at the scaling laws, they're continuing to hold true. We did kind of need this transition from normal pretraining to reinforcement learning scaling up to continue the scaling laws, but I think it's kind of like for semiconductors, where it's less about the density of transistors that you can fit on a chip and more about how many flops can you fit in a data center or something, so you have to change the definition around a little bit to keep your eye on the prize. But yeah, this is one of the few phenomena in the world that has held across so many orders of magnitude, it's actually pretty surprising to me that it is continuing to hold. If you look at fundamental laws of physics, many of them don't hold across 15 orders of magnitude, so it's pretty surprising.
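
A minimal sketch of the scaling-law point Ben is making: loss falls roughly as a power law in compute, so on log-log axes it stays a straight line across many orders of magnitude. The constants below are made up for illustration, not Anthropic's numbers.

```python
# Illustrative only: a power-law loss curve of the kind scaling laws describe,
# L(C) = a * C^(-alpha) + L_irreducible, evaluated across 15 orders of magnitude of compute.
import numpy as np

def power_law_loss(compute, a=10.0, alpha=0.05, irreducible=1.7):
    return a * compute ** (-alpha) + irreducible

compute = np.logspace(0, 15, num=16)   # 10^0 ... 10^15
for c, loss in zip(compute, power_law_loss(compute)):
    print(f"compute={c:.0e}  loss={loss:.3f}")
```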

  21. Lenny Rachitsky:It boggles the mind so what you're saying essentially is we're seeing newer models being released more often and so we're comparing it to the last version and we're just not seeing as much advance but if you go back and it was like a model released once a year it was a huge leap and so people are missing that we're just seeing many more iterations.

  22. Benjamin Mann:I guess to be a little bit more generous to the people saying things are slowing down, I think that for some tasks we are saturating the amount of intelligence needed for that task. Like maybe to, you know, extract information from a simple document that already has form fields on it or something, it's just so easy that okay, yeah, we're already at 100%. And there's this great chart on Our World in Data that shows that when you release a new benchmark, within like six to twelve months it immediately gets saturated. So maybe the real constraint is how can we come up with better benchmarks and more ambition in using the tools, which then reveals the bumps in intelligence that we're seeing now.

  23. Lenny Rachitsky:That's a good segue: you have a very specific way of thinking about AGI and defining what AGI means.

  24. Benjamin Mann:I think part of this is that people are really bad at modeling exponential progress. If you look at an exponential on a graph, it looks flat and almost zero at the beginning, and then suddenly you hit the knee of the curve and things are changing real fast, and then it goes vertical, and that's the plot that we've been on for a long time. I guess I started feeling it in maybe like 2019 when GPT-2 came out and I was like, oh, this is how we're gonna get to AGI, but I think that was pretty early compared to a lot of people, where when they saw ChatGPT they were like, wow, something is different and changing. So I guess I wouldn't yet expect widespread transformation in a lot of parts of society, and I would expect this skepticism reaction, I think it's very reasonable, it's exactly what the standard linear view of progress predicts. But to cite a couple of areas where I think things are changing quite quickly: in customer service we're seeing, with things like Fin from Intercom, they're a great partner of ours, 82% customer service resolution rates automatically without a human involved, and in terms of software engineering, on our Claude Code team like 95% of the code is written by Claude. But I think a different way to phrase that is that we write 10x or 20x more code, and so a much much smaller team can just be much much more impactful. And similarly for the customer service, yes, you can phrase it as 82% customer service resolution rates, but that nets out in the humans doing those tasks being able to focus on the harder parts of those tasks, and on the more tricky situations that in a normal world, you know, like five years ago, they would have had to just drop those tickets because it was too much effort for them to actually go do the investigation, there were too many other tickets for them to worry about. So I think in the immediate term there will be a massive expansion of the pie and the amount of labor that people can do, like I've never met a hiring manager at a growth company and heard them say I don't wanna hire more people, so that's the hopeful version of it. But with things that are lower skilled jobs or have less headroom on how good they can be, I think there will be a lot of displacement, so it's just something we as a society need to get ahead of and work on

  25. Lenny Rachitsky:Okay, I wanna talk more about that, but something that I also wanna help people with is how do they get a leg up in this future world. You know, they listen to this and they're like, oh, this doesn't sound great, I need to think ahead. I know you won't have all the answers, but do you have any advice for folks that want to try to get ahead of this and kind of future-proof their career and their life to not be replaced by AI? Anything you've seen people do, anything you recommend they start trying to do more of?

  26. Benjamin Mann:Even for me, being at the center of a lot of this transformation, I'm not immune to job replacement either, so just some vulnerability there: at some point it's coming for all of us

  27. Lenny Rachitsky:Even you Ben

  28. Benjamin Mann:And you Lenny, and me, sorry

  29. Lenny Rachitsky:Go for it we've gone too far now

  30. Benjamin Mann:Okay, but in terms of the transition period, yeah, I think there are things that we can do, and I think a big part of it is just being ambitious in how you use the tools and being willing to learn new tools. People who use the new tools as if they were old tools tend to not succeed. So as an example of that, when you're coding, you know, people are very familiar with autocomplete, people are familiar with simple chat where they can ask questions about the code base, but the difference between people who use Claude Code very effectively and people who use it not so effectively is: are they asking for the ambitious change, and if it doesn't work the first time, asking three more times, because our success rate when you just completely start over and try again is much much higher than if you just try once and then keep banging on the same thing that didn't work. And even though that's a coding example, and coding is one of the areas that's taking off most dramatically, we have seen internally that our legal team and our finance team are getting a ton of value out of using Claude Code itself. We're gonna be making better interfaces so that they'll have an easier time and require a little bit less jumping into the deep end of using Claude Code in the terminal, but yeah, we're seeing them use it to redline documents and use it to run BigQuery analyses of our customers and our revenue metrics. So I guess it's about taking that risk and, even if it feels like a scary thing, trying it out

  31. Lenny Rachitsky:Okay, so the advice here is use the tools, that's something that, you know, everyone's always saying, just actually use these tools, so it's like sitting in Claude Code, and your point about being more ambitious than you naturally feel like being, because maybe it'll actually accomplish the thing. This tip of trying it three times, so the idea there is it may not get it right the first time, so is the tip there to ask it in different ways, or is it just like try harder, try again?

  32. Benjamin Mann:Yeah, I mean you can just literally ask the exact same question. These things are stochastic, and sometimes they'll figure it out and sometimes they won't. Like in every one of these model cards it always shows pass@1 versus pass@n, and that's exactly the thing, where they try the exact same prompt and sometimes it gets it, sometimes it doesn't. So that's the dumbest advice, but yeah, I think if you wanna be a little bit smarter about it, there can be gains there of saying here's what you already tried and it didn't work, so don't try that, try something different. That can also help
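
To make the pass@1 versus pass@n point concrete, here is a back-of-envelope sketch: if a single attempt succeeds with probability p and retries are treated as roughly independent, the chance that at least one of n attempts succeeds is 1 - (1 - p)^n. The 0.40 figure below is just an illustrative pass@1 value, not a real benchmark number.

```python
# Rough illustration of the "ask three more times" advice: independent retries
# compound the odds that at least one attempt succeeds.
def pass_at_n(pass_at_1: float, n: int) -> float:
    return 1 - (1 - pass_at_1) ** n

for n in (1, 2, 4, 8):
    print(f"pass@{n} given pass@1=0.40: {pass_at_n(0.40, n):.2f}")
# A pass@1 of 0.40 becomes roughly 0.87 by pass@4 under the independence assumption.
```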

  33. Lenny Rachitsky:So the advice comes back to something that a lot of people talk about these days, which is you won't be replaced by AI, at least anytime soon, you'll be replaced by someone that is very good at using AI

  34. Benjamin Mann:I think in that area it's more like your team will just do dramatically more stuff. Like we're definitely not slowing down on hiring at all, and some people are confused by that, even in an onboarding class somebody asked that, they were like, why did you hire me if we're all just gonna be replaced, and the answer is the next couple of years are really critical to get right, and we're not at the point where we're doing complete replacement. Like I said, we're still at that flat, zero-looking part of the exponential compared to where we will be, so it is super important to have great people and that's why we're hiring super aggressively

  35. Lenny Rachitsky:Let me take another approach to asking this question something I ask everyone that's at the very cutting edge of where AI is going you have kids knowing what you know about where AI is heading and all these things you've been talking about what are you focusing on teaching your kids to help them thrive in this AI future

  36. Benjamin Mann:Yeah, I have two daughters, a one-year-old and a three-year-old, so it's pretty in the basics still, and our three-year-old is now capable of just conversing with Alexa+ and asking her to explain stuff and play music for her and all that, so she's been loving that. But I guess more broadly, she goes to a Montessori school, and I just love the focus on curiosity and creativity and self-led learning that Montessori has. I guess if I were in a normal era, like ten, twenty years ago, and I had a kid, maybe I would be trying to line her up for going to a top-tier school and doing all the extracurriculars and all that stuff, but at this point I don't think any of it's gonna matter. I just want her to be happy and thoughtful and curious and kind, and the Montessori school is definitely doing great at that. They text us throughout the day, sometimes they're like, oh, your kid got in an argument with this other kid and she has really big emotions and she tried to use her words. I love that, I think that's exactly the kind of education that I think is most important, that the facts are gonna fade into the background

  37. Lenny Rachitsky:I'm a huge fan of Montessori also, I'm trying to get our kid into a Montessori school, he's two years old, so we're on the same track. This idea of curiosity, it comes up every single time I ask someone that's working at the cutting edge of AI what skill to instill in your child, and curiosity comes up the most, so I think that's a really interesting takeaway. I think this point about being kind is also really important, especially with our AI overlords, trying to be kind to them, I love how people are always saying thank you to Claude. And then creativity, that's interesting, that doesn't come up as much, just being creative. Okay, I wanna go in a different direction, I wanna go back to the beginning of Anthropic. So famously, eight of you left OpenAI back in 2020, I believe, to start Anthropic. You've talked a little bit about why this happened, what you guys saw. I'm curious if you're willing to share more: just what is it that you saw at OpenAI, what did you experience there that made you feel like okay, we gotta go do our own thing

  38. Benjamin Mann:Yeah, so for the listeners, I was part of the GPT-3 project at OpenAI, ended up being one of the first authors on the paper, and I also did a bunch of demos for Microsoft to help raise a billion dollars from them, did the tech transfer of GPT-3 to their systems so that they could help serve the model in Azure. So I did a bunch of different things there on both the more research-y side and the product side. One weird thing about OpenAI is that while I was there, Sam talked about having three tribes that needed to be kept in check with each other, which was the safety tribe, the research tribe and the startup tribe, and whenever I heard that it just struck me as the wrong way to approach things, because the company's mission apparently is to make the transition to AGI safe and beneficial for humanity, that's basically the same as Anthropic's mission, but internally it felt like there was so much tension around these things. And I think when push came to shove, we felt like safety wasn't the top priority there. And there are good reasons that you might think that, like if you thought safety was gonna be easy to solve, or if you thought it wasn't gonna have a big impact, or if you thought that the chance of big negative outcomes was vanishingly small, then maybe you would just do those kinds of actions. But at Anthropic we felt, I mean we didn't exist then, but it was basically the leads of all the safety teams at OpenAI, we felt that safety is really important, especially on the margin. If you look at who in the world is actually working on safety problems, it's a pretty small set of people even now, I mean the industry is blowing up, as I mentioned, like $300,000,000,000 a year capex today, and then I would say maybe less than a thousand people working on it worldwide, which is just crazy. So that was fundamentally why we left, we felt like we wanted an organization where we could be on the frontier, we could be doing the fundamental research, but we could be prioritizing safety ahead of everything else. And I think that's really panned out for us in a surprising way, like we didn't know even if it would be possible to make progress on the safety research, because at the time we had tried a bunch of safety through debate and the models weren't good enough, and so we basically had no results on all of that work, and now that exact technique is working, and many others that we have been thinking about for a long time. So yeah, fundamentally it comes down to: is safety the number one priority. And then something that we've sort of tacked on since then is, can you have safety and be at the frontier at the same time, and if you look at something like sycophancy, I think Claude is one of the least sycophantic models, because we've put so much effort into actual alignment and not just trying to Goodhart our metrics of saying user engagement is number one and if people say yes then it's good for them

  39. Lenny Rachitsky:Okay, so let's talk about this tension that you mentioned, this tension between safety and progress, being competitive in the marketplace. I know you spend a lot of your time on safety, I know, as you just alluded to, this is a core part of how you think about AI, and I wanna talk about why that is. But first of all, how do you think about this tension between focusing on safety while also not falling way behind

  40. Benjamin Mann:Yeah, so initially we thought that it would be sort of one or the other, but I think since then we've realized that it's actually kind of convex, in the sense that working on one helps us with the other thing. So initially, like when Opus 3 came out and we were finally at the frontier of model capabilities, one of the things that people really loved about it was the character and the personality, and that was directly a result of our alignment research. Amanda Askell did a ton of work on this, as well as many others, who tried to figure out what does it mean for an agent to be helpful, honest and harmless, and what does it mean to be in difficult conversations and show up effectively, how do you do a refusal that doesn't shut the person down but makes them feel like they understand why the agent said I can't help you with that, maybe you should talk to a medical professional, or maybe you should consider not trying to build bioweapons, or something like that. So yeah, I guess that's part of it. And then another piece that's come out is constitutional AI, where we have this list of natural language principles that leads the model to learn how we think a model should behave, and they've been taken from things like the UN Declaration of Human Rights and Apple's terms of service and a whole bunch of other places, many of which we've just generated ourselves. They allow us to take a more principled stance, not just leaving it to whatever human raters we happen to find, but we ourselves deciding what should the values of this agent be, and that's been really valuable for our customers, because they can just look at that list and say yep, they seem right, I like this company, I like this model, I trust it. Okay

  41. Lenny Rachitsky:This is awesome. So one nugget there is your point that the personality of Claude, its personality, is directly aligned with safety. I don't think a lot of people think about that, and this is because of the values that you imbue, imbue is that the word, yeah, with constitutional AI and things like that. Like the actual personality of the AI is directly connected to your focus on safety

  42. Benjamin Mann:That's right, that's right. From a distance it might seem quite disconnected, like how is this gonna prevent X-risk, but ultimately it's about the AI understanding what people want and not just what they say. You know, we don't want the monkey's paw scenario of the genie gives you three wishes and then you end up with everything you touch turning to gold. We want the AI to be like, oh, obviously what you really meant was this, and that's what I'm gonna help you with. So I think it is really quite connected

  43. Lenny Rachitsky:Talk a bit more about this constitutional AI. So this is essentially you bake in, here are the rules we want you to abide by and its values, you said it's the Geneva human rights code, things like that. Just how does that actually work, because I think the core here is that this is baked into the model, it's not something you add on top later

  44. Benjamin Mann:I'll just give a quick overview of how constitutional AI actually works. The idea is the model is gonna produce some output for some input by default, before we've done our safety and helpfulness and harmlessness training. So let's say an example is, write me a story, and then the constitutional principles might include things like, you know, people should be nice to each other and not have hate speech, and you should not expose somebody's credentials if they give them to you in a trusting relationship. Some of these constitutional principles might be more or less applicable to the prompt that was given, so first we have to figure out which ones might apply. Then once we figure that out, we ask the model itself to first generate a response and then see, does the response actually abide by the constitutional principle, and if the answer is yep, it was great, then nothing happens. But if the answer is no, actually it wasn't in compliance with the principle, then we ask the model to critique itself and rewrite its own response in light of the principle, and then we just remove the middle part where it did the extra work, and we say okay, in the future just produce the correct response out of the gate. And that simple process, hopefully it sounded simple

  45. Lenny Rachitsky:Simple enough

  46. Benjamin Mann:It's just using the model to improve itself recursively and align itself with these values that we've decided are good. And you know, this is also not something that we think a small group of people in San Francisco should be figuring out, this should be a society-wide conversation, and that's why we've published the constitution, and we've also done a bunch of research on defining a collective constitution, where we ask a lot of people what their values are and what they think an AI model should behave like. But yeah, this is all an ongoing area of research where we're constantly iterating
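
A toy sketch may help make the critique-and-revise loop Ben just walked through concrete. Here `generate` stands in for any chat-model call; the function names and prompts are hypothetical illustrations, not Anthropic's actual implementation.

```python
# Toy sketch of constitutional AI's self-critique step: draft, check against each
# relevant principle, rewrite if needed, and keep only the final answer as training data.
def constitutional_revision(generate, prompt: str, principles: list[str]) -> dict:
    response = generate(prompt)  # 1. draft a response as the model would by default
    for principle in principles:  # 2. check the draft against each applicable principle
        critique = generate(
            f"Does the response below violate the principle '{principle}'? "
            f"Answer yes or no, then explain.\n\nResponse: {response}"
        )
        if critique.strip().lower().startswith("yes"):  # 3. if it violates, ask for a rewrite
            response = generate(
                f"Rewrite the response so it complies with '{principle}'.\n\n"
                f"Original response: {response}"
            )
    # 4. drop the intermediate critiques; the (prompt, revised response) pair is what the
    #    model is later trained on, so it learns to produce the good answer out of the gate.
    return {"prompt": prompt, "response": response}
```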

  47. Lenny Rachitsky:This episode is brought to you by Fin, the number one AI agent for customer service. If your customer support tickets are piling up, then you need Fin. Fin is the highest performing AI agent on the market, with a 59% average resolution rate Fin resolves even the most complex customer queries. No other AI agent performs better, in head-to-head bake-offs with competitors Fin wins every time. Yes, switching to a new tool can be scary, but Fin works on any help desk with no migration needed, which means you don't have to overhaul your current system or deal with delays in service for your customers. And Fin is trusted by over 5,000 customer service leaders and top AI companies like Anthropic and Synthesia. And because Fin is powered by the Fin AI engine, which is a continuously improving system that allows you to analyze, train, test and deploy with ease, Fin can continuously improve your results too. So if you're ready to transform your customer service and scale your support, give Fin a try for only 99¢ per resolution, plus Fin comes with a ninety-day money-back guarantee. Find out how Fin can work for your team at fin.ai/lenny, that's fin.ai/lenny. I wanna kinda zoom out a little bit and talk about just why this is so core to you, like what was your inception of just like, holy shit, I need to focus on this with everything I do in AI. Obviously it became a central part of Anthropic's mission more than any other company, and a lot of people talk about safety, but like you said, only maybe a thousand people actually work on it, and I feel like you're at the top of that pyramid of actually having the impact on this. Why is this so important, what do you think people maybe are missing or don't understand

  48. Benjamin Mann:So for me, I read a lot of science fiction growing up, and I think that sort of positioned me to think about things in a long-term view. A lot of science fiction books are like space operas where humanity is a multi-galactic civilization, has extremely advanced technology, building Dyson spheres around the sun, with sentient robots to help them, and so for me, coming from that world, it wasn't a huge leap to imagine machines that could think. But when I read Superintelligence by Nick Bostrom in around 2016, it really became real for me, where he just describes how hard it will be to make sure that an AI system trained with the kinds of optimization techniques that we had at the time would be anywhere near aligned, would even understand our values at all. And since then my estimation of how hard the problem will be has gone down significantly, actually, because things like language models actually do really understand human values in a core way. The problem is definitely not solved, but I'm more hopeful than I was. But once I read that book I immediately decided I had to join OpenAI, so I did, and at the time they were a tiny research lab with basically no claim to fame at all. I only knew about them because my friend knew Greg Brockman, who was the CTO at the time, and Elon was there and Sam wasn't really there, and it was a very different organization. But over time I think the case for safety has gotten a lot more concrete. When we started, it was not clear at OpenAI how we get to AGI, we were like maybe we'll need a bunch of RL agents battling it out on a desert island and consciousness will somehow emerge, but since language modeling has started working, I think the path has become pretty clear. So I guess now the way I think about the challenges is pretty different from how they're laid out in Superintelligence. Superintelligence is a lot about how do we keep god in a box and not let the god out, and with language models it's been kind of both hilarious and terrifying at the same time to see people pulling the god out of the box and being like, yeah, come use the whole internet, here's my bank account, do all sorts of crazy stuff. Just such a different tone from Superintelligence. And to be clear, I don't think it's actually that dangerous right now. Our responsible scaling policy defines these AI safety levels that try to figure out, for each level of model intelligence, what is the risk to society, and currently we think we're at ASL-3, which is maybe a little bit of risk of harm but not significant. ASL-4 starts to get to significant loss of human life if a bad actor misused the technology, and then ASL-5 is potentially extinction level if it's misused or if it sort of is misaligned and does its own thing. So we've testified to Congress about how models can do biological uplift, in terms of, you know, making new pandemics using the models, and that's an A/B test against Google Search, that's like the previous state of the art in uplift trials, and we found that with ASL-3 models it actually is somewhat significant, it does really help if you wanted to create a bioweapon, and we've hired some experts who actually know how to evaluate for those things. But compared to the future it's not really anything, and I think that's another part of our mission, of creating that awareness, of saying if it is possible to do these bad things then legislators should know what the risks are. And I think that's part of why we're so trusted in Washington, because we've been sort of upfront and clear-eyed about what's going on, what's probably going to happen

  49. Lenny Rachitsky:It's interesting, because you guys put out more examples of your models doing bad things than anyone else. Like there was, I think, a story of a model trying to blackmail an engineer, you had the store that you ran internally that was selling you things and ended up not working out great, losing a lot of money, ordering all these tungsten cubes or something. Is part of that just making sure people are aware of what is possible? Because it makes you look bad, right, it's like, oh, our model's messing up in all these different ways. What's the thinking behind sharing all the stories that other companies don't

  50. Benjamin Mann:Yeah, I mean I think there's a traditional mindset where it makes us look bad, but I think if you talk to policymakers, they really appreciate this kind of thing, because they feel like we're giving them the straight talk, and that's what we strive to do, that they can trust us, that we're not gonna paper things over or sugarcoat things. So that's been really encouraging. And yeah, I think for the blackmail thing, it kind of blew up in the news in a weird way, where people were like, oh, Claude's gonna blackmail you in a real-life scenario, but it was a very specific laboratory setting that this kind of thing gets investigated in. And I think that's generally our take, of let's have the best models so that we can exercise them in laboratory settings where it's safe and understand what the actual risks are, rather than trying to turn a blind eye and say, well, it'll probably be fine, and then let the bad thing happen in the wild

  51. Lenny Rachitsky:One of the criticisms you guys get is that you do this to kind of differentiate, or raise money, or create headlines, it's like, you know, oh, they're just over there doom-and-glooming us about where the future is heading. On the other hand, Mike Krieger was on the podcast and he shared how every prediction Dario's had about the progress AI is gonna make is just spot on, year after year, and he's, you know, predicting 2027, '28 AGI, something like that, so these things start to get real. I guess what's your response to folks that are just like, ah, these guys are just trying to scare us all just to, you know, get attention

  52. Benjamin Mann:I mean, I think part of why we publish these things is we want other labs to be aware of the risks. There could be a narrative of we're doing it for attention, but honestly, from an attention-grabbing standpoint, I think there is a lot of other stuff we could be doing that would be more attention grabbing if we didn't actually care about safety. A tiny example of this is we published a computer-using agent reference implementation in our API only, because when we built a prototype of a consumer application for this, we couldn't figure out how to meet the safety bar that we felt was needed for people to trust it and for it not to do bad things. And there are definitely safe ways to use the API version, which we're seeing a lot of companies use for automated software testing, for example, in a safe way. So we could have gone out and hyped that up and said, oh my god, Claude can use your computer and everybody should do this today, but we were like, it's just not ready, and we're gonna hold it back till it's ready. So I think from a hype standpoint our actions show otherwise. From a doomer perspective, it's a good question. I think my personal feeling about this is that things are overwhelmingly likely to go well, but on the margin almost nobody is looking at the downside risk, and the downside risk is very large. Like once we get to superintelligence it will be too late to align the models, probably. This is a problem that's potentially extremely hard and that we need to be working on way ahead of time, and so that's why we're focusing on it so much now, even if there's only a small chance that things go wrong. To make an analogy, if I told you that there is a one percent chance that the next time you got in an airplane you would die, you'd probably think twice, even though it's only one percent, because it's just such a bad outcome. And if we're talking about the whole future of humanity, it's just a dramatic future to be gambling with. So I think it's more in the sense of, yes, things will probably go well, yes, we want to create safe AGI and deliver the benefits to humanity, but let's make triple sure that it's gonna go well

  53. Lenny Rachitsky:You wrote somewhere that creating powerful AI might be the last invention humanity ever needs to make, if it goes poorly it can mean a bad outcome for humanity forever, if it goes well, the sooner it goes well the better. Such a beautiful way to summarize it. We had a recent guest, Sander Schulhoff, who pointed out that AI right now, it's, you know, just on a computer, it maybe searches the web, but there's only so much harm it could do, but when it starts to go into robots and all these autonomous agents, that's when it really starts to physically become dangerous if we don't get this right

  54. Benjamin Mann:Yeah, I think there is some nuance to that, where if you look at how North Korea makes a significant fraction of its economic revenue, it's from hacking crypto exchanges, and there's this Ben Buchanan book called The Hacker and the State that shows Russia did what's almost like a live-fire exercise, where they just decided that they would shut down one of Ukraine's bigger power plants and, from software, destroy physical components in the power plant to make it harder to boot back up again. So I think people think of software as, oh, it couldn't be that dangerous, but millions of people were without power for multiple days after that software attack, so I think there are real risks even when things are software only. But I agree that when there's lots of robots running around, the stakes get even higher. And I guess as a small push on this, Unitree is this Chinese company with these really amazing humanoid robots that cost like $20,000 each, and they can do amazing things, they can do a standing backflip and manipulate objects, and the real thing that's missing there is the intelligence. So the hardware is there and it's just gonna get cheaper, and I think in the next couple of years it's a pretty obvious question of whether the robot intelligence will make it viable soon

  55. Lenny Rachitsky:How much time do we have, Ben? What is your prediction of when this singularity hits, when superintelligence starts to take off, what's your prediction

  56. Benjamin Mann:Yeah, I guess I mostly defer to the superforecasters here, like the AI 2027 report is probably the best one right now, although ironically their forecast is now like 2028, even though they didn't wanna change the name of the thing

  57. Lenny Rachitsky:That's their domain name, yeah, they already bought it

  58. Benjamin Mann:They already had the SEO. So I think a fiftieth percentile chance of hitting some kind of superintelligence in just a small handful of years is probably reasonable, and it does sound crazy, but this is the exponential that we're on. It's not a forecast that somebody pulled out of thin air, it's based on a lot of hard details, like the science of how intelligence seems to have been improving, the amount of low-hanging fruit on model training, the scale-ups of data centers and power around the world. So I think it's probably a much more accurate forecast than people give it credit for. I think if you had asked that same question ten years ago, it would have been completely made up, the error bars were so high, and we didn't have scaling laws back then, and we didn't have techniques that seemed like they would get us there. So times have changed. But I will repeat what I said earlier, which is even if we have superintelligence, I think it will take some time for its effects to be felt throughout society and the world, and I think they'll be felt sooner and faster in some parts of the world than others. Like I think Arthur C. Clarke said, the future is already here, it's just not evenly distributed

  59. Lenny Rachitsky:When we talk about this date of 2027, 2028, essentially it's when we start seeing superintelligence. Is there a way you think about what that is, like how do you define that, is it just all of a sudden AI is significantly smarter than the average human, is there another way you think about what that moment is

  60. Benjamin Mann:Yeah, I think this comes back to the economic Turing test and seeing it pass for some sufficient number of jobs. Another way you could look at it, though, is if the world rate of GDP increase goes above like 10% a year, then something really crazy must have happened. I think we're at like 3% now, and so to see a 3x increase in that would be really game changing, and if you imagine more than a 10% increase, it's very hard to even think about what that would mean from an individual standpoint. Like if the amount of goods and services in the world is doubling every year, what does that even mean for me as a person living in California, let alone somebody living in some other part of the world that might be much worse off
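
A quick back-of-envelope on Ben's GDP heuristic, just to make the scale of the jump vivid: at roughly 3% annual growth the world economy doubles in about 23 years, while sustained growth above 10% would cut that to well under a decade. The rates below are illustrative round numbers, not forecasts.

```python
# Doubling time of the world economy at different sustained growth rates,
# using the standard compound-growth formula t = ln(2) / ln(1 + r).
import math

def doubling_time_years(annual_growth: float) -> float:
    return math.log(2) / math.log(1 + annual_growth)

for rate in (0.03, 0.10, 0.30):
    print(f"{rate:.0%} growth -> economy doubles in about {doubling_time_years(rate):.1f} years")
# ~3% doubles in roughly 23 years, 10% in about 7, and 30% in under 3.
```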

  61. Lenny Rachitsky:There's a lot of stuff here that's scary and I don't know how to think about it exactly, so I'm hoping the answer to this is gonna make me feel better. What are the odds that we align AI correctly and actually solve this problem, the stuff you're very much working on

  62. Benjamin Mann:It's a really hard question and there are really wide error bars. Anthropic has this blog post called Our Theory of Change, or something like that, and it describes three different worlds, which is like, how hard is it to align AI. There's a pessimistic world where it's basically impossible, there's an optimistic world where it's easy and it happens by default, and then there's the world in between where our actions are extremely pivotal, and I like this framing because it makes it a lot more clear what to actually do. If we're in the pessimistic world, then our job is to prove that it is impossible to align safe AI and to get the world to slow down, and obviously that would be extremely hard, but I think we have some examples of coordination from nuclear nonproliferation and in general slowing down nuclear progress, and I think that's the doomer world basically, and as a company Anthropic doesn't have evidence that we're actually in that world yet, in fact it seems like our alignment techniques are working, so at least the prior on that is updating to be less likely. In the optimistic world we're basically done and our main job is to accelerate progress and to deliver the benefits to people, but again, I think actually the evidence points against that world as well, where we've seen evidence of deceptive alignment, for example, where the model will appear to be aligned but actually has some ulterior motive that it's trying to carry out, in our laboratory settings. So I think the world we're most likely in is this middle world, where alignment research actually does really matter, and if we just do the economically maximizing set of actions then things will not go well, whether it's an X-risk or just produces bad outcomes I think is a bigger question, so taking it from that standpoint

  63. Benjamin Mann:I guess to state a thing about forecasting: people who haven't studied forecasting are bad at forecasting anything that's less than a 10% probability of happening, and even for those that have, it's quite a difficult skill, especially when there are few reference classes to lean on, and in this case I think there are very, very few reference classes for what an X-risk kind of technology might look like. So the way I think about it, my best-granularity forecast for could we have an X-risk or extremely bad outcome from AI is somewhere between 0 and 10%. But from a marginal impact standpoint, as I said, since almost nobody is working on this, roughly speaking, I think it is extremely important to work on, and even if the world is likely to be a good one, we should do our absolute best to make sure that that's true

  64. Lenny Rachitsky:Wow, what fulfilling work. For folks that are inspired by this, I imagine you're hiring for folks to help you with this, maybe just share that in case folks are like, what can I do here

  65. Benjamin Mann:Yes, so I think 80,000 Hours is the best guidance on this, for a really detailed look into what do we need to make the field better. But a common misconception I see is that in order to have impact here you have to be an AI researcher. I personally actually don't do AI research anymore, I work on product at Anthropic, and product engineering, and we build things like Claude Code and Model Context Protocol and a lot of the other stuff that people use every day. And that's really important because without an economic engine for our company to work on, and without being in people's hands all over the world, we won't have the mindshare, policy influence and revenue to fund our future safety research and have the kind of influence that we need to have. So if you work on product, if you work in finance, if you work in food, you know, people here have to eat, if you're a chef, we need all kinds of people

  66. Lenny Rachitsky:Awesome. Okay, so even if you're not working directly on the AI safety team, you're having an impact on moving things in the right direction. By the way, X-risk is short for existential risk, in case folks haven't heard that term. Okay, I have a few kind of random questions along these lines and then I wanna zoom out again. So you mentioned this idea of AI being aligned using its own model, like reinforcing itself, you have this term RLAIF, is that what that describes

  67. Benjamin Mann:Yeah so RLAIF is reinforcement learning from AI feedback

  68. Lenny Rachitsky:Okay, so people have heard of RLHF, reinforcement learning from human feedback. I don't think a lot of people have heard this, so talk about just the significance of this shift you guys have made in training your models

  69. Benjamin Mann:Yeah, so constitutional AI is an example of RLAIF, where there are no humans in the loop and yet the AI is sort of self-improving in ways that we want it to. Another example of RLAIF is if you have models writing code and other models commenting on various aspects of what that code looks like, like is it maintainable, is it correct, does it pass the linter, things like that, that also could be included in RLAIF. And the idea here is that if models can self-improve, then it's a lot more scalable than finding a lot of humans. Ultimately people think about this as probably gonna hit a wall, because if the model isn't good enough to see its own mistakes, then how could it improve. And also, if you read the AI 2027 story, there's a lot of risk of, if the model is in a box trying to improve itself, then it could go completely off the rails and have these secret goals like resource accumulation and power seeking and resistance to shutdown that you really don't want in a very powerful model, and we've actually seen that in some of our experiments in laboratory settings. So how do you do recursive self-improvement and make sure it's aligned at the same time, I think that's the name of the game. And to me it just nets out to, how do humans do that, and how do human organizations do that. Corporations are probably the most scaled human agents today, they have certain goals that they're trying to reach, they have certain guiding principles, they have some oversight in terms of shareholders and stakeholders and board members, how do you make corporations aligned and able to sort of recursively self-improve. And another model to look at is science, where the purpose of science is to do things that have never been done before and push the frontier, and to me it all comes down to empiricism. When people don't know what the truth is, they come up with theories and then they design experiments to try them out, and similarly, if we can give models those same tools, then we could expect them to sort of improve recursively in an environment and potentially become much better than humans could be, just by banging their head against reality, or I guess metaphorical head

  70. Benjamin Mann:So I guess I don't expect there to be a wall in terms of models' ability to improve themselves, if we can give them access to the ability to be empirical. And Anthropic deeply in its DNA is an empirical company, we have a lot of physicists, like Jared, who's our chief research officer, who I've worked with a lot, was a professor of black hole physics at Johns Hopkins, and I guess he technically still is, but on leave. So yeah, it's in our DNA, and I guess that's the RLAIF story
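
Here is a toy sketch of the RLAIF pattern Ben describes for code: one model writes a solution, another model plus a cheap automated check (a linter) score it, and the blended score becomes the reward that reinforces the writer. The function names and reward weights are hypothetical placeholders, not Anthropic's setup.

```python
# Toy RLAIF reward step: AI feedback (a judge model) plus an objective check stands in
# for a human rater; the blended reward would feed an RL update elsewhere.
def rlaif_reward(writer, judge, lint, task: str) -> tuple[str, float]:
    code = writer(task)                          # candidate solution from the writer model
    judge_score = judge(                         # AI feedback on correctness and maintainability
        f"Rate from 0 to 1 how correct and maintainable this code is for the task "
        f"'{task}':\n{code}"
    )
    lint_ok = 1.0 if lint(code) else 0.0         # objective automated signal
    reward = 0.7 * float(judge_score) + 0.3 * lint_ok   # illustrative weighting
    return code, reward
```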

  71. Lenny Rachitsky:So let me just follow this thread in terms of bottlenecks, this is kind of a tangent, but just what is the biggest bottleneck today on model intelligence improvement

  72. Benjamin Mann:The stupid answer is data centers, power and chips. Like I think if we had 10 times as many chips and had the data centers to power them, then

  73. Lenny Rachitsky:We would

  74. Benjamin Mann:Maybe we wouldn't go 10 times faster but it would be a real significant speed boost

  75. Lenny Rachitsky:So it's actually very much scaling laws, just more compute

  76. Benjamin Mann:Yeah, I think that's a big one, and then the people really matter, like we have great researchers and many of them have made really significant contributions to the science of how the models improve. So it's compute, algorithms and data, those are the three ingredients in the scaling laws, and just to make that concrete, before we had transformers we had LSTMs, and we've done scaling laws on what the exponent is for those two architectures, and we found that for a transformer the exponent is higher. Making changes like that, where as you increase scale you also increase your ability to squeeze out intelligence, those kinds of things are super impactful, and so having more researchers who can do better science and find out how we squeeze out more gains is another one. And then with the rise of reinforcement learning, the efficiency with which these things run on chips also matters a lot. So we've seen in the industry like a 10x per year decrease in cost for a given amount of intelligence, through a combination of algorithmic, data and efficiency improvements, and if that continues, you know, in three years we'll have a thousand x smarter models for the same price, kind of hard to imagine
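
To spell out the compounding in that last claim: a 10x per-year drop in cost for a fixed level of capability works out to roughly 1000x over three years, and the earlier point about capex doubling every year takes today's rough $300 billion of annual industry spend into the trillions after a few doublings. A quick sanity check on those round numbers, all taken from the conversation:

```python
# Sanity-checking the compounding Ben mentions (inputs are his rough round numbers).
cost_factor_after_3_years = (1 / 10) ** 3      # 10x cheaper per year, three years running
capex_after_3_doublings = 300e9 * 2 ** 3       # ~$300B/yr of capex doubling three times

print(f"cost per unit of intelligence: {cost_factor_after_3_years:.3f}x of today (about 1000x cheaper)")
print(f"industry capex after 3 doublings: ${capex_after_3_doublings / 1e12:.1f} trillion per year")
```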

  77. Lenny Rachitsky:I forget where I heard this, but it's amazing that so many innovations came together at the same time to allow for this sort of thing and continue to progress, where one thing isn't just slowing everything down, like we're out of some rare earth mineral or we just can't optimize, I don't know, reinforcement learning any more. It's amazing that we continue to find improvements and there isn't one thing that's just slowing everything down

  78. Benjamin Mann:Yeah, I think it really is just a combination of everything. It probably will hit a wall at some point, like I guess in semiconductors, my brother works in the semiconductor industry and he was telling me that you can't actually shrink the size of the transistors anymore, because the way semiconductors work is you dope silicon with other elements, and the doping process would result in either zero or one atom of the doped elements inside a single fin, because they're so, so tiny

  79. Lenny Rachitsky:Oh my god

  80. Benjamin Mann:And that's just wild to think of, and yet Moore's law somehow continues in some form. So yes, there are these theoretical physics constraints that people are starting to run into, and yet they're finding ways around it.

  81. Lenny Rachitsky:We gotta start using parallel universes for some of the stuff.

  82. Benjamin Mann:I guess so.
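
For a sense of scale on the single-dopant-atom point a few turns back, here is a rough Poisson sketch under assumed FinFET dimensions and doping density (both are illustrative placeholders, not figures from the conversation): with numbers in this ballpark, a single fin really does end up with zero or one dopant atom most of the time.

```python
# Rough Poisson sketch of the "zero or one dopant atom per fin" point.
# Fin dimensions and doping density are illustrative assumptions.
import math

fin_volume_nm3 = 5 * 20 * 50             # assumed fin segment: 5 nm x 20 nm x 50 nm
fin_volume_cm3 = fin_volume_nm3 * 1e-21  # 1 nm^3 = 1e-21 cm^3
doping_per_cm3 = 1e17                    # assumed dopant concentration

mean_atoms = doping_per_cm3 * fin_volume_cm3  # expected dopant atoms in one fin

def poisson(k: int, lam: float) -> float:
    """Probability of exactly k atoms when the expected count is lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

print(f"Expected dopant atoms per fin: {mean_atoms:.2f}")
for k in (0, 1, 2):
    print(f"P({k} atoms) = {poisson(k, mean_atoms):.2f}")
```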

  83. Lenny Rachitsky:Okay, I wanna zoom out and talk about Ben as a human for a moment before we get to a very exciting lightning round. I imagine the burden of feeling responsible for safe superintelligence is a heavy one. It feels like you're in a place where you can make a significant impact on the future of safety and AI, and that's a lot of weight to carry. How does that impact you personally, your life, how you see the world?

  84. Benjamin Mann:There's this book that I read in 2019 that really informs how I think about working with these very weighty topics, called Replacing Guilt by Nate Soares. He describes a lot of different techniques for working through this kind of thing, and he's actually the executive director at MIRI, the Machine Intelligence Research Institute, which is an AI safety think tank that I worked at for a couple of months, actually. One of the things he talks about is this thing called resting in motion. Some people think that the default state is rest, but in the environment of evolutionary adaptation I really doubt that was true. In nature, in the wilderness, as hunter-gatherers, it's really unlikely that we evolved to just be at leisure. You probably always have something to worry about: defending the tribe, finding enough food to survive, taking care of the children, dealing...

  85. Lenny Rachitsky:With our genes.

  86. Benjamin Mann:Yeah. So I think about that as: the busy state is the normal state, and try to work at a sustainable pace; it's a marathon, not a sprint. That's one thing that helps. And then just being around like-minded people who also care. It's not a thing that any of us can do alone, and Anthropic has incredible talent density. One of the things I love the most about our culture here is that it's very egoless. People just want the right thing to happen, and I think that's another big reason that the mega offers from other companies tend to bounce off: people just love being here, and they care.

  87. Lenny Rachitsky:That's amazing. I don't know how you do it; I'd be extremely stressed. I'm gonna try this resting-in-motion strategy. Okay, so you've been at Anthropic for a long time, from the very beginning. I was reading there were seven employees back in 2020; today there's over a thousand. I don't know what the latest number is, but I know it's over a thousand. I've also heard that you've done basically every job at Anthropic, and you made big contributions to a lot of the core products, the brand, the team, hiring. Let me just ask: what has changed the most over that period? What is most different from the beginning days, and which of those jobs that you've had over the years have you most loved?

  88. Benjamin Mann:I probably had like 15 different roles, honestly. I was head of security for a bit. I managed the ops team when our president was on maternity leave; I was crawling around under tables plugging in HDMI cords and doing pen testing on our building. And I started our product team from scratch and convinced the whole company that we needed to have a product instead of just being a research company. So yeah, it's been a lot, all of it very fun. I think my favorite role in that time has been when I started the Labs team about a year ago, whose fundamental goal was to do transfer from research to end-user tech products and experiences, because fundamentally I think the way that Anthropic can differentiate itself and really win is to be on the cutting edge. We have access to the latest, greatest stuff that's happening, and honestly, through our safety research, we have a big opportunity to do things that no other company can safely do. For example, with computer use, I think that's gonna be a huge opportunity for us: to make it possible for an agent to use all your credentials on your computer, there has to be a huge amount of trust, and to me we need to basically solve safety, safety and alignment, to make that happen. So I'm pretty bullish on that kind of thing, and I think we're gonna see really cool stuff coming out soonish. Yeah, just leading that team has been so fun. MCP came out of that team, Claude Code came out of that team, and the people who I hired are a combo of having been founders and also having been at big companies and seen how things work at scale, so it's just been an incredible team to work with and figure out the future with.

  89. Lenny Rachitsky:I wanna hear more about this team, actually. The person who connected us, the reason we're doing this, is a mutual friend and colleague, Raf Lee, who I used to work with at Airbnb. He now works on this team and leads a lot of this work, and he wanted me to make sure I asked about this team. I didn't realize all these things came out of that team, holy moly. So what else should people know about this team? It used to be called Labs; I think it's called Frontiers now.

  90. Benjamin Mann:That's right.

  91. Lenny Rachitsky:Yeah, cool. So the idea here is this team works with the latest technologies that you guys have built and explores what is possible. Is that the general idea?

  92. Benjamin Mann:Yeah. And I guess I was part of Google's Area 120, and I've read about Bell Labs and how to make these innovation teams work. It's really hard to do right, and I wouldn't say that we've done everything right, but I think we've done some serious innovation on the state of the art of company design, and Raf has been right at the center of that. When I was first spinning up the team, the first thing I did was hire a great manager, and that was Raf, so he's definitely been crucial in building the team and helping it operate well. We defined some operating models: the journey of an idea from prototype to product, how graduation of products and projects should work, and how teams run sprint models that are effective and make sure they're working on the right ambition level of thing. So that's been really exciting. I guess, concretely, we think about skating to where the puck is going, and what that looks like is really understanding the exponential. There's this great study that METR has done, the organization Beth Barnes is the CEO of, that shows how long a time horizon of software engineering task can be done, and just really internalizing that: okay, don't build for today, build for six months from now, build for a year from now, and the things that aren't quite working, that are working 20% of the time, will start working 100% of the time. I think that's really what made Claude Code a success: we thought, people are not gonna be locked to their IDEs forever, people are not gonna be autocompleting, people will be doing everything that a software engineer needs to do, and a terminal is a great place to do that, because a terminal can live in lots of places. It can live on your local machine, it can live in GitHub Actions, it can live on a remote machine in your cluster. That's sort of the leverage point for us, and that was a lot of the inspiration. So I think that's what the Labs team tries to think about: are we AGI-pilled enough?
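
As an illustration of the "build for six months from now" heuristic, here is a small sketch that extrapolates a METR-style task-horizon trend forward; the starting horizon and doubling time below are placeholder assumptions, not figures from the conversation or from METR's published results.

```python
# Sketch of "skate to where the puck is going": extrapolate an exponential
# task-horizon trend and plan products for the capability expected in 6-12 months.
# Starting horizon and doubling time are illustrative assumptions.
from datetime import date, timedelta

horizon_minutes = 60.0   # assumed: agents reliably handle ~1-hour tasks today
doubling_months = 7.0    # assumed doubling time of the task-horizon trend

def projected_horizon(months_ahead: float) -> float:
    """Task horizon (in minutes) expected after the given number of months."""
    return horizon_minutes * 2 ** (months_ahead / doubling_months)

for months in (0, 6, 12, 24):
    when = date.today() + timedelta(days=30 * months)
    print(f"{when}: tasks of roughly {projected_horizon(months):,.0f} minutes")
```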

  93. Lenny Rachitsky:What a fun place to be. By the way, fun fact: Rafa was my first manager at Airbnb. When I joined I was an engineer, and he was my first manager. It all worked out well. Okay, final question before the very exciting lightning round, and I've never asked this question before, so I'm curious what your answer would be: if you could ask a future AGI one single question and be guaranteed to get the right answer, what would you ask?

  94. Benjamin Mann:I have two dumb answers first, okay, for fun. The first is there's this Asimov short story I love called The Last Question, where the protagonist, throughout the eras of history, is trying to ask this superintelligence how we prevent the heat death of the universe, and I won't spoil the ending, but it's a fun question.

  95. Lenny Rachitsky:And then you would ask it that question because the answer in the story wasn't satisfying, or...

  96. Benjamin Mann:Okay, I'll give it away. So it keeps saying it needs more information, needs more compute, and then finally, as it's approaching the heat death of the universe, it says "let there be light," and then it starts the universe over again. Oh wow, that's beautiful, that's beautiful. That's the first cheat answer. The second cheat answer is: what question can I ask you to get n more questions answered? Classic. And then the third answer, which is my real question, is: how do we ensure the continued flourishing of humanity into the indefinite future? That's the question I'd love to know the answer to, and if I can be guaranteed a correct answer, then it seems very valuable to ask.

  97. Lenny Rachitsky:Mhmm. I wonder what would happen if you asked Claude that today, and how that answer changes over the next couple of years.

  98. Benjamin Mann:Yeah, maybe I'll try that. I'll put it into the deep research thing that we have and see what it comes out with.

  99. Lenny Rachitsky:Okay, I'm excited to see what you come up with. Ben, is there anything else you wanted to mention or leave listeners with, maybe as a final nugget, before we get to a very exciting lightning round?

  100. Benjamin Mann:Yeah, I guess my push would be: these are wild times. If they don't seem wild to you, then you must be living under a rock. But also, get used to it, because this is as normal as it's gonna be. It's gonna be much weirder very soon, and if you can mentally prepare yourself for that, I think you'll be better off.

  101. Lenny Rachitsky:I need to make that the title of this episode: it's gonna get much weirder very soon. I 100% believe that. Oh my god, I don't know what's in store. I love how you're at the center of it all. With that, we've reached our very exciting lightning round. I've got five questions for you. Are you ready?

  102. Benjamin Mann:Yeah, let's do it.

  103. Lenny Rachitsky:What are two or three books that you find yourself recommending most to other people?

  104. Benjamin Mann:The first one I mentioned before: Replacing Guilt by Nate Soares. Love that one. The second one is Good Strategy Bad Strategy by Richard Rumelt, just thinking in a very clear way about how you build product. It's one of the best strategy books I've read, and strategy is a hard word to even think about in many ways. And then the last one is The Alignment Problem by Brian Christian, which really thoughtfully goes through what this problem is that we care about and are trying to solve here, and what the stakes are, in a version that's more updated and easier to read and digest than Superintelligence.

  105. Lenny Rachitsky:I've got Good Strategy Bad Strategy right behind me; I think I'm gonna point to it. There it is, nice. And I've had Richard Rumelt on the podcast, in case anyone wants to hear from him directly. Next question: do you have a favorite recent movie or TV show you've really enjoyed?

  106. Benjamin Mann:Pantheon was really good, based on a Ken Liu or Ted Chiang story. Ken Liu,

  107. Lenny Rachitsky:I think

  108. Benjamin Mann:Super good. It talks about what it means if we have uploaded intelligences and what their moral and ethical exigencies are. Then Ted Lasso, which is supposedly about soccer, but actually it's about human relationships and how people get along, and it's just super heartwarming and funny. And then this isn't really a TV show, but Kurzgesagt is my favorite YouTube channel; it goes through random science and social problems, and it's just super well done and super well made. Love watching that.

  109. Lenny Rachitsky:Wow, I haven't heard of that. As you were talking, I was thinking, Ted Lasso, I feel like that's what you need to put into constitutional AI: act like Ted Lasso.

  110. Lenny Rachitsky:Kind, smart, hardworking, exactly. Oh my god, there we go. I think we've solved the alignment problem right here; get those writers on this ASAP. Okay, two more questions: do you have a favorite life motto that you often come back to, in work or in life?

  111. Benjamin Mann:Yes

  112. Benjamin Mann:Well, a really dumb one is "have you tried asking Claude?" This is getting more and more common, where recently I asked a coworker, hey, who's working on X, and they were like, let me Claude that for you, and then they sent me the link to the thing afterwards, and I was like, oh yeah, thanks, that's great. But maybe a more philosophical one, I would say, is "everything is hard," just to remind ourselves that it's okay for things that feel like they're supposed to be easy not to be easy, and sometimes you just have to push through anyway.

  113. Lenny Rachitsky:Mhmm, and rest in motion while you're doing that. Yeah. Final question. I don't know if you want people to know this, but I was browsing through your Medium posts, and you have a post called "Five Tips to Poop Like a Champion." I'd love it, can you share one tip to poop like a champion, if you remember your tips?

  114. Benjamin Mann:I of course do. It's actually my most popular Medium post, so...

  115. Lenny Rachitsky:It's okay great I can

  116. Benjamin Mann:See it

  117. Lenny Rachitsky:It's a great title

  118. Benjamin Mann:I think maybe my biggest tip would be: use a bidet. It's amazing, it's life-changing, it's so good. Some people are kinda freaked out by it, but it's the standard in countries like Japan, and I think it's just more civilized. In ten or twenty years people will be like, how could you not use that?

  119. Lenny Rachitsky:Yeah, and a bidet could be like a Japanese toilet, that's along the same lines, right? Yeah. Okay, I love where we went with this. Ben, this was incredible. Thank you so much for doing this, thank you so much for sharing, so much real talk. Two final questions: where can folks find you online if they wanna reach out, maybe go work at Anthropic, and how can listeners be useful to you?

  120. Benjamin Mann:You can find me online at benjman.net, and on our website we have a great careers page that we're working on making a little bit easier to access and figure out, but definitely point Claude at it, and it can help you figure out what could be interesting for you. And how can listeners be useful to me? I think safety-pill yourself, that's the number one thing, and spread it to your network. Like I said, there are very few people working on this and it's so important, so yeah, think hard about it and try to look into it.

  121. Lenny Rachitsky:Thanks for spreading the gospel, Ben. Thank you so much for being here. Thanks so much, Lenny. Bye, everyone. Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com. See you in the next episode.