Topics
Bret Taylor: I think of myself first and foremost as an engineer, and my entire career has been built on that foundation. Even while I was co-CEO of Salesforce, I still wrote code on weekends. Engineering isn't just a job to me; it's a mindset that shapes almost every aspect of my life. So whatever role I've held, I've always seen myself as an engineer.


Chapters
Bret Taylor, a self-identified engineer, shares his career journey, emphasizing the importance of combining product and engineering skills. He recounts his experience at Stanford during the dot-com boom and bust, highlighting his decision to join Google and his pivotal role in rewriting Google Maps.
  • Bret self-identifies as an engineer, despite holding various corporate and board roles.
  • His career highlights include co-creating Google Maps and the Facebook Like button.
  • He emphasizes the importance of integrating product management and engineering skills for successful product development.

Shownotes Transcript


Hello, AI engineers. We're excited to bring you a special conversation with Bret Taylor, CEO of Sierra, the conversational AI platform now worth $4.5 billion, as well as chairman of the board of OpenAI, which needs no introduction. He has had a long, storied career in tech, including co-creating Google Maps and the Facebook Like button, starting Quip, and serving as co-CEO of Salesforce and chairman of Twitter Inc.

Through all his dizzying accomplishments, Bret is an AI engineer at heart and is incredibly passionate about the future of software development, arguing that we are moving from the autopilot era of software engineering to the autonomous era. He's equally at ease talking about the internals of JavaScript React web apps as he is talking about the human side of negotiating high-stakes situations like the OpenAI board drama of 2023.

In the various high-level conversations with senior AI leaders that his roles at Sierra and OpenAI afford him, he's observed the rise of the specialist AI architect role: a more senior technical AI leader who complements the AI engineer.

Throughout Bret's career, we observed a strong formula for his success: the tight integration of product management and engineering in small teams, aligned with customers by, and I quote, a "maniacal focus on outcomes" rather than exposing implementation details on the pricing chart.

We organized this conversation in the lead up to the AI leadership track at the AI Engineer Summit in New York City on February 20th, where we have gathered CTOs and VPs of AI of major companies from Bloomberg to LinkedIn to talk about their AI strategy with a close grounding on technical detail. This is our last call for AI leadership attendees to join us.

The engineering track is now sold out, and we expect our leadership slots to close soon. Apply for an invite at apply.ai.engineer. See you in two weeks. Watch out and take care.

Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host swyx, founder of Smol AI. Hey, and today we're super excited to have Bret Taylor join us. Welcome. Thanks for having me. It's a little unreal to have you in the studio. I've read about you so much over the years, like even before OpenAI effectively. I mean, I used Google Maps to get here.

So thank you for everything that you've done. You have a storied history; I think people can find your greatest hits on their own. How do you usually like to introduce yourself when you summarize your career? How do you look at yourself? Yeah, it's a great question. Before we went on the mics here, we were talking about the audience for this podcast being more engineering-focused. And I do think, depending on the audience, I'll introduce myself differently, because I've had a lot of

corporate and board roles. I probably self-identify as an engineer more than anything else, though. So even when I was co-CEO of Salesforce, I was coding on the weekends. So I think of myself as an engineer, and all the roles that I've had in my career sort of go

with that, just because I do feel like engineering is sort of a mindset and how I approach most of my life. So I'm an engineer first, and that's how I describe myself. You majored in computer science, like, 1998? That was high school, actually. My college degree was '02 undergrad, '03 master's. Not that old. Yeah, yeah. I mean, I was guessing like 1998 to 2003. But like...

Engineering wasn't a thing back then. We didn't have the title of senior engineer. It was just: you were a programmer, you were a developer maybe. What was it like at Stanford? What was that feeling like? Were you feeling on the cusp of a great computer revolution, or was it just a niche interest at the time? Well, I was at Stanford, as you said, from 1998 to 2002.

1998 was near the peak of the dot-com bubble. So this was back in the day when most people did their coding in the computer lab, just because there were these Sun Microsystems Unix boxes there that most of us had to do our assignments on. And every single day there was a dot-com, like, buying pizza for everybody. I got free food, like, my first two years of university.

And then the dot-com bubble burst in the middle of my college career. And so by the end, there was, like, tumbleweed going through the job fair. It was hard to describe unless you were there at the time, the level of hype; being a computer science major at Stanford meant, like, endless opportunities.

And then when I left, it was like Microsoft, IBM, and then the two startups that I applied to were VMware and Google. And I ended up going to Google in large part because a woman named Marissa Mayer, who had been

a teaching assistant when I was what was called a section leader, which was like a junior teaching assistant, kind of, for one of the big intro CS classes. She had gone there, and she was recruiting me, and I knew her, and it sort of felt safe. You know, I don't know that I thought about it much, but it turned out to be a real blessing. I realized, like, you know,

You always want to think you'd pick Google if given the option, but no one knew at the time. And I wonder, if I'd graduated in like 1999, whether I'd have been like, Mom, I just got a job at Pets.com, it's good. But, you know, in the end, I just didn't have many options. So I was like, do I want to go make kernel software at VMware? Do I want to go build search at Google? And I chose Google. A 50-50 ball, and not really a 50-50 ball. So I feel very fortunate in retrospect that the economy collapsed.

Because in some ways it forced me into one of the greatest companies of all time. But I kind of lucked into it, I think. So the famous story about Google is that you rewrote the Google Maps backend in one week after the Where 2 Technologies acquisition. What was the story there?

Is it actually true? Is it being glorified? Like, how did that come to be? And is there any detail that maybe Paul hasn't shared before? It's largely true, but I'll give the color commentary. So it was actually the front end, not the back end. But it turns out for Google Maps, the front end was sort of the hard part, just because Google Maps was largely the first-ish kind of really interactive web application. I say first-ish; I think Gmail certainly was, though with Gmail, probably a lot of people who weren't engineers didn't appreciate its level of interactivity. It was just fast.

But Google Maps, because you could drag the map and it was sort of graphical, really brought it into the mainstream, I think. Was it MapQuest back then? You had the arrows up and down? It was up and down arrows. Each map was a single image, and you just clicked left and then waited a few seconds for the new map to load. It was really small too, because generating a big image

was kind of expensive on the computers of that day. So Google Maps was truly innovative in that regard. The story on it: there was a small company called Where 2 Technologies started by two Danish brothers, Lars and Jens Rasmussen, who are two of my closest friends now. They had made a Windows app called Expedition, which had beautiful maps. Even in 2004,

whenever we acquired, or sort of acquihired, their company, Windows software was not particularly fashionable, but they were really passionate about mapping. And we had made a local search product that was kind of middling in terms of popularity, sort of like a yellow-pages search product. So we wanted to really go into mapping. We had started working on it. Their small team seemed passionate about it. So we were like, come join us, we can build this together.

It turned out to be a great blessing that they had built a Windows app, because you're less technically constrained when you're doing native code than you are building in a web browser, particularly back then, when there weren't really interactive web apps.

And it ended up changing the level of quality that we wanted to hit with the app, because we were shooting for something that felt like a native Windows application. So it was really good fortune that, you know, their unusual technical choices turned out to be the greatest blessing. So we spent a lot of time basically asking: how can you make an interactive, draggable map in a web browser? How do you progressively load, you know, new map tiles as you're dragging?

Even things down in the weeds of the browser: at the time, most browsers, like Internet Explorer, which was dominant then, would only load two images at a time from the same domain. So we ended up making our map tile servers have like 40 different subdomains so we could load maps in parallel. Lots of hacks.

I'm happy to go into as much depth as you want. Oh, just for like HTTP connections and stuff? There was just a maximum parallelism of two. And so if you had a set of map tiles, like eight of them, you were stuck waiting. So we were down in the weeds of the browser. Anyway, there was a lot of plumbing. I know a lot more about browsers than most people.
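
As an illustration of that hack, here's a minimal sketch in TypeScript; the subdomain scheme and shard count are hypothetical stand-ins, not the original Maps code:

```typescript
// Browsers of that era opened at most two HTTP connections per hostname,
// so spreading tiles across many subdomains let them download in parallel.

const SHARD_COUNT = 8; // hypothetical; the episode mentions ~40 subdomains

// Map each tile deterministically to a shard, so a repeated request for
// the same tile hits the same hostname (and the same browser cache entry).
function tileUrl(x: number, y: number, zoom: number): string {
  const shard = (x + y) % SHARD_COUNT;
  return `https://mt${shard}.example.com/tiles/${zoom}/${x}/${y}.png`;
}

// Kick off a grid of tile loads; each distinct subdomain gets its own
// two-connection budget, so 8 shards allow up to 16 downloads at once.
function loadTiles(xs: number[], ys: number[], zoom: number): void {
  for (const x of xs) {
    for (const y of ys) {
      const img = new Image();
      img.src = tileUrl(x, y, zoom);
      document.body.appendChild(img);
    }
  }
}
```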

But by the end of it, there was a lot of duct tape on that code. If you've ever done an engineering project where you're not really sure of the path from point A to point B, it's almost like building a house by building one room at a time: there's not a lot of architectural cohesion at the end. And then we acquired a company called Keyhole, which became Google Earth, which was that 3D view; it was a native Windows app as well.

Separate app, great app. But with that, we got licenses to all this satellite imagery. And so in August of 2005, we added satellite imagery to Google Maps, which added even more complexity to the code base. And then we decided we wanted to support Safari. There were no mobile phones yet, so Safari was this nascent browser on the Mac.

And it turns out there were a lot of decisions behind the scenes, sort of inspired by this Windows app, like heavy use of XML and XSLT and all these technologies that were briefly fashionable in the early 2000s and everyone hates now, for good reason.

And it turns out that all of the XML functionality in Internet Explorer wasn't supported in Safari. So people were re-implementing, like, XML parsers. And it was just this pile of shit. I'm not allowed to say shit on your podcast. Yeah, of course. So it went from this beautifully elegant...

application that everyone was proud of to something that probably had hundreds of K of JavaScript, which sounds like nothing now, but we're talking a time when people had modems, you know. It was a big deal. So it was slow. It took a while to load, and it just wasn't a great code base. Everything was fragile. So I just got

super frustrated by it. And then one weekend I did rewrite all of it. And at the time, the word JSON hadn't even been coined yet, just to give you a sense. So it was all XML. Yeah. So we used what you would now call JSON, but I just said, let's use eval so that we can parse the data fast. And again, it was literally JSON, but at the time there was no name for it. So we just said, let's send JavaScript from the server and eval it.

And then I suddenly just refactored the whole thing. And it wasn't like I was some genius. It was just, you know, you knew everything you wished you had known at the beginning. And I knew all the functionality, because I was one of the primary authors of the JavaScript.
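
A minimal sketch of that pre-JSON trick, for flavor (the URL and payload are hypothetical, and eval on untrusted input is unsafe, which is exactly why JSON.parse replaced it):

```typescript
// The server sends a JavaScript object literal as plain text, and the
// client evals it instead of parsing XML: what we'd now call JSON.
async function fetchMapData(url: string): Promise<unknown> {
  const response = await fetch(url); // circa 2005 this was XMLHttpRequest
  const text = await response.text(); // e.g. '{"lat": 37.42, "lng": -122.08}'
  // Wrapping in parentheses makes the braces parse as an expression
  // rather than a block statement.
  return eval(`(${text})`);
  // The modern, safe equivalent: return JSON.parse(text);
}
```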

I just drank a lot of coffee and stayed up all weekend. And then I guess I developed a bit of a reputation. No one knew about this for a long time. And then Paul, who created Gmail, and I ended up starting a company with him after all of this, told this story on a podcast. And now it's lore. But it's largely true. I did rewrite it. And my proudest thing, and I think JavaScript people will appreciate this: the

un-gzipped bundle size for all of Google Maps when I rewrote it was 20K. Gzipped was much smaller for the entire application. It went down by like 10x. And Google's a pretty mainstream company, so our usage just shot up, because it turns out

it's faster. Just being faster is worth a lot of percentage points of growth at the scale of Google. So how much modern tooling did you have? Like a test suite? No compilers? Actually, that's not true. We did have one thing. So actually, you can download it.

Google has a... Closure compiler. Yeah, Closure compiler. I don't know if anyone still uses it. Oh, yeah, yeah. Yeah, it's sort of gone out of favor. Facebook uses it. Yeah, well, even until recently, it was better than most JavaScript minifiers because it was more like... It did a lot more renaming of variables and things. Most people use ESBuild now just because it's fast. Closure compiler is...

built on Java and super slow and stuff like that. But we did have that. That was it. Oh, wow. Okay. And that was created internally. You know, it was a really interesting time at Google, because there were a lot of teams working on fairly advanced JavaScript when no one else was.

Google Suggest, which Kevin Gibbs was the tech lead for, was the first kind of type-ahead autocomplete in a web browser, I believe. And now it's just pervasive in search boxes; you sort of expect a type-ahead there. I mean, ChatGPT just added it. It's kind of like a round trip. Totally. It's now pervasive as a UI affordance, but that was, like, Kevin's 20% project.
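
For readers who never built one, here's a minimal sketch of the type-ahead pattern Google Suggest pioneered: debounce keystrokes, fetch suggestions asynchronously, and drop stale responses. The /suggest endpoint is hypothetical:

```typescript
let timer: ReturnType<typeof setTimeout> | undefined;
let latestRequest = 0;

function onKeystroke(query: string, render: (s: string[]) => void): void {
  clearTimeout(timer);
  // Wait for a brief pause in typing before hitting the server.
  timer = setTimeout(async () => {
    const requestId = ++latestRequest;
    const res = await fetch(`/suggest?q=${encodeURIComponent(query)}`);
    const suggestions: string[] = await res.json();
    // Responses can arrive out of order; only render the newest one.
    if (requestId === latestRequest) render(suggestions);
  }, 150);
}
```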

And then Gmail. Paul, you know, he tells the story better than anyone, but he was basically scratching his own itch. What was really neat about it is that email, because it's such a productivity tool, just needed to be faster. So, you know, he was scratching his own itch of just making more stuff work on the client side. And then we, because of Lars and Jens sort of setting the bar with this Windows app, were like, we need our maps to be draggable. And so we ended up

not only innovating in terms of what would be called a single-page application today, but also all the graphical stuff. You know, we were crashing Firefox like it was going out of style, because when you make a document object model with the idea that it's a document, and then you layer on some JavaScript, and then you're essentially abusing all of this, it just was running into code paths that were not well trodden at the time.

And so it was super fun. And, you know, in the building you had compilers, people helping minify JavaScript practically, but there were great engineering teams. That's why Closure Compiler was so good: it was a person who actually knew about programming languages doing it, not just, you know, writing regular expressions. And then the team that is now the Chrome team, I believe, and I don't know this for a fact, but I'm pretty sure, Google was the main contributor to Firefox for a long time in terms of code.

And a lot of browser people were there. So every time we would crash Firefox, we'd like walk up two floors and say like, what the hell is going on here? And they would load their browser like in a debugger and we could like figure out exactly what was breaking. And you can't change the code, right? Because it's the browser. It's like slow, right? I mean, slow to update. So, but we could figure out exactly...

where the bug was and then work around it in our JavaScript. So it was just new territory. It was a super, super fun time, just a lot of great engineers figuring out new things.

And now, you know, this term is no longer in fashion, but the word Ajax, which was Asynchronous JavaScript and XML. You see the word XML there; to be fair, the way you made HTTP requests from a client to a server was this object called XMLHttpRequest, because Microsoft, in making Outlook Web Access back in the day, made this. And it turns out it has nothing to do with XML. It's just a way of making HTTP requests, but XML was, like, the fashionable thing. That was the way you did it. But JSON came out of that, you know, and then a lot of the

best practices around building JavaScript applications. It was pre-React. I think React was probably the big conceptual step forward that we needed. Even at my first social network after Google, we used a lot of HTML injection, and making real-time updates was still very hand-coded. And it's really neat when you see conceptual breakthroughs like React, because it's

I just love those things where it's obvious once you see it, but it's so not obvious until you do. And actually, I'm sure we'll get into AI, but I sort of feel like we'll go through that evolution with AI agents as well. I feel like we're missing a lot of the core abstractions that in 10 years we'll look back at and be like, gosh, how did you make agents before that? But it was kind of like the early days of web applications. There are a lot of contenders for the React of AI, but no clear winner yet,

I would say. One thing I observed, I mean, there's so much we can go into there. You just covered so much. One thing I just observed is that I think the early Google days had this interesting mix of PM and engineer, which...

which I think you are. You didn't wait for a PM to tell you, this is my PRD, these are my requirements. Oh, okay. I wasn't technically a software engineer. I mean, by title, obviously. Right, right, right. It's like a blend. And I feel like these days, product is its own discipline, with its own lore and its own industry, and engineering is its own thing. And there's this process that happens and they're kind of separated, but you don't produce as good of a product as if they were the same person.

And I'm curious if that sort of resonates in terms of comparing...

early Google versus modern startups that you see out there? I certainly wear a lot of hats, so, you know, I'm sort of biased in this, but I really agree that there's a lot of power in combining product, design, and engineering into as few people as possible, because, you know, few great things have been created by committee. And so if engineering is an order-taking organization for product,

you can sometimes make meaningful things, but rarely will you create extremely well-crafted breakthrough products. Those tend to come from small teams who deeply understand the customer need that they're solving, who have a maniacal focus on outcomes. And I think, for some areas, if you look at software as a service five years ago, maybe you can have a separation of product and engineering, because for most software as a service created five years ago,

I wouldn't say there were a lot of technological breakthroughs required for most, you know, business applications. And if you're making expense reporting software or whatever, it's useful. I don't mean to be dismissive of expense reporting software, but you probably just want to understand: what are the requirements of the finance department? What are the requirements of an individual filing an expense report? Okay,

go implement that. And you kind of know how web applications are implemented. You kind of know how databases work, how to build auto-scaling with your AWS cluster, whatever. It's just you're just applying best practices to yet another problem. When you have areas like the early days of mobile development or the early days of interactive web applications, which I think Google Maps and Gmail represent, or now AI agents, you're in this constant...

conversation with what the requirements of your customers and stakeholders are and all the different people interacting with it and the capabilities of the technology. And it's almost impossible to specify the requirements of a product when you're not sure of the limitations of the technology itself. And that's why I use the word conversation. It's not literal, though it's sort of funny to use that word in the age of conversational AI.

You're constantly sort of saying, ideally you could sprinkle some magic AI pixie dust and solve all the world's problems, but it's not the way it works. And it turns out that, actually, I'll just give an interesting example. I think most people listening probably use copilots to code, like Cursor or Devin or Microsoft Copilot or whatever. Most of those tools are remarkable. I couldn't imagine development without them now.

But they're not autonomous yet. I wouldn't let one write most code without my interactively inspecting it. We're just somewhere between it's an amazing copilot and it's an autonomous software engineer. As a product manager, your aspirations for what the product is are kind of moot.

But if you're a product person, yeah, of course you'd say it should be autonomous; you should click a button and programs should come out the other side. The requirement's meaningless. What matters is: based on the very nuanced limitations of the technology, what is it capable of? And then how do you maximize the leverage it gives a software engineering team given those very nuanced trade-offs,

coupled with the fact that those nuanced trade-offs are changing more rapidly than any technology in my memory, meaning every few months you'll have new models with new capabilities. So how do you construct a product that can absorb those new capabilities as rapidly as possible? That requires such a combination of technical depth and understanding of the customer that you really need a tighter integration

of product, design, and engineering. And so I think it's why, with these big technology waves, startups have a bit of a leg up relative to incumbents, because they tend to be sort of more self-actualized in terms of bringing those disciplines closer together.

And in particular, I think entrepreneurs, the proverbial full-stack engineers, you know, have a leg up as well, because I think most breakthroughs happen when you have someone who can understand those extremely nuanced technical trade-offs, have a vision for a product, and then, in the process of building it, have that, as I said, metaphorical conversation with the technology: gosh, I ran into a technical limit that I didn't expect.

It's not just like changing that feature. You might need to refactor the whole product based on that. And I think that it's particularly important right now. So I don't, you know, if you're building a big ERP system, probably there's a great reason to have product and engineering. I think in general, the disciplines are there for a reason. I think when you're dealing with something as nuanced as the like technologies, like large language models today, there's a ton of advantage of having technology

individuals or organizations that integrate the disciplines more formally. That makes a lot of sense. I've run a lot of engineering teams in the past, and I think the product versus engineering tension has always been more about effort than like whether or not the feature is buildable. But I think, yeah, today you see a lot more of like models actually cannot do that. And I think the most interesting thing is on the startup side, people don't yet know where a lot of the value is going to accrue. So you have this rush of people building frameworks, building infrastructure layer things.

but we don't really know the shape of the compute. I'm curious how you thought about building a lot of the tooling in-house at Sierra, for evals, or just, you know, building the agents and all of that, versus how you see some of the startup opportunities that are maybe still out there. We build most of our tooling in-house at Sierra. Not all. It's not not-invented-here syndrome necessarily, though maybe we're slightly guilty of that in some ways, but

because we're trying to build a platform that's enduring, you know, we really want to have control over our own destiny.

You had made a comment earlier that we're still trying to figure out what the React of agents is, and the jury's still out. I would argue it hasn't been created yet; I don't think the jury's still out. To use that metaphor, we're sort of in the jQuery era of agents, not the React era. And that's a throwback for people listening. We shouldn't rush it. No, yeah, that's my point. And so, because we're trying to create an enduring company at Sierra that outlives us,

I'm not sure we want to hitch our cart to a horse when it's not clear we've figured it out. And actually, as a company, just at a high level, and I'll quickly go back to tech: at Sierra, we help consumer brands build customer-facing AI agents. So, yeah.

Everyone from Sonos to ADT Home Security to SiriusXM, if you call them on the phone, an AI will pick up. If you chat with them on the SiriusXM homepage, it's an AI agent called Harmony that they've built on our platform.

What are the contours of what it means for someone to build an end-to-end, complete customer experience with conversational AI? We really want to dive into the deep end of all the trade-offs to do it. Where do you use fine-tuning? Where do you string models together? Where do you use reasoning? Where do you use generation?

How do you express the guardrails of an agentic process? How do you impose determinism on a fundamentally non-deterministic technology? It's an important design space. And I could sit here and tell you we have the best approach, as every entrepreneur will, you know. But I hope that in two years we look back at our platform and laugh at how naive we were, because that's the pace of change. Broadly, you talked about the startup opportunities, right?

I'm not wholly skeptical of tools companies, but I'm fairly skeptical. There's always an exception for every rule, but I believe that certainly there's a big market for frontier models, but largely for companies with huge CapEx budgets: so OpenAI and Microsoft, Anthropic and Amazon Web Services, Google Cloud, which is very well capitalized now. But I think

the idea that a company can make money sort of pre-training a foundation model is probably not true. It's hard; you're competing with, you know, unreasonably large CapEx budgets. And, just like the cloud infrastructure market, I think it will largely consolidate to a handful of players. I also really believe in the applications of AI, and

I define that not as, like, building agents or things like that; I define it much more as actually solving a problem for a business. So it's what Harvey is doing in the legal profession, or what Cursor is doing for software engineering, or what we're doing for customer experience and customer service. The reason I believe in that is, I do think that in the age of AI, what's really interesting about software is that it can actually complete a task. It can actually do a job, which is very different from the value proposition of software before, going back

to ancient history, two years ago. And as a consequence, I think the way you build a solution for a domain is very different than you would have before, which means it's not obvious that the incumbents have a leg up, you know, necessarily. They certainly have some advantages, but there's just such a different form factor, you know, for providing a solution.

And it's just really valuable. You know, it's like just think of how much money Cursor is saving software engineering teams or the alternative, how much revenue it can produce.

Toolmaking is really challenging. If you look at the cloud market just as an analog, there are a lot of interesting tools companies: Confluent, which monetized Kafka; Snowflake; Hortonworks. There's a bunch of them. A lot of them, like Confluent, have the open source or open core model, or whatever you call it. I'm not an expert in this area.

You know, I do think that developers are fickle. In the tool space, I probably default towards open source being the area that will win. It's hard to build a company around this, and then you do end up with companies built around open source that can work, don't get me wrong. But nowadays the tools are changing so rapidly that, while I'm not totally skeptical of toolmakers, I just think that open source will broadly win. But I think that

the CapEx required for building frontier models is such that it will go to a handful of big companies. And then I really believe in agents for specific domains, which I think is sort of the analog to software as a service in this new era. It's like, if you just think of the cloud, you can lease a server, it's just a low-level primitive, or you can buy an app like Shopify or whatever. And most

people building a storefront would prefer Shopify over hand-rolling their e-commerce storefront. I think the same thing will be true of AI. So if an entrepreneur asks me for advice, I'm like, you know, move up the stack as far as you can towards a customer need, broadly. But it doesn't reduce my excitement about the what-is-the-React-of-building-agents question, just because it is the right question to ask; I just think it will probably play out in the open source space more than anything else.

Yeah, and it's not a priority for you. There's a lot in there. I'm kind of curious about your idea maze towards, there are many customer needs. You happen to identify customer experience as yours, but it could equally have been coding assistance or whatever. I think for some, I'm just kind of curious at the top down, how do you look at the world in terms of the potential problem space? Because there are many people out there who are very smart and pick the wrong problem. Yeah, that's a great question. By the way,

I would love to talk about the future of software too, because despite the fact I didn't pick coding, I obviously think a lot about it. But I can answer your question though. You know, I think when a technology is as cool as large language models are,

You just see a lot of people starting from the technology and searching for a problem to solve. And I think it's why you see a lot of tools companies, because as a software engineer, you start building an app or a demo and you encounter some pain points. It's too hard. A lot of people are experiencing the same pain point. What if I make a thing to solve that? It's just very incremental. And, you know, I always like to use the metaphor like

You can sell coffee beans, roasted coffee beans. You can add some value. You took coffee beans and you roasted them. And roasted coffee beans largely are priced relative to the cost of the beans. Or you can sell a latte. And a latte is rarely priced directly as a percentage of coffee bean prices. In fact, if you buy a latte at the airport...

It's a captive audience, so it's a really expensive latte. And there's just a lot that goes into like, how much does a latte cost? And I bring it up because there's a supply chain from growing coffee beans to roasting coffee beans to like, you know, you could make one at home or you could be in the airport and buy one.

And the margins of the company selling lattes in the airport are a lot higher than those of, you know, the people roasting the coffee beans. And it's because you've actually solved a much more acute human problem in the airport, and it's just worth a lot more to that person in that moment.

It's kind of the way I think about technology too. It sounds funny to liken it to coffee beans, but if you're selling tools on top of a large language model: yeah, in some ways your market is big, but you're probably going to be price-compressed, just because you're sort of a piece of infrastructure, and then you have open source and all these other things competing with you naturally. If you go and solve a really big business problem for somebody, an actually meaningful business problem that AI facilitates,

they will value it according to the value of that business problem. And so I actually feel like people should just... you're like, no, that's unfair to people searching for an idea. Look, I love people trying things; a lot of the greatest ideas have been things no one believed in. So if you're passionate about something, go do it. Like, who am I to say? Yeah. Or Gmail: some of it's lore at this point, but Gmail was Paul's own email, yeah, for a long time.

And then, amusingly, and Paul can correct me, I'm pretty sure he sent around a link, and the first comment was like, this is really neat; it would be great if it was not your email, but my own. I don't know if it's a true story. I'm pretty sure. Yeah, I've read that before. So scratch your own itch, fine. It depends on what your goal is: a venture-backed company versus a passion project. If it's a passion project, fucking do it; don't listen to anybody. But if you're trying to start, you know, an enduring company, don't just scratch your own itch;

solve an important business problem. And I do think that in the world of agents, the software industry has shifted: you're not just helping people be more productive, you're actually accomplishing tasks autonomously. And as a consequence, I think the addressable market has greatly expanded, just because software can actually do things now and actually accomplish tasks. How much is coding autocomplete worth? A

fair amount. How much is the eventual, and I'm certain we'll have it, software agent that actually writes the code and delivers it to you worth? That's worth a lot. And so, you know, I would just maybe look up from the large language models and start thinking about the economy, and, you know, think from first principles. I don't want to get too far afield, but just think about which parts of the economy will benefit most from this intelligence, and which parts can absorb it

most easily. And what would an agent in this space look like? Who's the customer of it? Is the technology feasible? And I would just start with these business problems more. And I think, you know, the best companies tend to have

great engineers who happen to have great insight into a market. And it's that last part that some people have and some don't; people start so much from the technology that they lose the forest for the trees a little bit. How do you think about the model of still selling some sort of software versus selling more packaged labor? I feel like when people are selling packaged labor, it's almost more

stateless, you know; it's easier to swap out if you're just putting in an input and getting an output. If you think about coding, if there's no IDE and you're just putting in a prompt and getting back an app, it doesn't really matter who generates the app. You have less of a buy-in versus the platform you're building. I'm sure on the backend, customers have to bring in their documentation; they have, you know, different workflows that they can tie in. What's kind of the line to draw there between "we fully manage your customer support team as an outsourced service" versus "this is the Sierra platform that you can build on"? What was that decision?

I'll sort of decouple the question in some ways, which is: when you have something that's an agent, who is the person using it, and what do they want to do with it? So let's just take your coding agent for a second. I will talk about Sierra as well.

Who's the customer of an agent that actually produces software? Is it a software engineering manager? Is it a software engineer? And it's their intern, so to speak? I don't know. I mean, we'll figure this out over the next few years. What is that? And

Is it generating code that you then review? Is it generating code with a set of unit tests that pass? What is the actual, for lack of a better word, contract? Like, how do you know that it did what you wanted it to do? And then I would say the product and the pricing and packaging model sort of emerge from that. And I don't think the world's figured it out; I think it'll be different for every agent. You know, in our customer base, we do what's called outcome-based pricing. So essentially, every time the AI agent solves the problem or saves a customer or whatever it might be, there's a pre-negotiated rate for that. We do that because we think that's the correct way agents should be packaged.
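
To make the contrast concrete, here's an illustrative sketch (not Sierra's actual billing code; the rate and field names are made up) of usage-based versus outcome-based invoicing:

```typescript
interface Conversation {
  tokensUsed: number;
  resolvedByAgent: boolean; // did the AI complete the job end to end?
}

const PER_RESOLUTION_RATE = 2.0; // hypothetical pre-negotiated rate, in dollars

// Usage-based: the customer pays for effort, helped or not.
function usageBasedInvoice(convos: Conversation[], perToken: number): number {
  return convos.reduce((sum, c) => sum + c.tokensUsed * perToken, 0);
}

// Outcome-based: the customer pays only for jobs well done.
function outcomeBasedInvoice(convos: Conversation[]): number {
  return convos.filter((c) => c.resolvedByAgent).length * PER_RESOLUTION_RATE;
}
```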

I look back at the history of cloud software, and notably the introduction of the browser, which led to software being delivered in a browser, like Salesforce, who famously invented software as a service. That's both a technical delivery model, through the browser, and a business model: you subscribe to it rather than pay for a perpetual license.

Those two things are somewhat orthogonal, but not really. If you think about the idea of software running in a browser, hosted in a data center that you don't own, you sort of needed to change the business model, because you can't really buy a perpetual license to something like that. Otherwise, how do you afford making changes to it? It only worked when you were buying a new version every year or whatever. So to some degree, but then...

The business model shift actually changed business as we know it, because now things like Adobe Photoshop, now you subscribe to rather than purchase. So it ended up where you had a technical shift and a business model shift that were very logically intertwined, that actually the business model shift turned out to be as significant as the technical shift. And I think with agents, because they actually accomplish a job, I do think that

It doesn't make sense to me that you'd pay for the privilege of like using the software, like that coding agent, like if it writes really bad code, like fire it. I don't know what the right metaphor is. You should pay for a job well done, in my opinion. I mean, that's how you pay your software engineers, right? And...

Well, not really. We put them on salary and give them options, and they vest over time. That's fair, but my point is that you don't pay them for how many characters they write, which is sort of the token-based model, you know, whatever. Like, there's that famous Apple story where they were asking for a report of how many lines of code you wrote, and

one of the engineers showed up with a negative number, because he had just done a big refactoring. It was like a big F-you to management who didn't understand how software is written. You know, my sense is the traditional usage-based or seat-based thing is just going to look really antiquated, because it's like asking your software engineer, how many lines of code did you write today? Who cares? There's absolutely no correlation. So my whole view is, and I think it'll be different in every category, but I do think that

if an agent is doing a job, paying for the job well done properly incentivizes both the maker of that agent and the customer. It's not always perfect to measure; it's hard to measure engineering productivity. But you should do something other than count how many keys were typed. Talk about perverse incentives for AI, right? Like, I can write really long functions to do the same thing, right?

So broadly speaking, you know, I do think we're going to see a change in the business models of software towards outcomes, and I think you'll see a change in delivery models too. And, you know, in our customer base, we empower our customers to really have their hands on the steering wheel of what the agent does. They want and need that, but the role is different. At a lot of our customers, the customer experience operations folks have renamed themselves the AI architects, which I think is really cool.

It's like in the early days of the internet, there was the role of the webmaster. Webmaster is no longer a fashionable term, nor is it really a job anymore. Will AI architect stand the test of time? Maybe, maybe not. But again, because everyone listening right now is a software engineer: what is the form factor of a coding agent? And actually, I'll take a breath here, because I have a bunch of opinions on this. I wrote a blog post right before Christmas just on the future of software development, and

one of the things that's interesting is, if you look at the way I use Cursor today, as an example, it's inside of a repackaged Visual Studio Code environment. I sometimes use the sort of agentic parts of it, but largely, you know, I've gotten a good routine of making it auto-complete code in the way I want through tuning it properly. When it actually can write the code itself,

I do wonder what the future of development environments will look like. And to your point on what is a software product, I think it's going to change a lot in ways that will surprise us. But I use the metaphor in my blog post of, have you all driven around in a Waymo around here? Yeah, everyone has. And there are these Jaguars, really nice cars. But it's funny because it still has a steering wheel, even though there's no one sitting there and the steering wheel is like turning and stuff. Yeah.

Clearly in the future, once that becomes more ubiquitous, why have the steering wheel? And why have all the seats facing forward? Maybe just for car sickness, I don't know. But you could totally rearrange the car. I mean, so much of the car is oriented around the driver.

So it stands to reason to me that like, will autonomous agents for software engineering run through Visual Studio Code? That seems a little bit silly because having a single source code file open one at a time is kind of a goofy form factor for when like the code isn't being written primarily by you. But it begs the question of what's your relationship with that agent? And I think the same is true in our industry of customer experience, which is like,

who are the people managing this agent? What tools do they need? And they definitely need tools, but it's probably pretty different than the tools we had before. It's certainly different than training a contact center team. And as software engineers, I think that I would like to see, particularly on the passion project side or research side, more innovation in programming languages. I think that

We're bringing the cost of writing code down to zero. So the fact that we're still writing Python with AI cracks me up, just because Python literally was designed to be ergonomic to write, not safe to run or fast to run. I would love to see more innovation in how we verify program correctness. I studied formal verification in college a little bit, and

it's not very fashionable, because it's really tedious and slow and doesn't work very well. But if a lot of code is being written by a machine, you know, one of the primary values we can provide is verifying that it actually does what we intend it to do. I think there should be lots of interesting things in the software development lifecycle, like how we think of testing and everything else,

because if we have to manually read every line of code that's coming out of these machines, it will just rate-limit how much the machines can do. But the alternative is totally unsafe; I wouldn't want to put code in production that didn't go through proper code review and inspection. So my whole view is, I actually think there's an AI-native approach here. The coding agents don't work well enough to do this yet. But once they do, what is sort of an AI-native software development lifecycle? And how do you actually

enable the creators of software to produce the highest-quality, most robust, fastest software and know that it's correct? I think that's an incredible opportunity. I mean, how much C code can we rewrite in Rust and make safe so that there are fewer security vulnerabilities?

Can we have more efficient, safer code than ever before? And can you have someone who's like that guy in The Matrix, you know, staring at the little green characters: where could you have an operator of a code-generating machine be superhuman? I think that's a cool vision. And I think too many people are focused on

autocomplete, you know, right now. Not even... I'm guilty as charged. I'd just like to see some bolder ideas. And that's why, when you were joking about what's-the-React-of-whatever: I think we're clearly in a local maximum, a sort of conceptual local maximum. Obviously it's moving really fast. I think we're moving out of it.
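
One hedged sketch of the "verify rather than read" idea from a few paragraphs up: state properties the code must satisfy and check them mechanically, instead of having a human read every machine-written line. This uses the fast-check property-testing library, with aiGeneratedSort standing in for any agent-written function:

```typescript
import fc from "fast-check";

// Pretend this implementation came out of a coding agent.
function aiGeneratedSort(xs: number[]): number[] {
  return [...xs].sort((a, b) => a - b);
}

// Properties we demand regardless of how the code was produced.
fc.assert(
  fc.property(fc.array(fc.integer()), (xs) => {
    const out = aiGeneratedSort(xs);
    // 1. The output is ordered.
    for (let i = 1; i < out.length; i++) {
      if (out[i - 1] > out[i]) return false;
    }
    // 2. The output has the same length as the input (a fuller check
    //    would compare multisets to confirm it's a permutation).
    return out.length === xs.length;
  })
);
```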

Yeah, at the end of '23, I wrote this blog post from syntax to semantics. Like, if you think about Python, it's taking C and making it more semantic. And LLMs are like the ultimate semantic program, right? You can just talk to them and they can generate any type of syntax from your language. But again, the languages that they have to use were made for us, not for them.

But the problem is, as long as you will ever need a human to intervene, you cannot change the language under it. You know what I mean? So I'm curious what point of automation we'll need to get to before we're okay making changes to the underlying languages, the programming languages, versus just saying, hey, you've got to write Python because I understand Python, and I'm more important at the end of the day than the model. I think that will change, but I don't know if it's two years or five years. I think it's more nuanced, actually. So I think there's a...

Some of the more interesting programming languages bring semantics into syntax. So maybe that's a little reductive, but like Rust as an example. Rust is memory safe statically. And that was a really interesting conceptual break. It's why it's hard to write Rust. It's why most people write Python instead of Rust.

I think Rust programs are safer and faster than Python, probably slower to compile. But like broadly speaking, like given the option, if you didn't have to care about the labor that went into it, you should prefer a program written in Rust over a program written in Python. Just because it will run more efficiently. It's almost certainly safer, et cetera, et cetera, depending on how you define safe.

But most people don't write Rust, because it's kind of a pain in the ass and the audience of people who can is smaller; but it's sort of better in most ways. And again, say you're making a web service and you didn't have to care about how hard it was to write: if you just got the output of the web service, the Rust one would be cheaper to operate. It's certainly cheaper, and probably more correct, just because there's so much static analysis implied by the Rust programming language that it will probably have fewer runtime errors and things like that as well.

So I just give that as an example because Rust, at least my understanding is it came out of the Mozilla team because there's lots of security vulnerabilities in the browser and it needs to be really fast. So they said, "Okay, we want to put more of a burden at the authorship time to have fewer issues at runtime. And we need the constraint that it has to be done statically because browsers need to be really fast."

My sense is if you just think about like the needs of a programming language today, where the role of a software engineer is to use an AI to generate functionality and audit that it does in fact work as intended, maybe functionally, maybe from like a correctness standpoint, some combination thereof. How would you create a programming system that facilitated that?

And, you know, I bring up Rust just because I think it's a good example of like, I think given a choice of writing in C or Rust, you should choose Rust today. I think most people would say that, even C aficionados, just because C is largely less safe for very similar, you know, trade-offs, you know, for the system. And now with AI, it's like, okay, well, that just changes the game on writing these things. And so like, I just wonder if a combination of programming languages that are more structurally oriented towards

the values that we need from an AI-generated program, verifiable correctness and all of that, could win out. If it's tedious for a person to produce, that maybe doesn't matter. But here's one thing: if I asked you, is this Rust program memory safe? You wouldn't have to read it. You'd just have to compile it.

So that's interesting. I mean, that's one example of a very modest form of formal verification. I bring that up because I do think you can have AI inspect AI; you can have AI do code reviews. But it would disappoint me if the best we could get was AI reviewing Python. Having scaled a few very large websites that were written in Python, it's just, you know, expensive.

Trust me, every team who's written a big web service in Python has experimented with PyPy and all these things just to make it slightly more efficient than it naturally is. You don't really have true multi-threading. Anyway, clearly you do it just because it's convenient to write. And I don't want to say it's insane; I just do think we're at a local maximum. I would hope that we create a programming system, a combination of programming languages, formal verification, testing, and automated code reviews, where

you can use AI to generate software in a high-scale way and trust it, and you're not limited by your ability to read it necessarily. I don't know exactly what form that would take, but I feel like that would be a pretty cool world to live in. Yeah, we had Chris Lattner on the podcast. He's doing great work with Modular. I love LLVM. Yeah, basically merging Rust and LLVM.

That's kind of the idea. But for them, a big use case was making it compatible with Python, same APIs, so that Python developers could use it. And so I wonder at what point... At least my understanding is they're targeting the data science machine learning crowd, which is all written in Python. So it still feels like a local maximum. Yeah, exactly. I'll force you to make a prediction. Python's roughly 30 years old. In 30 years from now, is Rust going to be bigger than Python?

I don't know this, but just, I don't even know this is a prediction. I just am sort of like saying stuff I hope is true. I would like to see an AI native programming language and programming system. And I use language because I'm not sure language is even the right thing, but...

But I hope in 30 years there's an AI native way we make software that is wholly uncorrelated with the current set of programming languages. Or not uncorrelated, but I think most programming languages today were designed to be efficiently authored by people and...

Some have different trade-offs. You know, you have Haskell and others that were designed for abstractions, for parallelism, and things like that. You have programming languages like Python, which are designed to be very easily written, sort of the Perl and Python lineage, which is why data scientists use it; it has an interactive mode, things like that. And despite all my Python trash talk, I'm a huge Python fan.

At least two of my three companies were exclusively written in Python. And then C came out of the birth of Unix, and it wasn't the first, but it was certainly the most prominent first step after assembly language, right? Where you had higher-level abstractions, going beyond goto to abstractions like the for loop and the while loop.

So I just think that, if the act of writing code is no longer a meaningful human exercise (maybe it will be, I don't know; it just sort of feels like one of those parts of history that will go away), there's still the role of the software engineer, the person actually building the system, right? And what does a programming system for that form factor look like? And I just have a hope. Just like I mentioned React: I remember I was there

in the very early days when what is now React was being created. And I remember when it was released open source (I had left by that time), and I was just like, this is so fucking cool. You know, to basically model

your app independent of the data flowing through it just made everything easier. And now, to be honest with you, a lot of the front-end software is a little chaotic for me. It's sort of abstraction soup right now for me. But some of those core ideas felt really ergonomic.

I'm looking forward to the day when someone comes up with a programming system that feels both like an aha moment and completely foreign to me at the same time, because they created it from first principles, recognizing that

authoring code in an editor is maybe not the primary reason a programming system exists anymore. That would be a very exciting day for me. Yeah. I would say various versions of this discussion have happened before. At the end of the day, you still need to precisely communicate what you want. As a manager of people, as someone who has done many, many legal contracts, you know how hard that is.

And then now we have to talk to machines doing that and AI is interpreting what we mean and reading our minds effectively. I don't know how to get across that barrier of translating human intent to instructions. And yes, it can be more declarative.

But I don't know if it'll ever cross over from being a programming language to something more than that. I agree with you. And I actually do think, if you look at a legal contract, you know, the imprecision of the English language might feel like a flaw in the system; there are so many holes. And when you're making a mission-critical software system, I don't think it should be English-language prompts. I think that is silly, because

you want the precision of a programming language. My point was less about that and more about if the actual act of authoring it, like if you...

I think some embedded systems do use formal verification. I know it's very common in security protocols now, because the importance of correctness is so great. My intellectual exercise is: why not do that for all software? I mean, probably it's silly to literally do what we do for these low-level security protocols.

But the only reason we don't is because it's hard and tedious, and hard and tedious are no longer factors. Just think of the silliest app on your phone right now. The idea that that app should be formally verified for its correctness feels laughable right now, because, God, why would you spend the time on it? But if it's zero cost? Yeah, I guess so. I mean, it never crashes. That's probably good. You know, why not? I just want to set our bar really high. Like, we should make...

Software's been amazing. There's that Marc Andreessen blog post, software is eating the world. Our whole life is mediated digitally, and that's just increasing with AI. And now we'll have our personal agents talking to the agents on the CRM platform, and it's agents all the way down.

Our core infrastructure is running on these digital systems, and we've had a shortage of software developers for my entire life. As a consequence, remember healthcare.gov, that fiasco, or security vulnerabilities leading to state actors getting access to critical infrastructure. We've now created this amazing system, and we can fix this, you know?

I'm excited about the productivity gains for the economy, but I just think as software engineers, we should be bolder. We should have aspirations to fix these systems, so that, as you said, we can be as precise as we want to be in the specification of the system.

We can make it work correctly now. I'm being a little bit hand-wavy, and I think we need new systems to do it, but that's where we should set the bar, especially when so much of our life depends on this critical digital infrastructure. So I'm just super optimistic about it. But actually, let's go to what you said for a second, which is specifications. I think this is the most interesting part of AI agents broadly, which is that

most specifications are incomplete. Let's go back to our product engineering discussions. You're like, okay, here's a PRD, a product requirements document, and it's really detailed: there are mock-ups, and when you click this button, it does this. And yet

I can 100% guarantee you can think of a missing requirement in that document. Let's say you click this button and the internet goes out. What do you do? I don't know if that's in the PRD. It probably isn't. There's always going to be something, because humans are complicated, right? So

what ends up happening is, I don't know if you could even measure it, but what percentage of a product's actual functionality is determined by its code versus its specification, for a traditional product? Oh, 95%. I mean, maybe not quite, but a lot of it. So code is the specification. It's actually why, if you look at the history of technology, open source has won out over specifications.
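
As a hypothetical sketch of that point, every name below is invented: the PRD says "when the user clicks Save, the order is saved," and everything else in this function is a decision the code made, not the spec.

```python
# A hypothetical illustration: how much de facto product behavior lives in
# code that no requirements document ever specified.
import time

def save_order(order, api_client, max_retries=3):
    """Persist an order. The retry policy, backoff, and failure behavior
    below were never in any PRD; an engineer decided them."""
    for attempt in range(max_retries):
        try:
            return api_client.put("/orders", json=order)
        except TimeoutError:
            # Unspecified case: the network went out. Retrying with
            # exponential backoff is a judgment call, now product behavior.
            time.sleep(2 ** attempt)
    # Also unspecified: after retries, fail loudly rather than queue offline.
    raise RuntimeError("order not saved; surface an error to the user")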

For a long time, there was a W3C working group on the HTML specification. And then once WebKit became prevalent, the internet evolved a lot faster. That's no knock on the standards organizations; it just turns out having a committee of people argue is a lot less efficient than

someone checking in code. And then all of a sudden you had vector graphics and all this really cool stuff that, as someone from the Google Maps days, God, that would have made my life easier. SVG support, right? Life would have been a breeze. Try drawing a driving-directions line without vector graphics.

And so, in general, I think we've gone from protocols defined in a document to open source code that becomes an implicit standard, like system calls in Linux. There is a specification, POSIX, as a standard, but the kernel is what people write against. And it's both

the documented behavior and all of the undocumented behaviors as well, for better or for worse. It's why Linus and others are so adamant about things like binary compatibility. This stuff matters.

So one of the things I really think about, working with agents broadly, is: I won't say it's easy to specify the guardrails, but what about all those unspecified behaviors? So much of being a software engineer is coming to the point where the internet's out, you get back the error code from the call, and you've got to do something with it.

And what percentage of the time do you just say, yeah, I'm going to do this because it seems reasonable, and what percentage of the time do you write a Slack message to your PM asking, what do I do in this case? It's probably more the former than the latter; otherwise it'd be really fricking inefficient to write software. But what happens when your AI makes that decision for you? It's not a wrong decision. You didn't say anything about that case.
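
One way to make those implicit judgment calls reviewable is sketched below. This is a toy illustration of the general idea, not Sierra's actual system; the llm callable and all the names are stand-ins.

```python
# A minimal sketch: when an agent hits a case no one specified, record the
# decision so a human can audit and correct it later.
import datetime
import json

DECISION_LOG = "agent_decisions.jsonl"

def decide_unspecified_case(situation: str, guardrails: list[str], llm) -> str:
    prompt = (
        "No explicit rule covers this situation. Choose a reasonable action "
        f"consistent with these guardrails: {guardrails}\n"
        f"Situation: {situation}\n"
        "Respond with the action and a one-line rationale."
    )
    decision = llm(prompt)
    # The key idea: implicit judgment calls become reviewable artifacts.
    with open(DECISION_LOG, "a") as f:
        f.write(json.dumps({
            "time": datetime.datetime.utcnow().isoformat(),
            "situation": situation,
            "decision": decision,
        }) + "\n")
    return decision
```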

The AI agent, the word agent comes from the word agency, right? So it's demonstrating its agency and making a decision. Does it document it? That would probably be tedious too, because there are so many implicit decisions. What happens when you click the button and the internet's out and it does something you don't like? How do you fix it? I actually think we are entering this new world where how we express to an AI agent what we want

is always going to be an incomplete specification. And that's why agents are useful: they can fill in the gaps with some decent amount of reasoning. Then there's how you actually tune these over time. Imagine building an app with an AI agent as your software engineering companion. There's an infinitely long tail, well, infinite is probably exaggerating a bit, but there's a fairly long tail of functionality that I guarantee you is not specified.

How you actually tune that, and this is what I mean about creating a programming system, I don't think we know what that system is yet. And similarly, I actually think for every single agentic domain, whether it's customer service or legal or software engineering, that's essentially what the company building those agents is building: the system through which

you express the behaviors you want, esoteric and small as they might be. Anyway, I think that's a really exciting area, just because I think that's where the magic, where the product insights, will be in this space. How do you encounter those moments? It's kind of built into the UX, and the answer can't just be prompt better.

Yeah, I know. It's impossible. The prompt would be too long. Imagine getting a PRD that literally specified the behavior of everything that is represented by code. The answer would just be code. So this is my point. Prompts are great, but a prompt is not actually a complete specification for anything. It never can be. And I think that's

how you do interactivity, the human-in-the-loop thing, when and how you do it. And that's why I really believe in domain-specific agents, because answering that in the abstract is an interesting intellectual exercise, but talking about agents in the abstract leaves me

actively disinterested, because I don't think it actually means anything. All it means, at least in a reductive way, is software making decisions. But in the context of software engineering, it does make sense, because what is the process? First you specify what you want in a product, then you use it, then you give feedback. You can imagine building a product that actually facilitated that closed-loop system. And then how is that represented, that complete specification of both

what you knew you wanted and what you discovered through usage? The union of all of that is what you care about, and the rest is left to the AI. In the legal context, I'm certain there's a way to know when the AI should ask questions and when it shouldn't, and how you actually intervene when it's wrong. And certainly in the customer service case, it's very clear: how our customers review every conversation,

how we help them find the conversations they should review when they're having millions, so they can find the few that are interesting, and how, when something is wrong in one of those conversations, they can give feedback so it's fixed the next time, in a way where we know the context of why the agent made that decision. But it's not up to us what's right, right? It's up to our customers.

So that's why I actually think, for right now, when you think about building an agent in a domain, how you actually interact with the people who specify its behavior is, to some degree, where a lot of the magic is. Yeah. Stop me if this is a little bit annoying to you.

But I have a bit of trouble squaring domain-specific agents with the belief that AGI is real or AGI is coming, because the point is general intelligence. And one way to view the bitter lesson is that we can always make progress by being more domain-specific: take whatever the SOTA is, make progress by being more domain-specific, and then you get wiped out when the next advance happens. Clearly, you don't believe in that. But how do you

personally square those things? Yeah, it's a really heavy question. And I think a lot about AGI given my role at OpenAI, but it's even hard for me to really conceptualize. I love spending time with OpenAI researchers, and people in the community broadly, just talking about the implications, because there are the first-order effects of something that

is super intelligent in some domains, and then there are the second- and third-order effects, which are harder to predict. So first, it seems likely to me that something that is AGI will at first be good in digital domains, because

it's software. So if you think about something like AI discovering a new, say, pharmaceutical therapy, the barrier to that is probably less the discovery than the clinical trial. And AI doesn't necessarily help with the clinical trial, right? That's a process that's

independent of intelligence; it's a physical process. Similarly, if you think about the problem of climate change, or carbon removal, there's probably a lot of that domain that requires great ideas, but whatever great idea you came up with, if you wanted to sequester that much carbon, there's probably a big physical component to it. So it's not really limited by intelligence. I'm sure it could be accelerated somewhat by intelligence.

There was a really interesting conversation with an economist named Tyler Cowen recently. I just watched a video of him, and he was talking about how there are parts of the economy where intelligence is the limited resource, which will take on AI slash AGI really rapidly and will drive incredible productivity gains,

but there are other parts of the economy that aren't like that. And those will interact. It goes back to these complex second-order effects: prices will go up in the domains that can't absorb intelligence rapidly, which will actually slow things down. I don't think it'll be evenly spread. I don't think it'll be as rapidly felt in all parts of the economy as people think. I might be wrong, but I just think AGI can generalize in terms of its ability to

reason about different domains, which I think is what AGI means to most people, but it may not actually generalize in the world, because intelligence is not the limiting factor in a lot of the economy. So going back to your more practical question: why make software at all if AGI is coming?

I'm going to say it that way. Should we learn to code? There are all variations of this. My view is that AI is a tool, and AGI is a tool for humanity. And so, going back to what we were talking about:

is your job as a maker of software to author code in an editor? I would argue no. Just like a generation ago, your job wasn't to punch holes in a punch card. That is not what your job is. Your job is to produce something digital. Whatever the purpose of the software you're making is, your job is to produce that. And so I think that our jobs will change rapidly and meaningfully.

But I think the idea that our job is to type in an editor is an artifact of the tools that we have, not actually what we're hired to do, which is to produce a digital experience, or make firmware for a toaster, or whatever it is we're doing. That's our job. And as a consequence, with things like AGI, I think software engineering will certainly be one of the disciplines most impacted.

And I think if you're in this industry and you define yourself by the tools that you use, like how many characters you can type into Vim every day, that's probably not a long-term stable place to be, because that's something AI can certainly do better than you. Yeah.

But your judgment about what to build and how to build it still applies. And that will always be true. One way to think about it, a little bit reductive, is to look at startups versus larger companies. Companies like Google and Amazon have so many more engineers than a startup, but some startups still win. Why is that? Well, they made better decisions, right? They didn't type faster or produce more code. They did the right thing in the right market at the right time. And similarly,

if you look at some of the great companies, it wasn't just that they had some unique idea. Sometimes that's the reason a company succeeds, but it's often a lot of other things, a lot of other forms of execution. So broadly, the existence of a lot of intelligence will change a lot, and it'll change

our jobs more than almost any other industry. Maybe that's exaggerated, but certainly as much as any other industry. But I don't think it changes why the economy around digital technology exists. And as a consequence, I'm really bullish on the future

of the software industry. I just think that some things that are really expensive today will become almost free. But, I mean, let's be honest, the half-life of technology companies is not particularly long as it is. Yeah. I brought this anecdote up in a recent conversation, but

when I started at Google, we were in one building in Mountain View and then eventually moved into a campus, which was previously the Silicon Graphics campus. That was the first campus Google moved into; I'm pretty sure it still has that campus, and I think they've got a bunch now. SGI was a company that was really, really big, big enough to have a campus, and then went out of business. And it wasn't that old of a company, by the way. It's not like IBM. It was big enough to get a campus and go out of business in my lifetime.

And then at Facebook, we had an office in Palo Alto. I didn't go into the original office; when I joined, it was the second office, this old HP building near Stanford. And then we got big enough to want a campus, and we bought Sun Microsystems' campus. Sun Microsystems famously came out of Stanford, went high-flying, was one of the dot-com darlings, and then was eventually bought for pennies on the dollar by Oracle. Yeah.

And all those companies, in my lifetime, were big enough to go public, have a campus, and then go out of business. So I think a lot will change. I don't mean to say this is going to be easy, or that no one's business model is under threat. But will digital technology remain important? Will entrepreneurs having good judgment about where to apply this technology to create something of economic value still matter? 100%. And I've always used the metaphor:

if you went back to 1980 and described many of the jobs that we have, it would be hard for people to conceptualize. Imagine, I'm a podcaster, what the hell does that mean? Imagine going back to 1776 and describing our economy today to Ben Franklin, let alone the technology industry, just the services economy. It would probably be hard for him to conceptualize who grows the food, because the idea that so few people

in this country are necessary to produce the food for so many people would defy so much of his conception of how food is grown that it would probably take a couple hours of explaining. It's kind of the same thing. We have a view of how this world works right now that's based on the constraints that exist, but there are going to be a lot of other opportunities and other things like that. So

I don't know. Certainly writing code is really valuable right now, and it probably will change rapidly. I think people just need a lot of agility. I always use the metaphor of a bunch of accountants when Microsoft Excel was just invented. Are you going to be the first person who sets down your HP calculator and says, I'm going to learn how to use this tool, because it's just a better way of doing what I'm already doing? Or are you going to be the one

pulling out their slide rule and HP calculator and saying, these kids these days, with their Excel, they don't understand? It's a little bit reductive, but I just feel like probably the best thing all of us can do, not just in the software industry, and it's a

kind of interesting reflection that we're disrupting our own industry as much as anything else with this technology, is to lean into the change. Try the tools. Install the latest coding assistants. When o3-mini comes out, write some code with it. You don't want to be the last accountant to embrace Excel; you might not have your job anymore. Yeah.

We have some personal questions on how you keep up with AI and all the other stuff. But I also want to, and I'll let you get to your question, I just wanted to say that the analogy you made on food was really interesting and resonated with me. I feel like we are kind of in an agrarian economy, a barter economy for intelligence, and now we're industrializing intelligence. That was really an aha moment for me. I just wanted to reflect that.

Yeah. How do you think about the person being replaced by an agent and how agents talk to each other? So even at Sierra today, right, you're building agents that people talk to.

But in the future, you're going to have agents that complain about the order they placed to the customer support agent. Turtles all the way down. Exactly. And, you know, you were the CTO of Facebook. You built Open Graph there. And I think there were a lot of pros, things that were being enabled, and then maybe a lot of cons that came out of that. How do you think about how the agent protocols should be built, thinking about all the implications of privacy, data discoverability, and all that? Yeah, I think it's a little early

for a protocol to emerge. I've read about a few of the attempts, and maybe some of them will catch on. One of the things that's really interesting about large language models is that because they're trained on language, they are very capable of using the interfaces built for us. And so my intuition right now is that because we can make an interface that works for us and also works for the AI, maybe that's good enough. I'm being a little bit hand-wavy here, but

Making a machine protocol for agents that's inaccessible to people, there's some upsides to it, but there's also quite a bit of downside to it as well.

I think it was Andrej Karpathy, but I can't remember; one of the more well-known AI researchers wrote something like, I spend half my day writing English in my software engineering. I have an intuition that agents will speak to agents using language for a while. I don't know if that's true, but there are a lot of reasons why that may be true. And so when

your personal agent speaks to a Sierra agent to help figure out why your Sonos speaker has the flashing orange light, my intuition is it will be in English for a while. And I think there are a lot of benefits to that.
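
As a toy illustration of that intuition, the endpoint, payload shape, and error prompt below are all invented: agent-to-agent traffic can just be plain English over an ordinary JSON API, so any human inspecting the traffic can read the conversation.

```python
# A hedged sketch: two agents interoperating through plain English over a
# hypothetical JSON endpoint. Nothing here is a real Sierra or Sonos API.
import requests

def ask_support_agent(message: str) -> str:
    resp = requests.post(
        "https://support.example.com/agent/messages",  # invented endpoint
        json={"role": "customer-agent", "text": message},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]

reply = ask_support_agent(
    "My Sonos speaker is showing a flashing orange light. What should I try?"
)
print(reply)  # the exchange stays legible to people, not just machines
```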

I do think that we are still in the early days of long-running agents. I don't know if you've tried the deep research agent that just came out. We have one for you; we deep researched you. Oh, that's great. It was interesting because it was probably the first time I really got notified by OpenAI when something was done. And I brought up earlier the interactive parts of it.

That's the area that I'm most interested in right now: most agentic workflows are relatively short-running, and the workflows that are multi-stakeholder, long-running, multi-system, we deal with a lot of those at Sierra. But broadly speaking, I think those are interesting, because I always use the metaphor that prior to the mobile phone, every time you got a notification from some internet service, you got an email.

Not because email was the best way to notify you, but because it was the only way to notify you. You used to get tagged in a photo on Facebook and get an email about it. Then once this was in everyone's pocket, every app had equal access to buzzing your pocket. And now, for most of the apps I use, I don't get email notifications; I just get them directly from the app. I sort of wonder what the form factors will be for agents.

How do you address and reach out to other agents? And then how does the agent bring you, its operator, into the loop at the right time? I certainly think, with something like ChatGPT, that will be one of the major consumer surfaces, so there's a lot of gravity to those services.

But then if I think about domain-specific workflows as well, I think there's just a lot to figure out there. So I'm less focused on the agent-to-agent protocols; I could be wrong, I haven't thought about it a lot, and it's sort of interesting, but how an agent engages with all the people in the loop is actually one of the things I'm most interested to see play out. Yeah, I think to me, the thing that's at the core of it is kind of like RBAC, you know, it's

can this agent access this thing? In the customer support use cases it's maybe less prominent, but in the enterprise it's more interesting. And also language: you can compress the language if a human doesn't have to read it, and save tokens, make things faster, save some energy too. Yeah. Yeah.
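
A minimal sketch of that RBAC idea, with made-up roles and permissions: treat an agent like any other principal and check an explicit permission table before it touches a resource.

```python
# Hypothetical role-based access control for agents. The roles, resource
# names, and actions here are invented for illustration.
AGENT_PERMISSIONS = {
    "support-agent": {"orders:read", "refunds:create"},
    "personal-assistant": {"orders:read"},
}

def authorize(agent_role: str, action: str) -> None:
    """Raise if the agent's role does not grant the requested action."""
    if action not in AGENT_PERMISSIONS.get(agent_role, set()):
        raise PermissionError(f"{agent_role} may not perform {action}")

authorize("support-agent", "refunds:create")          # allowed
# authorize("personal-assistant", "refunds:create")   # would raise
```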

You mentioned being notified about deep research. Is there an "OpenAI deep research has been achieved internally" notification that goes out to everybody, and the board gets summoned, and you get to see it? Can you give any backstory on that process? OpenAI is a mission-driven nonprofit that I think of primarily as a research lab. It's obviously more than that; in some ways, ChatGPT is a culturally defining product.

But at the end of the day, the mission is to ensure that artificial general intelligence benefits all of humanity. So a lot of our board discussions are about research and its implications for humanity, which is primarily safety. Obviously, one cannot pursue AGI without thinking about safety as the primary responsibility of that mission, but it's also access and other things. So

things like deep research we definitely talk about, because it's a big part of what it means to build AGI. But we talk about a lot of different things. Sometimes we hear about things super early; sometimes, if it's far afield from the core of the mission, it's more casual. It's pretty fun to be a part of, because my favorite part of every board discussion is hearing from the researchers about

how they're thinking about the future and the next milestone in creating AGI. Well, lots of milestones. Maybe we'll just start at the beginning. There are very few people that have been in the rooms that you've been in. How do these conversations start? How did you get brought into OpenAI? Obviously, there's a bit of drama that you can go into if you want. Just take us into the room. What happened? What was it like?

Was it a Thursday or Friday when Sam was fired? Yeah. So I heard about it like everyone else, just saw it on social media. I remember where I was walking. And I was totally shocked, and I messaged my co-founder, Clay, and I was like, gosh, I wonder what happened.

And then on Saturday, and I'm trying to protect people's privacy here, I ended up talking to both Adam D'Angelo and Sam Altman and basically getting a synopsis of what was going on. And

my understanding, and you'd have to ask them for their perspective on this, was that both the board and Sam felt some trust in me. And it was a very complicated situation, because the company reacted pretty negatively, understandably, to Sam

being fired. I don't think they really understood what was going on. And so the board was in a situation where they needed to figure out a path forward. They reached out to me, and then I talked to Sam, and I basically ended up being the mediator, for lack of a better word. Not formally that, but fundamentally that.

And as the board was trying to figure out a path forward, we ended up with a lot of discussions about how to reinstate Sam as CEO of the company, but also do a review of what happened so that the board's concerns could be fully adjudicated,

because they obviously did have concerns going into it. So it ended up there. Broadly speaking, a lot of the stakeholders in it knew of me, and I'd like to think I have some integrity. They were trying to find a way out of a very complex situation, so I ended up

mediating that, and I have formed a really great relationship with Sam and Greg through a pretty challenging time for the company. I didn't plan to be on the board; I got pulled in because of the crisis that happened. And I don't think I'll be on the board forever either. I posted when I joined that I was going to do it temporarily. That was like a year ago. I really like to focus on Sierra, but I also really care about

I mean, it's a fantastic mission. It's just an amazing mission. I've maybe been in high-stakes situations like that twice, but obviously not as high-stakes. What principles do you have when you know this involves the biggest egos, the highest stakes possible, the most money, whatever? What principles did you bring going into something like this? Obviously, you have a great reputation, you have a great network. What are your must-dos and what are your must-not-dos?

I'm not sure there's a playbook for these situations; if there were, that would be a lot simpler. Please share. So...

I probably go back to the way I operate in general. One is first-principles thinking. There are crisis playbooks, but there was nothing quite like this, and you really need to understand what's going on and why. I think a lot of moments of crisis are fundamentally human problems. You can strategize about people's incentives and this and that and the other thing, but I think it's really important to understand

all the people involved and what motivates them and why, which is fundamentally an exercise in empathy, actually. Do you really understand why people are doing what they're doing? And then getting good advice. What's interesting about a high-profile crisis is that everyone wants to give you advice, so there's no shortage of it; the good advice is the scarce part. And that really involves judgment: figuring out, based on a first-principles analysis of the situation and your assessment of all the people involved, who has true expertise and good judgment in these situations, so that you can either validate your own intuition, or, if it's an area of,

say, legal expertise that you're not expert in, you want the best in the world to give you advice. And I actually find people often seek out the wrong people for advice. It's really important in those circumstances. Well, I mean, it was super well navigated. I've got one more and then we can move on from this topic. The Microsoft offer was real, right? For Sam and team to move over at one point in that weekend?

I'm not sure. I was sort of in it from one vantage point, which was... Actually, what's interesting is I didn't really have particular skin in the game. So I came at this as...

I still don't own any equity in OpenAI. I was just a meaningful bystander in the process. And the reason I got involved, and I will get to your question, was just because I cared about OpenAI. I had left my job at Salesforce, and by coincidence, the next month, ChatGPT comes out, and I got nerd-sniped like everyone else. I'm like, I want to spend my life on this. This is so amazing.

And I'm not sure I would have started another company if not for OpenAI inspiring the world with ChatGPT. Maybe I would have, I don't know, but it had a very significant impact on all of us, I think.

So the idea that it would dissolve in a weekend just bothered me a lot. I'm very grateful for OpenAI's existence. And my guess is that's probably shared by a lot of the competing research labs, to different degrees. That rising tide lifted all boats. I think it created the proverbial iPhone moment for AI and changed the world.

So there were lots of interests: Microsoft as an investor in OpenAI had a vested interest, Sam and Greg had their interests, the employees had their interests, and there was lots of wheeling and dealing. And you can't A/B test decision-making. So I don't know what would have happened if things had fallen apart. I don't actually know. And you also don't know

what's real, what's not. You'd have to talk to them to know what was really real. Mentioning advisors, I heard it seems like Brian Armstrong was a surprisingly strong advisor during the whole journey. My understanding was that both Brian Armstrong and Ron Conway were really close to Sam through it. And I ended up talking to them, but also tried to talk a lot to the board too.

You obviously have a position on it, but from the outside looking in, I just really wanted to understand: why did this happen? And the process seemed perhaps ham-fisted, to say the least.

But I was trying to remain dispassionate, because one of the principles was: if you want to put Humpty Dumpty back together again, you can't be a single-issue voter, right? It was a pretty sensitive moment. But yeah, I think Brian's one of the great entrepreneurs and was a true friend and ally to Sam through that. He's been through a lot as well. The reason I bring up Microsoft is because, I mean, obviously they're a huge backer. We actually talked to David Luan, who pitched

Satya, I think it was, at the time, on the first billion-dollar investment in OpenAI. The understanding I had was that the best situation for Microsoft was OpenAI staying as-is. Second best was Microsoft

acquihiring Sam and Greg and whoever else. And that was the relationship at the time: a super close, exclusive relationship and all that. I think now things have evolved a little bit, with the evolution of Stargate, and there's some uncertainty or FUD about

the relationship between Microsoft and OpenAI. And I just wanted to bring that up because we're fortunate to have Satya as a subscriber to Latent Space, and we're working on an interview with him, and we're trying to figure out how this has evolved now. How would you characterize the relationship between Microsoft and OpenAI? Microsoft's the most important partner of OpenAI, so we have a really strong,

deep relationship with them on many fronts. I think it's always evolving, just because the scale of this market is evolving, and in particular, the capital requirements for infrastructure are well beyond what anyone would have predicted two years ago, let alone whenever the Microsoft relationship started. What was that?

Six years ago? I actually don't know; I should know off the top of my head. But it was a long time ago, and in the world of AI, a longer time ago. I don't really think there's anything to share. The relationship's evolved because the market's evolved, but the core tenets of the partnership have remained the same. And Microsoft is by far OpenAI's most important partner. Just double-clicking a little bit more: obviously a lot of our listeners

care a lot about the priorities of OpenAI. I've had it phrased to me that OpenAI had sort of five top-level priorities: always have frontier models, always be on the frontier of efficiency as well, be the first in multi-modality, whether it's video generation or real-time voice, anything like that. How would you characterize the top priorities of OpenAI, apart from just the highest-level AGI thing?

I always come back to the highest-level AGI, as you put it. It is a mission-driven organization. A lot of companies talk about their mission, but at OpenAI the mission literally defines everything that we do. And I think it is important to understand that if you're trying to predict where OpenAI is going to go, because

if it doesn't serve the mission, it's very unlikely that it will be a priority for OpenAI. It's a big organization, so occasionally you might have side projects, and you're like, you know what, I'm not sure that's going to really serve the mission as much as we thought; let's not do it anymore. But at the end of the day, people work at OpenAI because they believe in the benefits AGI can have for humanity. Some people are there because they want to build it,

and the actual act of building is incredibly intellectually rewarding. Some people are there because they want to ensure that AGI is safe. I think we have the best AGI safety team in the world. And there are just so many interesting research problems to tackle there as these models become increasingly capable, as they have access to the internet, as they have access to tools. It's just

really interesting stuff. But everyone is there because they're interested in the mission. And as a consequence,

if you look at something like deep research through that lens, it's pretty logical, right? If you're going to think about what it means to create AGI, enabling AI to help further the cause of research is meaningful. You can see why a lot of the AGI labs are working on software engineering and code generation, because that seems pretty useful if you're trying to make AGI, right? A huge part of the work

to do it is code. Similarly, tool use and agents are right down the middle of what you need to do AGI. I don't think there is a top, I mean, sure, there's maybe an operational top-10 list, but it is fundamentally about building AGI and ensuring AGI benefits all of humanity. That's all we exist for. The rest of it is not a distraction necessarily, but that's the only reason the organization exists.

The thing that I think is remarkable: if I had described that mission to the two of you four years ago and asked how you think society would use AI, we'd probably have thought of industrial applications, robots, all these other things. I think ChatGPT has been the most

counterintuitive way to serve that mission, even though it doesn't feel counterintuitive now, because the idea that you can go to chatgpt.com and access the most advanced intelligence in the world, and there's a free tier, is pretty amazing. So actually, one of the neat things is that

ChatGPT famously was a research preview that turned into this industry-defining brand. I think it is one of the more key parts of the mission in a lot of ways, because it is the way many people use this intelligence for their everyday use. It's not limited to the few. It's not limited to a form factor that's inaccessible. So I actually think

it's been really neat to see how much that has led to. There are lots of different contours of the mission of AGI, but benefiting humanity means everyone can use it. And so, to your point on whether cost is important: oh yeah, cost is really important. How can we have all of humanity access AI if it's incredibly expensive and you need the $200 subscription? Which I pay for, because I think o1 pro mode is mind-blowing. But you want both, because you need the advanced research side.

You also want everyone in the world to benefit. So that's the way to think about it. If you're trying to predict where we're going to go, just think: what would I do if I were running a company to build AGI and ensure that it benefits humanity? That's how we prioritize everything. I know we're going to wrap up soon. I would love to ask some personal questions. What have maybe been guiding principles for you in choosing what to do? You were co-CEO of Salesforce, you were CTO of Facebook, and I'm sure you've done a lot more things.

But those were the choices that you made. Do you have frameworks that you use for that? Yeah, let's start there.

I try to remain present and grounded in the moment. Meditation? No, I wish I did it more, but I don't. I really try to focus on impact in what I work on, but also: do I enjoy it? We talked a little bit about what an entrepreneur should work on if they want to start a business, and I was joking around that sometimes the best businesses are passion projects, right?

I definitely take into account both. I want to have an impact on the world, and I also want to enjoy building what I'm building. And I wouldn't work on something that was impactful if I didn't enjoy doing it every day.

And then I try to have some balance in my life. I've got a family, and one of the values of Sierra is competitive intensity, but we also have a value called family. And we always like to say intensity and balance are compatible. You can be a really intense

person. I don't have a lot of hobbies; I basically just work and spend time with my family, but I have balance there. And I do try to have that balance, because if you're proverbially on your deathbed, what do you want? I want to be surrounded by people I love and to be proud of the impact that I had.

I know you also love to make handmade pasta. I'm Italian, so I would love to hear favorite pasta shapes, maybe sauces. Oh, that's good. I don't know where you found that. Was that deep research or whatever? It was deep research. That's a deep cut. Sorry, where is this from? It was from, I forget. The source was linked. I do love to cook.

So I started making pasta when my kids were little because I found getting them involved in the kitchen made them eat their meals better. So like participating in the act of making the food made them appreciate the food more. And so we do a lot of just like spaghetti, linguine, just because...

It's pretty easy to do, and the crank is turning; part of the pasta-making for me was that they could operate the crank while I put the dough through, and it was very interactive. Sauces, I do a bunch. Probably the really simple marinara with really good tomatoes;

it's just a classic, especially if you have really good pasta. But I like them all. That's probably the go-to just because it's easy. So when I saw it come up in the research, I was like, I mean, you have to weigh in as the Italian here. Yeah, I would say, there's one type of spaghetti called alla chitarra. They're kind of

almost square. Those are really good. You do a cherry tomato sauce with oil; you can put 'nduja in there. Yeah. We can do a different podcast on that. He's like the head of the Italian tech mafia. Very, very good restaurants; I highly recommend going to restaurants with him. Yeah. Okay. So then my question would be: how do you keep up on AI? There's so much

going on. Do you have some special news resource that you use that no one else has? No, but most mornings I'll try to read and check out what's going on on social media, any buzz around papers.

The thing I really like: we have a small research team at Sierra, and we'll do sessions on interesting papers. I think that's really nice. Usually it's someone who really went deep on a paper; you bring your lunch and they do a readout. I've found that to be the most rewarding, because I love research, but sometimes some simple concepts are surrounded by a lot of ornate language, like, let's get a few more

Greek letters in there to make it seem like we did something smart. Sometimes just talking it through conceptually, I can grok the so-what more easily. So that's been interesting as well. And then just conversations. When someone says something I'm not familiar with, I've gotten over the feeling-dumb thing. I'm like, I don't know what that is. Explain it to me.

And yes, you can sometimes just find neat techniques, new papers, things like that. It's impossible to keep up, if that's what you want me to say. For sure. I mean, if you're struggling, imagine the rest of us. But you have really privileged and special conversations. What research directions do you think people should pay attention to, just based on the buzz you're hearing internally?

This isn't surprising to you or anyone, but in general, the reasoning models. It's interesting because two years ago, the chain-of-thought reasoning paper was pretty important, and chain of thought has always been a meaningful technique since then. I think it was a Google paper, right, if I'm remembering correctly. Google authors, yeah. And I think

it has always been a way to get more robust results from models. What's really interesting is that the combination of distillation and reasoning is making the relative performance, and performance is an ambiguous word here, basically the latency,

of these reasoning models more reasonable. If you think about, say, GPT-4, which was a huge step change in intelligence, it was quite slow and quite expensive for a long time, so it limited the applications. Once you got to 4o and 4o mini, it opened the door to a lot of different applications, on both cost and latency.

When o1 came out, it was really interesting quality-wise, but quite slow and quite expensive, so that limited the applications. Now I just saw someone post that they distilled one of the DeepSeek models and made it really small, and it's doing these chains of thought so fast.

It's achieving latency numbers, I think, similar to GPT-4 back in the day. And now all of a sudden you're like, wow, this is really interesting. And especially for the applied AI people listening, it's basically a trade-off between price, performance, and quality.

For a long time, the market's been so young that you really had to pick which quadrant you wanted for the use case. The idea that we'll be able to get relatively sophisticated reasoning at o3-mini latency is amazing. And if you haven't tried it, the speed of it makes me use it so much more than o1. With o1, I'd often craft my prompts using 4o and then put them into o1 just because it was so slow; I just didn't want the turnaround time. Yeah.
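
To make that price/performance trade-off concrete, here's a rough sketch using the OpenAI Python SDK. The routing heuristic and the choice of models are assumptions for illustration, not a recommendation: send everyday queries to a fast, cheap model and reserve a reasoning model for the hard ones.

```python
# A hedged sketch of routing between a fast model and a reasoning model.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in env.
from openai import OpenAI

client = OpenAI()

def answer(question: str, needs_reasoning: bool) -> str:
    # Assumption: "o3-mini" for multi-step reasoning, "gpt-4o-mini" for
    # low-latency everyday queries; swap in whatever models you actually use.
    model = "o3-mini" if needs_reasoning else "gpt-4o-mini"
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content
```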

So I'm just really excited about them. I think we're in the early days, in the same way as the rapid change from GPT-3 to 3.5 to 4. With these reasoning models, how we're using inference-time compute, the techniques around it, the use cases for it, it feels like we're in that kind of Cambrian explosion of ideas and possibilities. So I just think it's really exciting. Yeah.

And certainly, if you look at some of the use cases we're talking about, like coding, these are the exact types of domains where these reasoning models excel

and should have better results. And certainly in our domain, there are just some problems that benefit from thinking through more robustly, which we've always done, but these models are coming out of the box with a lot more batteries included. So I'm super excited about them. Any final call to action? Are you hiring, growing the team? More people should use Sierra, obviously. We are growing the team,

and we're hiring software engineers and agent engineers. So send me a note, [email protected]. We're growing like a weed. Our engineering team is exclusively in person in San Francisco, though we do have some forward-deployed engineers in other offices, like London. So. Awesome. Thank you so much for the time, Bret. Thanks for having me.