O3 and the Next Leap in Reasoning with OpenAI’s Eric Mitchell and Brandon McKinzie

2025/5/1

No Priors: Artificial Intelligence | Technology | Startups

People
Brandon McKinzie
Elad Gil
Eric Mitchell
Topics
Eric Mitchell: O3 is OpenAI's latest O-series model. It is smarter than previous models and gives more accurate answers, and, more importantly, it can use a variety of tools to extend its capabilities, such as browsing the web and writing and executing code, which lets it take on more complex tasks, better understand and respond to users' needs, and deliver more effective solutions. Personally, I think tool use is critical to O3's test-time scaling: it lets the model use compute more effectively and get better results. In my own use of O3, the longer the model thinks, the better the result, which is very different from earlier models. Unification of our models also matters: we want users to be able to use the models easily rather than having to choose among many of them, so we will work to make the experience more intuitive and convenient. Going forward, I hope models will better understand their own uncertainty and spend the appropriate amount of time on each answer; if a model already knows the answer, it should just give it, and if it needs time to work something out, it should be able to judge how much time that requires. Models should also be easier for users to control and steer, especially in API use cases that need fast answers; the model should make the right choice given the user's specific situation and requirements, even if that itself takes some thought. Web browsing is an important application of tool use because it helps the model handle tasks that need up-to-date information, and the reinforcement learning objective needs to be tailored to the expected users and their needs. Models have great potential in coding and research and can significantly improve productivity. In the future, the way models and users interact will become more intuitive and natural, for example through voice or more direct interfaces. The way models use tools is remarkably human-like, probably because the data they learn from contains a great deal of human behavior. Doing asynchronous reinforcement learning with tools in large-scale environments requires handling a lot of infrastructure problems, such as how to deal with tool failures gracefully.

Brandon McKinzie: O3 was trained differently from earlier models: it uses reinforcement learning, with the goal of having the model solve harder tasks and spend as much time as it needs to find the answer. Tool use is critical to O3's test-time scaling: it lets the model use compute more effectively and get better results. In my own use of O3, the longer the model thinks, the better the result, which is very different from earlier models. Models should be easier for users to control and steer, especially in API use cases that need fast answers. Tool use markedly improves test-time scaling, especially for visual reasoning tasks, and it improves compute efficiency; for example, writing a simple program to solve a problem is far more efficient than having the model grind through it on its own. Models have great potential in coding and research and can significantly improve productivity. The model is no longer a closed system; it can seek out external information as needed to solve a problem. Sending the same prompt repeatedly helps users understand the distribution of the model's outputs and make better use of it, and sending prompts a bit beyond what you expect the model to handle helps you map the edges of its capabilities and discover where it surprises you.

Transcript

Oh, man.

Hi, listeners, and welcome back to No Priors. Today, I'm speaking with Brandon McKinzie and Eric Mitchell, two of the minds behind OpenAI's O3 model. O3 is the latest in the line of reasoning models from OpenAI, super powerful with the ability to figure out what tools to use and then use them across multi-step tasks. We'll talk about how it was made, what's next, and how to reason about reasoning. Brandon and Eric, welcome to No Priors. Thanks for having us. Yeah, thanks for having us. Do you mind walking us through O3, what's different about it,

what it was in terms of a breakthrough, in terms of like, you know, a focus on reasoning and you're adding memory and other things versus this core foundation model, LLM, and what that is. So O3 is like our most recent model in this O series line of models that are focused on thinking carefully before they respond. And

these models are in sort of some vaguely general sense smarter than like models that don't think before they respond, you know, similarly to humans. It's easier to be, you know, more accurate if you think before you respond. I think the thing that is really exciting about O3

is that not only is it just smarter if you make like an apples to apples comparison to our previous O-series models, you know, it's just better at like giving you correct answers to math problems or factual questions about the world or whatever. This is true and it's great. And we, you know, we'll,

continue to train models that are smarter. But it's also very cool because it uses a lot of tools that enhance its ability to do things that are useful for you. So yeah, like you can train a model that's really smart, but like if it can't browse the web and get up-to-date information, there's just a limitation on how much useful stuff that model can do for you. If the model can't actually write and execute code, there's just a limitation on, you know, the sorts of things that an LLM can do efficiently. Whereas like,

a relatively simple Python program can solve a particular problem very easily. So not only is the model on its own smarter than our previous O-Series models, which is great, but it's also able to use all these tools that further enhance its abilities and whether that's doing

research on something where you want up-to-date information or you want the model to do some data analysis for you, or you want the model to be able to do the data analysis and then kind of review the results and adjust course as it sees fit instead of you having to be so sort of prescriptive about like each step along the way, the model is sort of able to take these like high-level requests, like do some due diligence on this company and

you know, maybe run some reasonable like forecasting models on so and so thing. And then, you know, write a summary for me, like the model will kind of like infer a reasonable set of actions to do on its own. So it gives you kind of like a higher level interface to doing some of these more complicated tasks. That makes sense. So it sounds like basically there's like a few different changes between your core sort of GPT models, where now you have something that takes a pause to think about something. So at inference time,

there's more compute happening, and then also it can do sequential steps, because it can infer what those steps are and then go act on them. How did you build or train this differently from just a core foundation model, or when you all did GPT-3.5 and 4 and all the various models that have come over time,

What is different in terms of how you actually construct one of these? I guess the short answer is reinforcement learning is the biggest one. So yeah, rather than just having to predict the next token on some large pre-training corpus from everywhere, essentially, now we have a more focused goal of the model solving very difficult tasks and taking as long as it needs to figure out the answers to those problems. Something that's like

kind of magical from a user experience for me was, we've in the past for our reasoning models talked a lot about test time scaling. And I think for a lot of problems, you know, without tools, test time scaling might

occasionally work, but at some point the model is just kind of ranting in its internal chain of thought. And especially for like some visual perception ones, it knows that it's not able to see the thing that it needs, and it just kind of loses its mind and goes insane. And I think

tool use is a really important component now to continuing this like test time scaling. And you can feel this when you're talking to O3, at least my impression when I first started using it was the longer it thinks, like I really get the impression that like I'm going to get a better result and you can kind of watch it do really intuitive things. And it's a very different experience, but being able to kind of trust that as you're waiting, like it's worth the wait and you're going to get a better result because of it. And the model's not just off doing some, you know, totally irrelevant thing. That's cool.
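
To make the tool-use picture above a bit more concrete, here is a minimal, hypothetical sketch of the kind of loop being described: the model proposes either a tool call or a final answer, a harness executes the tool, and the observation is fed back so the model can keep reasoning. This is not OpenAI's implementation; the tool names are illustrative and the model is a canned stand-in so the loop runs end to end.

```python
# Hypothetical tool-use loop (illustrative only, not OpenAI's implementation).
# The "model" here is a canned policy standing in for a real reasoning model.
import sys
import subprocess

def run_python(code: str) -> str:
    """Tool: execute a short Python snippet and return its output."""
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True, timeout=30)
    return result.stdout or result.stderr

def web_search(query: str) -> str:
    """Tool: placeholder for a browsing/search backend."""
    return f"[search results for: {query!r}]"

TOOLS = {"python": run_python, "search": web_search}

def propose_next_action(context: list[str]) -> dict:
    """Stand-in for the model: a real system would sample a reasoning model here."""
    if not any(line.startswith("observation:") for line in context):
        return {"tool": "python", "args": "print(sum(range(1, 101)))"}
    return {"answer": context[-1].removeprefix("observation:").strip()}

def solve(task: str, max_steps: int = 8) -> str:
    context = [f"task: {task}"]
    for _ in range(max_steps):
        action = propose_next_action(context)
        if "answer" in action:                                # model decides it is done
            return action["answer"]
        observation = TOOLS[action["tool"]](action["args"])   # run the tool call
        context.append(f"observation: {observation}")         # feed the result back
    return "no answer within the step budget"

print(solve("What is the sum of the integers from 1 to 100?"))
```

"Thinking longer" in this framing just means more iterations of the loop, with more of the budget spent on tool calls rather than on unaided token generation.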

I think in your original post about this too, you all had a graph which basically showed that you looked at how long it thought versus the accuracy of result. And it was a really nice relationship. So clearly, you know, thinking more deeply about something really matters. And it seems like, um,

In the long run, do you think there's just going to be a world where we have sort of a split or bifurcation between models, which are sort of fast, cheap, efficient, get certain basic tasks done? And then there's another model, which you upload a legal M&A folder, and it takes a day to think. And it's slow and expensive, but then it produces output that would take you a team of people.

you know, a month to produce? Or how do you think about the world in terms of how all this is evolving or where it's heading? You know, I think for us, like unification of our models is something that, you know, Sam has talked about publicly that, you know, we have this big crazy model switcher in ChatGPT and there are a lot of choices. And, you know, we have...

A model that might be good at any particular thing, you know, that a user might want to do, but that's not that helpful if it's not easy for the user to figure out, well, which model should I use for that task? And so, yeah, making the models better able, you know, making this experience more intuitive is definitely something that is

valuable and something we're interested in doing. And that applies to this question of like, are we going to have like two models that people pick between or a zillion models that people pick between? Or do we put that decision inside the model? I think everyone is going to try stuff and figure out what works well for like the problems they're interested in and like the users that they have. But yeah, I mean, that

So that question of how do you make that sort of decision be as effective, accurate, intuitive as possible is definitely top of mind. Is there a reason from a research perspective to combine reasoning with pre-training or try to have more control of this? Because if you just think about it from the product perspective of the end consumer dealing with ChatGPT,

you know, we won't get into the naming nonsense here, but they don't care. They just want like the right answer and the amount of intelligence required to get there in as little time as possible, right? The ideal situation is it's like intuitive, that like how long should you have to wait? You should have to wait as long as it takes for the model to like give you a correct answer. And I hope we can get to a place where our models have a more precise understanding of their own level of uncertainty. Yeah.

Because, you know, if they already know the answer, they should just kind of tell you it. And if it takes them a day to actually figure it out, then they should take a day. But you should always have a sense of like, it takes exactly as long as it needs to for that current like model's intelligence and

I feel like we're on the right path for that. Yeah, I wonder if there isn't a bifurcation, though, between like an end user product and a developer product, right? Because there are lots of companies that use, you know, the APIs to all of these different models and then for very specific tasks. And then on some of them, they might even use like,

open source models with really cheap inference with stuff that they control more. I hope you could just kind of tell the model like, hey, this is an API use case. And yeah, you really can't be over there thinking for like 10 minutes, we got to get an answer to the user. It'd be great if the models kind of get to be more steerable like that as well. Yeah, I think it's just a general steerability question. Like at the end of the day, if the model's smart, like you should be able to specify

the context of your problem and the model should do the right thing. There's going to be some limitations, because maybe just figuring out, given your situation, what is the right thing to do might require thinking in and of itself. It's not that you can obviously do this perfectly, but pushing all the right parts of this into the model to make things easier for the user

seems like a very good goal. Can I go back to something else you said? Like, so the first guest we ever had on the podcast was actually Noam Brown. Oh, nice. I've heard of him. You know, two plus years ago. Yes, hi, Noam. It'd be great to get some intuition from you guys for why tool use helps test time scaling work much better. I can give maybe some very concrete cases for the visual reasoning side of things.

There's a lot of cases where, and this goes back to the model being able to estimate its own uncertainty, you'll give it some kind of question about an image and the model will very transparently tell you, like, I don't know, I can't really see the thing you're talking about very well. It almost knows that its vision is not very good. And

But what's kind of magical is when you give it access to a tool, it's like, okay, well, I've got to figure something out. Let's see if I can manipulate the image or crop around here or something like this. And what that means is that it's a much more productive use of tokens as it's doing that. And so your test time scaling slope goes from something like this to something much steeper.
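
As a toy illustration of that kind of image manipulation (the file name, crop box, and tool name here are made up for the example, not the interface O3 actually uses), a crop-and-zoom tool might look like this:

```python
# Toy image tool a visual-reasoning model might call (illustrative only).
# Instead of "squinting" at a detail in its chain of thought, the model can
# request a crop and get back an enlarged view of just that region.
from PIL import Image

def crop_and_zoom(path: str, box: tuple[int, int, int, int], scale: int = 4) -> Image.Image:
    """Crop the region box = (left, upper, right, lower) and enlarge it."""
    image = Image.open(path)
    region = image.crop(box)
    width, height = region.size
    return region.resize((width * scale, height * scale))

# The model might emit a call like:
#   {"tool": "crop_and_zoom", "path": "clock.jpg", "box": [120, 80, 220, 180]}
detail = crop_and_zoom("clock.jpg", (120, 80, 220, 180))
detail.save("clock_detail.jpg")   # the enlarged crop goes back to the model as a new image
```

The same logic applies to the code-execution case discussed next: a few lines of deterministic code are often a cheaper and more verifiable use of the token budget than grinding through the work in the chain of thought.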

We've seen exactly that. The test time scaling slopes for without tool use and with tool use for visual reasoning specifically are very noticeably different. Yeah, I was going to say, for writing code for something, there are a lot of things that an LLM could try to figure out on its own, but would require a lot of...

attempts and self-verification, that you could write a very simple program to do in a verifiable and much faster way. So, say I ask it to do some research on this company and use this type of valuation model to tell me what the valuation should be:

you could have the model try to crank through that and fit those coefficients or whatever in its context, or you could literally just have it write the code to just do it the right way and just know what the actual answer is. And so, yeah, I think part of this is you can just allocate compute a lot more efficiently because you can defer stuff that the model doesn't have comparative advantage to doing to a tool that is really well suited to doing that thing. One of the ways I've been using

some form of O3 a lot is deep research, right? I think that's basically a research analyst AI that you all have built that basically will go out, will look up things on the web, will synthesize information, will chart things for you. It's pretty amazing in terms of this capability set. Did you have to do anything special in terms of any form of specific reinforcement learning specifically for it to be better at that or other things that you built against it? Or how did you think about

the data training for it, the data that was used for training it. I'm just curious how that product, if it all is a branch off of this and how you thought about building that specifically as part of this broader effort. I think when we think about tool use, I think browsing is one of the most natural

places that you think of as a starting point. And it's not always easy. I mean, the initial kind of browsing that we included in GPT-4 a few years back, it was hard to make it work in a way that felt reliable and useful. But in the sort of modern era, these days,

last year, you know, two years ago is ancient history. I think it feels like a natural place to start because it's so widely applicable to so many types of queries; anything that requires up-to-date information, it should help to browse for. And so, in terms of a test bed for, hey, does the way we're doing RL really work, or can we really get the model to learn

longer time horizon, kind of meaningful, extended behaviors, it feels like a natural place to start. In some ways it also is fairly likely to be useful in a relatively short amount of time, so it's like, yeah, let's try that. I mean, you know, in RL, at the end of the day you're defining an objective, and if you have an idea for

who is going to find this most useful, you might want to tailor the objective, you know, to who you expect to be using the thing, what you expect they're going to want, you know, what is their tolerance for

do they want to sit through a 30-minute rollout of deep research? When they ask for a report, do they want a page or five pages or a gazillion pages? So yeah, I mean, you're definitely, you want to tailor things to who you think is going to be using it. I feel like there's a lot of almost white collar behavior work or knowledge work that you all are really capturing through this sort of tooling going forward. And you mentioned software engineering as one potential area.

deep research and sort of analytical jobs is another where there's all sorts of really interesting work to be done that's super helpful in terms of augmenting what people are doing. Are there two or three other areas that you think are the most near-term interesting applications for this, whether OpenAI is doing it or others should do it aside? I'm just sort of curious how you think about the big application areas for this sort of technology. I guess my very biased opinion

One that I'm excited about is coding and also research in general, being able to improve upon the velocity that we can do research at OpenAI and others can do research when they're using our tools. I think our models are getting a lot better very quickly at being actually useful. And it seems like they're kind of reaching some kind of inflection point where

they are useful enough to want to reach out to and use multiple times a day, for me at least, which wasn't the case before. They were always a little bit behind what I wanted them to be, especially when it comes to navigating and using our internal code base, which is

not simple. And it's amazing to see more recent models actually really spending a lot of time trying to understand the questions that we ask them and coming back with things that save me many hours of my own time. People say that's the fastest potential bootstrap, right? In terms of each model subsequently helping to make the next model better, faster, and

cheaper, etc. And so people often argue that that's almost like an inflection point on the exponent towards superintelligence, basically this ability to use

AI to build the next version of AI. Yeah. And there's so many different components of research, too. It's not just sitting off in the ivory tower thinking about things, but there's hardware, there's various components of training and evaluation and stuff like this. And each of these can be turned into some kind of task that can be optimized and iterated over. So there's plenty of room to squeeze out improvements. We talked about browsing the web.

Writing code, arguably the greatest tool of all, right? Especially if you're trying to figure out how to spend your compute, right? More efficient code. Generating images, writing text. There are certainly like trajectories of action I think are not in there yet, right?

Right. Like reliably using a sequence of business software. I'm really excited about the computer use stuff. It kind of drives me crazy in some sense that our models are not already just like on my computer all day watching what I'm doing. And well, I know that can be creepy for some people. And like, I think you should be able to like opt out of that or have that opted out by default. I hate typing also. I wish that I could just kind of like be working on something on my computer. I hit some issue and I'm just like, you know, what am I supposed to do with this? And I can just kind of ask.

I think there's tons of space for being able to improve on how we interact with the models. And this goes back to them being able to use tools in a more intuitive way. I guess using tools closer to how we use them

It's also surprising to me how intuitively our models do use the tools we give them access to. It's like weirdly human-like, but I guess that's not too surprising given the data they've seen before. But yeah. I think a lot of things are weirdly human-like. Like my intuition for, well, why is tool use so impactful for test time scaling? Why is the combination so much better? Take

any role: you can make a decision, when you are trying to make progress against a task, as to, do I get external validation or do I sit and think really hard? Right. And usually one or the other is more efficient. And it's not always just sit in a vacuum and think really hard with what you know. Yeah, absolutely. You can seek out sort of new inputs, like it doesn't have to be this closed system anymore. And I do feel like the closed-system-ness of the models is still sort of a limitation in some ways.

I mean, like, I think it'd be great if the model could control my computer, for sure. But in some sense, there's a reason we don't go hog wild and say, like, oh yes, here's the keys to the kingdom, have at it. There are still, you know, asymmetric costs to, like, the time you can save and the types of errors you can make. And so we're trying to, like, iteratively kind of, you know, deploy these things and try them out and figure out, like,

where are they reliable, you know, and where are they not? Because, yeah, like if you did just let the model control your computer, it could do some cool stuff, like I have no doubt. But, you know, do I trust it to, like, respond to all of the, you know, random emails that Brandon sends me? Actually, maybe for that task it doesn't require that much intelligence. But, you know, that's true. Like,

Do I trust it to do everything I'm doing? Some things. And I'm sure that set of things will be bigger tomorrow than it was yesterday. But yeah, I think part of this is we limit the affordances

And keep it a little bit in the, like, sandbox, just out of caution so that, you know, you don't send some crazy email to your boss or, you know, delete all your texts or delete your hard drive or something. Is there some sort of, like, organizing mental model for, like, the tasks that one can do with...

you know, increasing intelligence, test time scaling and improved tool use, right? Because I look at those and I'm like, okay, well, you have complexity of task and you have time scale. Then you have like the ability to come up with these RL rewards and environments, right? Then you have like usefulness. Maybe you have some, of course, you have some intuition about like diversity and generalization across the different things you can be doing, but

it seems like a very large space and scaling our new gen RL is not, it's just not obvious. To me, it's not obvious how you do it or how you choose the path. Is there some sort of organizing framework that you guys have that you can share? I mean, I don't know if there's like one organizing framework. I think there are a few factors, at least that I think about in the very, very grand scheme of things is like,

how much, like, in order to solve this task, how much uncertainty in the environment do I have to wrestle with? Like, for some things, where it's like a purely factual question, like, who was the first president of the United States, there's zero environment

I need to interact with to reach the answer to this question correctly. I just need to remember the answer and say the answer. You know, if I want you to write some code, you know, that solves some problem, well, now I have to deal with a little bit of

not purely internal model stuff, but also, like, okay, I need to execute the code. And that code execution environment is maybe more complicated than my model can memorize internally. So I have to do a little bit of writing code and then executing it and making sure it does what I thought it did and then testing it and then giving it to the user. And the amount of that sort of stuff outside the model that you have to deal with grows; you can't just recall the answer and give it to the user. You have to, like,

test something and, you know, run an experiment in the world and then wait for the results of that experiment. Like, the more you have to do that, the more uncertain the results of those experiments. Like, in some sense, that's, like, one of the core, like,

attributes of what makes the tasks hard. And I think another is how, you know, simulatable they are. Like, stuff that is really bottlenecked by time, like the physical world, is also, you know, just harder than stuff that we can simulate really well. You know, it's not a coincidence that so many people are interested in coding and, you know, coding agents and things.

And that, you know, robotics is hard and it's slower. And, you know, I used to work on robotics and, like, it's frustrating in a lot of ways. I think both of these, like how much of the external environment do you have to deal with, and then how much do you have to wrestle with the unavoidable slowness of the real world, are two dimensions that I sort of think about. It's super interesting because if you look historically at some of these models,

One of the things that I think has continued to be really impressive is the degree to which they're generalizable. And so I think when GitHub Copilot launched, it was on Codex, which was like a specialized code model. And then eventually that just got subsumed into these more general purpose models in terms of what a lot of people are actually using for coding related applications. How do you think about that in the context of things like robotics? So, you know, there's like probably a dozen different robotics foundation model companies now.

Do you think that eventually just merges into the work you're doing in terms of there's just these big general purpose models that can do all sorts of things? Or do you think there's a lot of room for these standalone other types of models over time? I will say the one thing that's always struck me as kind of funny about us doing RL is that we don't yet do it on the most canonical RL task of robotics. And I personally don't see any reason why we couldn't have these be the same model.

I think there are certain challenges with, like, I don't know, do you want your RL model to be able to, like, generate an hour-long movie for you natively as opposed to, like, a tool call? That's where it's probably tricky, where you have more conflict between having, like, everything in the same set of weights, but...

Certainly, the things you see O3 already doing in terms of, you know, exploring a picture and things like that are kind of like early signs of something like an agent exploring an external environment. So I don't think it sounds too far-fetched to me. Yeah, I mean, I think the thing that came up earlier of the intelligence-per-cost thing, you know, the real world is an interesting litmus test because at the end of the day, like

There is a frame rate in the real world you need to live on. And it doesn't matter if you get the right answer after you think for two minutes, like, you know, the ball is coming at you now and you have to catch it. Gravity is not going to wait for you. So that's an extra constraint that we get to at least softly ignore when we're talking about these purely disembodied things. That's kind of interesting, though, because really small brains are very good at that.

You know, so you look at a frog, you start looking at different organisms and you look at sort of relative compute. Yeah. And, you know, very simple systems are very good at that. Ants, you know, like so I think that's kind of a fascinating question in terms of what's the baseline amount of capability that's actually needed for some of these real world tasks that are reasonably responsive. Yeah.

And in nature, it's really tricky with vision too. So our models have some, I think, maybe famous edge cases where they don't do the right thing. I think Eric probably knows where I'm going with this. I don't know if you've ever asked our models to tell you what time it is on a clock.

They really like the time 10:10. So yeah. - It's my favorite time too. So that's usually what I tell people. - It's like over 90% or something like that of all clocks on the internet are at 10:10. And it's because it looks, I guess, like a happy face and it looks nice. But anyways, what I'm getting at is that our visual system was developed by interacting with the external world and having to be good at navigating things, avoiding predators and

Our models have learned vision in a very different type of way. And I think we'll see a lot of really interesting things if we can get them to be kind of closing the loop by reducing their uncertainty by taking actions in the real world, just as opposed to thinking about stuff. Hey, Eric, you brought up the idea of how...

what in the environment can be simulated, right? As an input to, like, how difficult will it be to improve on this? As you get to long-running tasks, let's just take software engineering:

There is a lot of interaction that is not just me committing code continually. It's like, I'm going to talk to other people about the project, in which case you then need to deal with the problem of like, can you reasonably simulate how other people are going to interact with you on the project in an environment? That seems really tricky, right? I'm not saying that, you know, O3 or whatever set of foundation models now doesn't have the intelligence to

respond reasonably. But like, how do you think about that simulation being true to life, true to the real world, as you involve human beings in an environment, in theory? My spicy take on that, I guess, is, well, I don't have a spicy take, but O3 in some sense is already kind of simulating what it'd be like for a single person to do something with, like, a browser or something like that. And, I don't know, train two of them together

so that you have two people interacting with each other. And there's no reason you can't scale this up so that models are trained to be really good at cooperating with each other. I mean, there's a lot of already existing literature on multi-agent RL. And yeah, if you want the model to be good at something like collaborating with a bunch of people, maybe a not too bad starting point is making it good with collaborating with other models. Man, someone should do that. Yeah, yeah. We should really start thinking about that, Eric.
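
For what it's worth, the simplest version of that idea can be sketched without any RL at all: two model instances with different roles taking turns on a shared transcript. The model name and prompts below are placeholders, the call assumes the OpenAI Python SDK's chat-completions interface, and a real multi-agent training setup would add rewards and weight updates that this toy omits.

```python
# Toy sketch of two model instances collaborating on a shared task
# (illustration only; no rewards or training, just the interaction loop).
from openai import OpenAI

client = OpenAI()          # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"      # placeholder model name

ROLES = [
    "You are the implementer. Propose or refine the next concrete step.",
    "You are the reviewer. Point out problems with the latest step and suggest fixes.",
]

def agent_turn(system_prompt: str, transcript: list[str]) -> str:
    """One agent reads the shared transcript and contributes a turn."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "\n\n".join(transcript)},
        ],
    )
    return response.choices[0].message.content

transcript = ["Task: plan how to add request caching to a small web service."]
for turn in range(4):                              # implementer and reviewer alternate
    reply = agent_turn(ROLES[turn % 2], transcript)
    transcript.append(f"[agent {turn % 2}] {reply}")

print("\n\n".join(transcript))
```

Turning a loop like this into training would mean scoring the joint outcome and updating both agents on it, which is where the multi-agent RL literature mentioned here comes in.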

I think it is. I think it's a little bit spicy because, yes, the work is going on. It is interesting to hear you think that is a useful direction. I think lots of people would still like to believe, not me, like, my comment was extra good on this pull request or whatever it is. Right.

Okay, I can sympathize with that. Sometimes I see our models training and I'm like, oh, what are you doing? You know, like, you're taking forever to figure this out. And I actually think it would be really fun if you could actually train models in an interactive way. You know, forget about just test time; I think it'd be really neat to train them to do something like that. Be able to intervene when it makes sense. And yeah, just more me being able to tell the model to cut it out, like, in the middle of its chain of thought and

it being able to learn from that on the fly, I think would be great. Yeah, I do think this is like the intersection of these two things, where it's both a point of contact with the external environment that can be very high uncertainty, like humans can be very unpredictable in some cases, and it's sort of limited by the tick of time in the real world. If you want to, you know, deal with actual humans, like, humans have a fixed, you know, clock

cycle, you know, in their head.

So, yeah, I mean, if you want to do this in the literal sense, it's hard. And so, you know, scaling it up and making it work well, it's not obvious how to do this. Yeah, we are a super expensive tool call. You know, if you're a model, you can either ask me, you know, meat bag over here, to help with something, and I'll try to think really slowly. In the meantime, it could have, like, used the browser and read, like, 100 papers on the topic or something like that. So it's, how do you model the trade-off there?

But the human part is important. I mean, I think in any research project, like, my interactions with Brandon are the hardest part of the project. You know, like, writing the code, that's the easy part. Well, and there's some analog from self-driving. Elad is going to say that, you know, hanging out with me every week is the hardest part of doing this podcast. But it's my favorite part. Look at how healthy their relationship is, Eric. We need to learn from this. No, we're honest. It's OK. We've got to work through it. OK.

In self-driving, one of the like classically hard things to do was like predict the human and the child and the dog, like agents in the environment versus like what the environment was. And so I think there's like some analogy to be drawn there. Going back to just like how you progress the O series of models from here, is it?

Is it a reasonable assessment that some people have, that the capabilities of the models are likely to advance in a spikier way, because you're relying to some degree more on the creativity of research teams in making these environments and deciding, you know, how to create these evals, versus, like, we're scaling up on existing data sets

In pre-training. Is that a fair contrast? Spikier, like, what's the plot here? What's the, like, the x-axis and the y? Domain is the x-axis and y is capability? Yes, because you're, like, choosing what...

domains you are really creating this RL loop in? I mean, I think this is a very reasonable hypothesis to hold. I think there is some like counter evidence that I think should, you know, be factored into people's intuitions. Like, you know, Sam tweeted an example of some creative writing from one of our models that I think was

I'm not an expert and I'm not going to say this is like, you know, publishable or like groundbreaking, but I think it probably updated some people's intuitions on like what, you know, you can train a model to do really well. And so,

I think there are some structural reasons why you'll have some spikiness, just because, like, as an organization you have to decide, hey, we're going to prioritize, you know, X, Y, Z stuff. And as the models get better, the surface area of stuff you could do with them grows faster than you can potentially, like, say, hey, this is the niche, you know, we're going to carve out, we're going to try to do this really well. So there, I think, there's some reason for spikiness, but I think some people will

probably go too far with this, saying, oh yes, these models will only be really good at math and code, and, like, everything else, you can't get better at. And I think that is probably not the right intuition to have. Yeah. And I think,

Probably all major AI labs right now have some partitioning between let's just define a bunch of data distributions we want our models to be good at and then just throw data at them. And then another set of people in those same companies are probably thinking about how can you lift all boats at once with some algorithmic change. And I think, yeah, we definitely have both of those types of efforts at OpenAI and we

I think, especially on the data side, like, there are going to naturally be things that we have a lot more data of than others. But ideally, yeah, we have plenty of efforts that will not be so reliant on the exact subset of data we did RL on, and it'll generalize better. I get pitched every week, and I bet Elad does too, a company that wants to generate data for the labs in some way.

And, or it's, you know, access to human experts or whatever it is, but like, you know, there's infinite variations of this. If you could wave a magic wand and have like a perfect set of data, like what would it be that you know would advance model quality today? This is a dodge, but like uncontaminated evals, always super valuable. And that's data. And I mean, yeah, like you want, you know,

data to train on, and that's of course valuable for making the model better. But I think it is often neglected how important it also is to have high quality data, which is a different definition of high quality when it comes to an eval. But yeah, the eval side is often just as important, because

you need to measure stuff. And like, as you know from, you know, trying to hire people or whatever, evaluating the capabilities of a general, capable agent is really hard to do in a rigorous way. So yeah, I think evals are a little underappreciated. But it's true. Evals are, I mean, it's true,

especially with some of our recent models where we've kind of run out of reliable evals to track because they kind of just solved a few of those. But on the training side, I think it's always valuable to have training data that is kind of at the next frontier of model capabilities. I mean, I think a lot of the things that O3 and O4 Mini can already do, those types of tasks, like basic tool use, we probably aren't.

you know, super in need of new data like that. But I think it'd be hard to say no to a data set that's like a bunch of multi-turn user interactions and some code base that's like a million lines of code that, you know,

is like a two-week research task of adding some new feature to it that requires multiple pull requests. I mean, like something that was super high quality and has a ton of supervision signals for us to learn from. Yeah, I think that would be awesome to have; you know, I definitely wouldn't turn that down. You play with the models all the time, I assume a lot more than average humans do. What do you do with reasoning models that you think other people don't do enough of yet? Send the same prompt many, many, many times to the model

and get an intuition for the distribution of responses you can get. I have seen, it drives me absolutely mad when people do these comparisons on Twitter or wherever, and they're like, oh, I put the same prompt into blah, blah, and blah, blah, and this one was so much better. It's like, dude, you're like, I mean, it was something we talked about a bit,

when we were launching is like, yeah, O3 can do really cool things. Like when it chains together a lot of tool calls and then like sometimes for the same prompt, it won't have that, you know, moment of magic or it will, you know, just take a little, it'll do a little less work for you. And so, yeah, the like the peak performance is really impressive, but there is a distribution of behavior. And I think people often don't appreciate that there is this distribution of outcomes when you put the same prompt in and getting intuition about that is useful.

So as an end user, I do this and I also have a feature request for your friends in the product org. I'll ask Oliver or something, but it's just I –

want a button where, like, assuming my rate limits or whatever support it, I want to run the prompt automatically, like, 100 times every time, even if it's really expensive, and then I want the model to rank them and just give me the top one or two. Interesting. And just let it be expensive. Or a synthesis across it, right? You could also synthesize the output and just see if there's some, although maybe you're then reverting to the mean in some sense relative to that distribution or something, but it seems kind of interesting. Yeah.
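
A rough client-side version of that button, for anyone who wants it today: sample the same prompt N times, then ask the model to pick the strongest output. The model name, N, and the ranking rubric below are placeholders; this assumes the OpenAI Python SDK's chat-completions interface and is just a sketch of the workflow, not a product feature.

```python
# Rough sketch of "run the same prompt N times, then rank and return the best"
# (placeholders throughout; expensive by design).
from openai import OpenAI

client = OpenAI()          # assumes OPENAI_API_KEY is set in the environment
MODEL = "o3"               # placeholder; any chat model works for the sketch

def sample(prompt: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def best_of_n(prompt: str, n: int = 5) -> str:
    """Draw n independent samples, then ask the model which one is strongest."""
    candidates = [sample(prompt) for _ in range(n)]
    numbered = "\n\n".join(f"Candidate {i + 1}:\n{c}" for i, c in enumerate(candidates))
    verdict = sample(
        f"Here are {n} candidate answers to the prompt below. "
        f"Reply with only the number of the best candidate.\n\n"
        f"Prompt: {prompt}\n\n{numbered}"
    )
    digits = "".join(ch for ch in verdict if ch.isdigit()) or "1"
    index = min(max(int(digits), 1), n) - 1           # tolerate a chatty verdict
    return candidates[index]

print(best_of_n("Outline a migration plan from REST to gRPC for an internal service."))
```

Running it a few times also gives exactly the intuition Brandon describes: you see the spread between the model's best output and its typical one.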

Maybe there's a good infrastructure reason you guys aren't giving us that button. Well, it's expensive, but I think it's a great suggestion. Yeah, I think it's a great suggestion. How much would you pay for that? A lot, but I'm a price-insensitive user of AI. I see. Perfect. Those are our favorites. Maybe there are many of us. You should have a Sarah tier as one of your tiers. Exactly. Exactly. I really like...

sending prompts to our models that are kind of at the edge of what I expect them to be able to do, just kind of for funsies. A lot of times before I'm about to do some programming tasks,

I will just kind of ask the model to go see if it can figure it out, a lot of times with, like, no hope of it being able to do it. And indeed, sometimes it comes back and I'm like a disappointed father, but other times it does it. And it's amazing. And it saves me, like, tons of time. So I kind of use our models almost like a background queue of work, where I'll just shoot off tasks to them. And sometimes those will stick and sometimes they won't, but in either case, it's always a good outcome if something good happens.

That's cool. Yeah, I do that just to feel better about myself when it doesn't work. I'm still providing value. When it works, I feel even worse about myself. So it's very hit or miss. Yeah. There are some differences in terms of how some of these models are trained or RL'd or, you know, effectively produced. What are some of the differences in terms of process in terms of how you approach it?

that series of models versus other things that have been done at OpenAI in the past? The tools stuff was, it was quite the experience getting it working in a large-scale setting. So you can imagine, if you're doing async RL with a bunch of tools, you're just adding more and more failure points to your infrastructure. And what you do when things inevitably fail is a pretty interesting engineering problem, but also an RL, like an ML, problem too. Because, you know, if your, I don't know, Python tool goes down in the middle of the run, what do you do? Do you stop the run? Probably not. That's probably not the most sane thing to do with that much compute. So the question is, how do you handle that gracefully and not hurt the capabilities of the model as an unintended consequence? So there's been a lot of learnings like that of how you deal with huge infrastructure that's asynchronous for RL. RL is hard.
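
One hypothetical shape for the graceful-failure handling Brandon describes, at the level of a single rollout: retry transient failures with backoff, and if the tool is really down, hand the model a structured error observation and flag the step so a trainer could mask or down-weight it rather than letting an infrastructure outage look like a model mistake. This is an illustration of the problem, not OpenAI's infrastructure.

```python
# Hypothetical sketch: wrap a tool call so transient failures are retried and hard
# failures become structured observations a trainer can mask, instead of crashing
# the rollout or silently teaching the model that the tool "never works".
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolResult:
    output: str
    ok: bool          # False means the tool failed; a trainer could mask this step
    attempts: int

def call_tool_gracefully(tool: Callable[[str], str], args: str,
                         retries: int = 3, backoff_s: float = 1.0) -> ToolResult:
    for attempt in range(1, retries + 1):
        try:
            return ToolResult(output=tool(args), ok=True, attempts=attempt)
        except Exception as err:                 # backend down, timeout, etc.
            if attempt == retries:
                # Surface a structured error to the model instead of killing the run.
                return ToolResult(output=f"[tool error: {err}]", ok=False, attempts=attempt)
            time.sleep(backoff_s * attempt)      # simple backoff before retrying

# Example with a flaky stand-in tool that fails twice, then recovers:
calls = {"count": 0}
def flaky_python_tool(code: str) -> str:
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("execution backend unavailable")
    return "42\n"

print(call_tool_gracefully(flaky_python_tool, "print(6 * 7)"))
# ToolResult(output='42\n', ok=True, attempts=3)
```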

This has been great, guys. Thank you. Yeah, thanks so much for coming on. Yeah, thanks. It was fun. Thanks for having us.

Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.