
National Security Strategy and AI Evals on the Eve of Superintelligence with Dan Hendrycks

2025/3/5

No Priors: Artificial Intelligence | Technology | Startups

People
Dan Hendrycks
Topics
I have worked on AI safety research for a long time because I believe AI will be the most important technology of this century. We must make sure AI is steered in a productive direction and that its potential risks are managed effectively, especially the tail risks that are systematically underestimated. Large tech companies play a limited role in AI safety; they mainly focus on basic safeguards, such as refusing queries related to harmful activities like making viruses. AI safety, however, is a much broader problem spanning technology, geopolitics, and economic impacts. AI is closely tied to national security. Although its power is still limited today, it has already shown potential threats in areas such as cybersecurity and biosecurity. In the future, AI could be used to develop more advanced weapons, such as drones and biological weapons, and could profoundly shape strategic competition between nations. To address these challenges, we propose a deterrence mechanism called Mutual Assured AI Malfunction (MAIM), analogous to nuclear deterrence. MAIM aims to deter any party from using AI to seek an overwhelming advantage by having states hold each other at risk, thereby avoiding a superweapon race that could lead to global conflict. On policy, I recommend that governments strengthen monitoring of other countries' AI projects and prepare for potential cyberattacks, while tightening export controls on AI chips to keep them out of the hands of rogue actors. In addition, we need to improve AI evaluation methods. Most evaluations today focus on closed-ended questions, such as testing mathematical ability; going forward, we need more ways to evaluate AI on open-ended tasks, such as its ability to automate a variety of digital tasks.


Chapters
Dan Hendrycks discusses his journey into AI safety, emphasizing the importance of addressing AI's potential risks and the role of companies in implementing basic safety measures.
  • Dan Hendrycks is the director of the Center for AI Safety and an advisor to xAI and Scale AI.
  • He highlights the lack of safety efforts in large AI labs and their focus on basic anti-terrorism safeguards.
  • Geopolitical factors significantly influence AI development and competition, particularly concerning China and Russia.

Transcript


Hi, listeners, and welcome back to No Priors. Today I'm with Dan Hendrycks, AI researcher and director of the Center for AI Safety.

He's published papers and widely used evals, such as MMLU and most recently, Humanity's Last Exam. He's also published Superintelligence Strategy, alongside authors including former Google CEO Eric Schmidt and Scale founder Alex Wang. We talk about AI safety and geopolitical implications, analogies to nuclear, compute security, and the state of evals. Dan, thanks for doing this. Glad to be here. How'd you end up working on AI safety? Yeah.

AI was pretty clearly going to be a big deal if one just thought it through to its conclusion. So early on, it seemed like other people were ignoring it because it was weird, or not that pleasant to think about. It's hard to wrap your head around, but it seemed like the most important thing during this century. So I thought that would be a good place to devote my career toward, and that's why I started on it early on. And then, since it'd be such a big deal,

we need to make sure that we can think about it properly, channel it in a productive direction, and take care of certain tail risks, which are generally systematically underaddressed. So that's why I got into it: it's a big deal, and people weren't really doing much about it at the time. And what do you think of as the center's role versus safety efforts within the large labs? Yeah.

Well, there aren't that many safety efforts in the labs even now. I mean, I think the labs can just focus on doing some very basic measures to refuse queries related to, like, "help me make a virus" and things like that. But I don't think labs have an

extremely large role in safety overall, or in making this go well. They're kind of predetermined to race; they can't really choose not to, unless they would no longer be a relevant company in the arena. I think they can reduce terrorism risks or some accidents, but beyond that, I don't think they can dramatically change the outcomes in too substantial of a way,

because a lot of this is geopolitically determined. Even if companies decide to act very differently, there's the prospect of competing with China, or maybe Russia will become relevant later. And as that happens, this constrains their behavior substantially.

I've been interested in tackling AI at multiple levels. There's things companies can do to have some very basic anti-terrorism safeguards, which are pretty easy to implement. There's also the economic effects that will need to be managed well, and companies can't really change

how that goes either. It's going to cause mass disruptions to labor and automate a lot of digital labor. If they, you know, tinker with the design choices or add some different refusal data, it doesn't change that fact. Safety — making AI go well and managing the risks — is just a much broader problem. It's got some technical aspects.

But I think that's a small part of it. I don't know that the leaders of the labs would say like we can do nothing about this, but maybe it's also a question of, you know, everybody also has like equity in this equation. Right. Maybe it's also a question of semantics. Like, can you describe how you think of the difference between like alignment and safety as you think about it?

I'm just using safety as a sort of catch-all for dealing with risks. There are other risks: if you never get really intelligent AI systems, that poses some risks in itself. There are other sorts of risks that are not necessarily as technical, like concentration of power. So I view the distinction between alignment and safety as...

alignment being a sort of subset of safety. Obviously, you want the value systems of the AIs to be in keeping with, or compatible with, say, the U.S. public for U.S. AIs, or with you as an individual. But that doesn't make it necessarily safe. If you have an AI that's reliably obedient or aligned to you,

this doesn't make everything work totally well. China can have AIs that are totally aligned with them. The U.S. can have AIs that are totally aligned with them. You still are going to have a strategic competition between the two. They're going to need to integrate it into their militaries, and they're probably going to need to integrate it really quickly. This competition is going to force them to have a high risk tolerance in the process. So even if the AIs are doing their principals' bidding reliably, this doesn't necessarily make

the overall situation perfectly fine. I think it's not just a question of reliability or whether they do what you want. There are other structural pressures that cause this to be riskier, like the geopolitics. At the highest level — a bundle of weights, increasingly capable — why do we care about AI from a national security perspective? What's the most practical way it matters in geopolitics or gets used as a weapon?

I think that AI isn't that powerful currently in many respects, so in many ways it's not actually that relevant for national security currently. This could well change within a year's time. Generally I've been focused on the trajectory that it's on, as opposed to saying that right now it is extremely concerning. That said, there are some areas. For instance, for cyber, I don't think AIs

are that relevant for being able to pull off a devastating cyber attack on the grid by a malicious actor currently. That said, we should look at cyber and be prepared and think about its strategic implications. There are other capabilities like virology. The AIs are getting very good at STEM, PhD-level types of topics, and that includes virology. So I think that they are sort of rounding the corner on being able to

provide expert level capabilities in terms of their knowledge of the literature or even helping in practical wet lab situations. So I do think on the virology aspect, they do have already national security implications, but that's only very recently with the reasoning models. But in many other respects, they're not as relevant. It's more prospective that it could well become the way in which a nation might try and dominate another nation, right?

And it's the backbone for not just war but also economic security: the amount of chips that the U.S. has versus China might be the determinant of which country is the most prosperous and which one falls behind. But this is all prospective. I don't think it's just speculative — it's speculative in the same way that

NVIDIA's valuation is speculative, or the valuations behind AI companies are speculative. It's something that I think a lot of people are expecting, and expecting fairly soon. Yeah, it's quite hard to think about time horizons in AI. We invest in things that I think of as, like, medium-term speculative, but they get pulled in quite quickly. You know, just because you mentioned both cyber and bio — we're investors in companies like NVIDIA,

or Sybil on the defensive cybersecurity side or Chai and Somite on the biotech discovery side or, you know, modeling different systems in biology that will help us with treatments. How do you think about the balance of like competition and benefits and safety? Because some of these things I think are, you know, we think they're

working effectively in the near term on the positive side as well. Yeah, I mean, I don't get this big trade-off between safety and... I mean, you're just taking care of a few tail risks. For bio, if you want to expose those capabilities, just talk to sales, get the enterprise account. You can have the little refusal thing for virology. But if you just created an account a second ago and you're asking it how to

culture this virus, and here's a picture of your petri dish, what's the next step you should do — if you want to access those capabilities, you can speak to sales. That's basically in xAI's risk management framework. It's just: we're not exposing those expert-level capabilities to people

who we don't know who they are. But if we do, then sure, have them. So I think you can, and likewise with cyber, I think you can just very easily capture the benefits while taking care of some of these pretty avoidable tail risks. But then once you have that, you've basically taken care of malicious use for the models behind your API

And that's about the best that you can do as a company. You could try and influence policy by using your voice or something. But I don't see a substantial amount that they could do. They could do some research for trying to make the models more

controllable, or try to make policymakers more aware of the situation more broadly in terms of where we're going. Because I don't think policymakers have internalized what's happening at all. They still think it's, like —

that they're just selling hype, and that the companies, the employees, don't actually believe that this stuff could — you know, that we could get AGI, so to speak, in the next few years. So I don't know, I don't see really substantial tradeoffs there. I see much more — I think the complications really come about when we're dealing with what's the right stringency in export controls, for instance. That's complicated. And

if you turn the pain dial all the way up for China on export controls, and if AI chips are the currency of economic power in the future, then this increases the probability that they want to invade Taiwan. They already want to; this would give them all the more reason, if AI chips are the main thing and they're not getting any of them, and they're not even getting the latest semiconductor manufacturing tools for making cutting-edge CPUs, let alone GPUs. So those are some other types of complicated

problems that we have to address and think about and calibrate appropriately. But in terms of just mitigating the virology stuff: just speak to sales if you're Genentech or a bio startup, and then you have access to those capabilities — problem solved. What is a way you actually expect that AI gets used as a weapon,

beyond virology and security? Yeah, I wouldn't expect a bioweapon from a state actor; from a non-state actor, that would make a lot more sense. I think cyber makes sense from both state actors and non-state actors.

Then there are drone applications. These could disrupt other things. AI could help with other types of weapons research — help explore exotic EMPs, help create better types of drones — and could substantially help with situational awareness,

so that one might know where all the nuclear submarines are. Some advancement in AI might be able to help with that, and that could disrupt our second-strike capabilities and mutual assured destruction. So those are some geopolitical implications. It could potentially bear on nuclear deterrence, and that's not even a weapon. The example of heightened situational awareness and being able to pinpoint where hardened,

land-based nuclear launchers are, or where nuclear submarines are, is just informational, but it could nonetheless be extremely disruptive or destabilizing. Outside of that, the default conventional AI weapon would be drones, which — I don't know, it makes sense that countries would compete on that. And I think it would be a mistake if the U.S. weren't trying to do more in manufacturing drones. Yeah.

Yeah, I started working recently with an electronic warfare company. I think there's a massive lack of understanding of just the basic concept that, you know, we have autonomous systems, they all have communication systems, our missile systems have targeting communication systems. And from a battlefield awareness and control perspective, a lot of that will be won with radio and radar and related systems.

Right. And so I think that's an area where AI is going to be very relevant, and is already very relevant in Ukraine. Speaking of AI assisting with command and control: I was hearing some story about how on Wall Street you always had to have a human in the loop for each decision. So at a later stage, before they removed that requirement, you just had rows of people just clicking the accept, accept, accept button.

And we're kind of getting to a similar state in some contexts with AI. It wouldn't surprise me if we end up automating some more of that decision making. So this just turns into questions of reliability, and doing some reliability research seems useful. To return to that larger question of what the safety tradeoffs are: I think people are largely thinking that the push for risk management is to do some sort of pausing or something like that.

An issue is you need teeth behind an agreement. If you do it voluntarily, you just make yourself less powerful and you let the worst actors get ahead of you. You could say, well, we'll sign a treaty — and then just assume that the treaty will be followed.

That would be very imprudent. You would actually need some sort of threat of force to back it up, or some verification mechanism. Absent that, if it's entirely voluntary, then this doesn't seem like a useful thing at all. So I think people conflate safety with "what we must do is voluntarily slow down." That just doesn't make much geopolitical sense unless you have some threat of force to back it up or some very strong verification mechanism. Yeah.

But absent that... As a proxy, there's clearly been very little compliance with either treaties or norms around cyber attacks and around corporate espionage, right? Yeah. I mean, corporate espionage, for instance. So that was one strategy — the voluntary pause strategy: we believe that equals safety. And then maybe last year there was that paper, Situational Awareness,

by Leopold Aschenbrenner — and he's sort of a safety person. So his idea was, let's instead try to beat China to superintelligence as much as possible. But that has some weaknesses, because it assumes that corporate espionage will not be a thing at all,

which is very difficult to do. I mean, at some places, you know, 30-plus percent of the employees at these top AI companies are Chinese nationals. This is not feasible. If you're going to get rid of them, they're going to go to China, and then they're probably going to beat you, because those people are extremely important for the U.S.'s success.

So you're going to want to keep them here. But that's going to expose you to some information security types of issues — and that's just too bad. Do you have a point of view on how we should change immigration policy, if at all, given these risks? So I would, of course, claim that the policy on this should be kept totally separate from southern border policy and other, broader policy. But if we're talking about researchers,

if they're very talented, then I think you'd want to make it easier, and I think it's probably too difficult for many of them to stay currently. And I think that discussion should be kept totally separate from southern border policy. Just in terms of broad strokes — things that you think won't work: voluntary compliance and assuming that'll happen, or just a straight race. So we want to be competitive. And I think racing in other sorts of spheres, say drones or AI chips,

seems fine. If you're saying let's race to superintelligence, to try to turn that into a weapon to crush them — and they're not going to do the same, or they're not going to have access to it, or they're not going to prevent that from happening — that seems like quite a tall claim. I mean, if we did have a substantially better AI, they could just co-opt it.

They could just steal it — unless you had really, really strong information security, like you move the AI researchers out to the desert. But then you're reducing your probability of actually beating them, because a lot of your best scientists end up going

back to China. Even then, if there were signs that the U.S. was really pulling ahead and going to be able to get some powerful AI that would enable it to crush China, China would then try to deter the U.S. from doing something like that. They're not going to sit idly by and say, yeah, go ahead, develop your superintelligence or whatever, and then you can boss us around and we'll just accept your dictates until the end of time. So I think that there is kind of a failure of some sort of

second-order reasoning going on there, which is: well, how would China respond to this sort of maneuver, if we're building a trillion-dollar compute cluster in the desert, totally visible from space? Basically, the only plausible read on this is that it's a bid for dominance, or a sort of monopoly on superintelligence. It reminds me of how,

in the nuclear era, there was a brief period where some people were saying, you know, we've got to preemptively destroy — or preventively destroy — the USSR; we've got to nuke them. Even pacifists, or people who were normally pacifists, like Bertrand Russell, were advocating for this. The opportunity window for that maybe didn't ever exist, but there was a prospect of it for some time. I don't think that the opportunity window

really exists here, because of the complex interdependence and the multinational talent dependence in the United States. And I don't think you can have China be totally severed from any awareness of, or any ability to,

gain insight into or imitate what we're doing here. We're clearly nowhere close to that as a real environment right now, right? No, it would take years. It would take years to do well. And given the timelines for some very powerful AI systems,

there might not even be enough time to do that securitization anyway. So, okay — in reaction, you propose, along with some other esteemed authors and friends, Eric Schmidt and Alex Wang, a new deterrence regime: Mutual Assured AI Malfunction.

I think that's the right name. MAIM — a bit of a scary acronym, and also a nod to mutual assured destruction. Can you explain MAIM in plain language? Let's think of what happened in nuclear strategy. Basically, a lot of states deterred each other from doing a first strike because the other side could then retaliate. They had a shared vulnerability.

So the thinking was: we're not going to take this really aggressive action of trying to make a bid to wipe you out, because that will end up causing us to be damaged. And we have a somewhat similar situation later on, when AI is more salient, when it is viewed as pivotal to the future of a nation — when people are on the verge of making a superintelligence, or when they can, say, automate pretty much all AI research.

I think states would try to deter each other from trying to leverage that to develop something like a superweapon that would allow other countries to be crushed, or from using those AIs in some really rapid automated AI research and development loop that could

bootstrap them from their current levels to something that's superintelligent, vastly more capable than any other system out there. I think that later on it becomes sort of stabilizing: China just says, we're going to do something preemptive, like a cyber attack on your data center.

And the U.S. might do that to China. And Russia, coming out of Ukraine, will reassess the situation, get situationally aware, think, oh, what's going on with the U.S. and China? Oh, my goodness, they're so ahead on AI. AI is looking like a big deal. Let's say it's later in the year when a big chunk of software engineering is starting to be impacted by AI. Right.

oh, wow, this is looking pretty relevant. Hey, if you try to use this to crush us, we will prevent that by doing a cyber attack on you. And we will keep tabs on your projects, because it's pretty easy for them to do that espionage. All they need to do is a zero-day on Slack, and then they can know what DeepMind is up to in very high fidelity, and OpenAI and xAI and others.

So it's pretty easy for them to do espionage and sabotage. Right now, they wouldn't be threatening that because it's not at the level of severity. It's not actually that potentially destabilizing. It's still too distant, the capabilities. A lot of decision makers still aren't taking this AI stuff that seriously, relatively speaking. But I think that'll change as it gets more powerful. And then I think that this is how they would end up responding. And this makes us not wind up in a situation where we are doing something extremely destabilizing, like trying to

create some weapon that enables one country to totally wipe out the other, as was proposed by people like Leopold. What are the parallels here that you think make sense to nuclear, and which don't? I think that, more broadly, it's a dual-use technology: it has civilian applications and it has military applications.

Its economic applications are still in some ways limited, and likewise its military applications are still limited, but I think that will keep changing rapidly. Like chemicals: important for the economy, with some military use, but countries kind of coordinated not to go down the chemical weapons route. And bio as well can be used as a weapon and has enormous economic applications.

And likewise with nuclear, too. So I think it has some of those properties. For each of those technologies, countries did eventually coordinate

to make sure that it didn't wind up in the hands of rogue actors like terrorists. There have been a lot of efforts taken to make sure that rogue actors don't get access to it and use it against them because it's in neither of their interests. Basically, like, bioweapons, for instance, and chemical weapons are a poor man's atom bomb, and this is why we have the Chemical Weapons Convention and Bioweapons Convention.

That's where there's some shared interest. So they might be rivals in other senses in the way that the U.S. and the Soviet Union were rivals, but there's still coordination on that because it was incentive compatible. It doesn't benefit them in any way if terrorists have access to these sorts of things. It's just inherently destabilizing. So I think that's an opportunity for coordination. That isn't to say that they have an incentive to both

pause all forms of AI development, but it may mean that they would be deterred from some particular forms of AI development, in particular ones that have a very plausible prospect of enabling one country to get a decisive edge over another and crush them.

So, no superweapon-type stuff — but on more conventional types of warfare, like drones and things like that, I expect that they'll continue to race, and probably not even coordinate on anything like that. And that's just how things will go. That's just, you know, bows and arrows and nuclear:

it just made sense for them to develop those sorts of weapons and threaten each other with them. If you all could propose, magically, the adoption of some policy or action by the current administration, what is the first step here? Is it the, you know, "we will not build a superweapon, and we're going to be watching for other people building them, too"?

As I've sort of been alluding to throughout this whole conversation — what would the companies do? Not that much. I mean, add some basic anti-terrorism safeguards, but I think this is pretty technically easy. This is unlike refusal for other things; refusal robustness for other things is harder, like if you're trying to get at things like crimes and torts.

That's harder because it's a lot messier — it overlaps with typical everyday interaction. I think likewise here, the asks for states are not that challenging either; it's just a matter of them doing it. So one would be: the CIA has a cell that's doing more espionage on other states' AI programs, so that they have a better sense of what's going on and aren't caught by surprise. And then secondly, maybe some part of government — let's say Cybercom, which has a lot of cyber offensive capabilities —

gets some cyber attacks ready to disable data centers in other countries if they look like they're running or creating a destabilizing AI project. That's it for the deterrence part. For nonproliferation of AI chips to rogue actors in particular, I think there'd be some adjustments to export controls — in particular, just knowing where the AI chips are,

reliably. We want to know where the AI chips are for the same reason we want to know where our fissile material is,

and for the same reason that we want Russia to know where its fissile material is. That's just generally a good bit of information to collect. And that can be done with some very basic statecraft: having a licensing regime, where allies just notify you whenever chips are being shipped to a different location and they get a license exemption on that basis. And then you have enforcement officers prioritize doing some basic inspections of AI chips, end-use checks. And so I think all of these are

a few texts away, or a basic document away. And that kind of 80/20 covers a lot of it. Of course, this is always a changing situation. Safety, as I've been trying to reinforce, isn't really that much of a technical problem; it's more of a complex geopolitical problem with technical aspects. Later on, maybe we'll need to do more. Maybe

there might be some new risk sources that we need to take care of and adjust to. But right now, I think that espionage with the CIA, and

sabotage with Cybercom — building up those capabilities, buying those options — seems like that takes care of a lot of the risk. Let's talk about compute security. If we're talking about a hundred thousand networked state-of-the-art chips, you can tell where that is. How do DeepSeek and the recent releases they've had factor into your view of compute security, given export controls have clearly

led to innovation toward highly compute-efficient pre-training that works on chips that China can import, at what one might consider an irrelevant scale — a much smaller scale — today. It's hard for me to see training becoming less efficient directionally, even if people want to scale it up. So does that change your view at all? No, I think it just sort of undermines other types of strategies, like the, you know,

Manhattan Project type of strategy of, let's move people out to the desert and do a big cluster there. And what it shows is that you can't rely as much on restricting another superpower's capabilities — their ability to make models. You can restrict their intent, which is what deterrence does. But I don't think you can reliably or robustly restrict their capabilities.

You can restrict the capabilities of rogue actors, and that's what I would want things like compute security and export controls to facilitate: make sure it doesn't wind up in the hands of Iran or something. China will probably keep getting some fraction of these chips, but we should basically just try to know where they are, and we can tighten things up. You could even coordinate with China

to make sure that the chips aren't winding up in rogue actors' hands. I should also say that, to my understanding, the AI chips weren't actually a substantial priority among leadership at BIS — for some people they were. But as for the enforcement officers:

did any of them go to Singapore to see where 10% of NVIDIA's chips were going? I think they would have very quickly found, oh, they were going to China. So some basic end-use check would have taken care of that. I don't think this means that export controls don't work. We've done nonproliferation of lots of other things, like chemical agents and fissile material. So it can be done if people care.

But even so, I still think if you really tightened the export controls so that China can't get any of those chips at all, and made this one of your biggest priorities, they're just going to steal the weights anyway. I think it'll be too difficult to totally restrict their capabilities, but I think you can restrict their intent through deterrence.

It also seems like either this stuff is powerful or it's not. It seems infeasible to me, given the economic opportunity, that China will say "we don't need the capability." Yeah, yeah. I fail to see a version of the world where the leadership of another great power that believes there is value here says "we don't need that" from an economic value perspective. Yeah, that's right. Yeah — just, for a lot of these,

maybe it would be nicer if everything went, you know, 3x slower, and maybe there'd be fewer mess-ups if there were some magic button that would do that. I don't know whether that's true or not, actually; I don't have a position on that. But given the structural constraints and the competitive pressures between these companies, between these states, a lot of these things are infeasible. A lot of these other gestures that could be useful for risk mitigation — when you consider them, when you

think about the structural realities of it, it just becomes a lot less tractable. That said, there still could be, in some ways, some pausing or halting of development of particular projects that you could potentially lose control of, or that, if controlled, would be very destabilizing because they would enable one country to crush the other. I think people's conceptions about

what risk management looks like are that it's a peacenik thing or something like that — like it's all kumbaya, and we just have to ignore structural realities in operating in this space. I think instead the right approach toward this is that it's sort of like

nuclear strategy: it is an evolving situation. It depends. There are some basic things you can do — you're probably going to need to stockpile nuclear weapons, you're going to need to secure a second strike, you're going to need to keep an eye on what they're doing, and you're going to need to make sure that there isn't proliferation to rogue actors

when the capabilities are extremely hazardous. And this is a continual battle. But it's not clearly going to be an extremely positive thing no matter what, and it's not going to be doomsday no matter what — same as with nuclear strategy. It was obviously risky business; the Cuban Missile Crisis came pretty close to an all-out nuclear war. It depends on what we do.

And I think some basic interventions and some very basic statecraft can take care of a lot of these sorts of risks and make it manageable. I imagine then we're left with more domestic types of problems, like what to do about automation and things like that. But I think maybe we'll be able to get a handle on some of the geopolitics here. I want to change tacks for our last couple of minutes and talk about evals.

It's obviously very related to safety and understanding where we are in terms of capability. Can you just contextualize where you think we are? You came out with the triggeringly named Humanity's Last Exam eval, and then also EnigmaEval. Why are these relevant, and where are we in evals? Yeah, yeah. So for context, I've been making evaluations to try to understand where we're at in AI for, I don't know, about as long as I've been doing research.

So previously I've done some datasets like MMLU and the MATH dataset. Before that, before ChatGPT, there were things like ImageNet-C and other sorts of things. So Humanity's Last Exam was basically an attempt at getting at what would be the

end of the road for the evaluations and benchmarks that are based on exam-like questions, ones that test some sort of academic type of knowledge. So for this, we asked professors and researchers around the world to submit a really challenging question, and then we would add that to the data set. So it's a big collection of

what professors, for instance, would encounter as challenging problems in their research that have a definitive, closed-ended, objective answer. With that, I think the genre of "here's a closed-ended question with a multiple-choice or simple short answer" will roughly be expired when performance on this dataset is near the ceiling.

And when performance is near the ceiling, I think that will basically be an indication that you have something like a superhuman mathematician or a superhuman STEM scientist — at least in the many settings where closed-ended questions are very useful, such as math. But it doesn't get at other things to measure, such as its ability to perform open-ended tasks. That's more agent-type evaluations.
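To make the closed-ended genre concrete, here is a minimal sketch of how such benchmarks are commonly scored by exact-match accuracy; the item format and the `query_model` callable are hypothetical stand-ins, not the actual MMLU or Humanity's Last Exam harness.

```python
# Minimal sketch of scoring the closed-ended genre described above
# (multiple choice or short exact answers). The item format and the
# query_model callable are hypothetical placeholders, not a real harness.

def normalize(text: str) -> str:
    """Lowercase and strip whitespace so trivially different answers still match."""
    return text.strip().lower()

def exact_match_accuracy(items, query_model) -> float:
    """Fraction of items where the model's answer exactly matches the reference."""
    correct = 0
    for item in items:
        prediction = query_model(item["question"])  # model returns a short answer string
        if normalize(prediction) == normalize(item["answer"]):
            correct += 1
    return correct / len(items)

if __name__ == "__main__":
    # Toy items standing in for exam-style benchmark questions.
    items = [
        {"question": "What is 7 * 8?", "answer": "56"},
        {"question": "What is the chemical symbol for gold?", "answer": "Au"},
    ]
    dummy_model = lambda q: "56" if "7 * 8" in q else "Ag"  # stand-in for a real model call
    print(f"accuracy = {exact_match_accuracy(items, dummy_model):.2f}")  # prints 0.50
```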

And I think that will take more time. So we'll try to measure directly its ability to automate various digital tasks — collect various digital tasks, have it work on them for a few hours, see if it successfully completed them. Something like that is coming out soon. We have a test for

closed-ended questions, things that test knowledge in the academy and things like mathematics. But they are still very bad at agent stuff. This could possibly change overnight, but it's still near the floor. I think they're still extremely defective as agents.
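As a rough illustration of the agent-style measurement described here — a batch of digital tasks, a bounded working time, and an automatic pass/fail check — this is a hedged sketch only; `run_agent_on_task` and the task/check fields are illustrative placeholders, not any lab's actual evaluation framework.

```python
# Hedged sketch of an agent-style eval: give the model a batch of digital tasks,
# let each run under a time budget, and record whether an automatic checker judges
# the outcome successful. run_agent_on_task and the task/check fields are
# illustrative placeholders, not a real framework.
import time

def evaluate_agent(tasks, run_agent_on_task, time_budget_s: float = 3600.0) -> float:
    """Return the fraction of tasks the agent completes within the time budget."""
    successes = 0
    for task in tasks:
        start = time.monotonic()
        outcome = run_agent_on_task(task["instructions"], deadline_s=time_budget_s)
        elapsed = time.monotonic() - start
        # task["check"] is a programmatic success test, e.g. "was the expected file produced?"
        if elapsed <= time_budget_s and task["check"](outcome):
            successes += 1
    return successes / len(tasks) if tasks else 0.0
```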

So there will need to be more evaluations for that. But the overall approach is just to try to understand what's going on and what the rate of development is, so that the public can at least understand what's happening.

Because if all the evaluations are saturated, it's difficult to even have a conversation about the state of AI. Nobody really knows exactly where it's at, where it's going, or what the rate of improvement is. Is there anything that qualitatively changes when, let's say, these models and model systems are just better than humans — exceeding human capability — in how we do evals? Does it change our ability to evaluate them? So, I think the intelligence frontier is just so jagged.

Which things they can do and can't do is often surprising. They still can't fold clothes, but they can answer a lot of tough physics problems. Why that is — there are complicated reasons. So it's not all uniform. In some ways, they'll be better than humans. It seems totally plausible that they'll be better than humans at mathematics not too long from now,

but still not able to book a flight. The implication of that is, when you have them being better, they might just be better in some limited ways, and that might have

a kind of limited influence, just in its domain, not necessarily generalizing to other sorts of things. But I do think it's possible that they'll be better at reasoning skills than us. We could still have humans checking, because they can still verify: if an AI mathematician is better than a human, humans can still run the proof through a proof checker and then confirm that it was correct. So in that way, humans can still understand what's going on in some ways.
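As a small illustration of that verification point — a human need not re-derive an AI-produced proof, only run it through a checker — here is a trivial Lean 4 snippet (the theorem name is arbitrary) whose correctness the Lean kernel confirms mechanically:

```lean
-- Even if an AI produced this proof, the Lean kernel verifies it independently:
-- acceptance depends on the checker, not on trusting the author.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```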

But in other ways — like if they're getting better taste in things, if that makes any sense; maybe it doesn't make any philosophical sense — that would be pretty difficult for people to confirm. I think we're on track overall to have AIs that

have really good oracle-like skills. You can ask them things and — wow, it just totally said something insightful, or very non-trivial, or pushed the boundaries of knowledge in some particular way — but they won't necessarily be able to carry out tasks on behalf of people for some while. So I think this is why we don't take the AIs that seriously: because they still can't do

a lot of very trivial stuff. But when they get some of the agent skills, then I don't think there are many barriers to their economic impacts, or to people going from thinking this is kind of an interesting thing to this being the most important thing. I think that's an emergent property of agent skills — the vibes really shift, and it's pretty clear that this is

much bigger than, you know, some prior technology like the App Store or social media; it's in its own category. So. Well, Dan, thanks for doing this. It was a great conversation. Yeah, glad to. Thank you for having me.

Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.