Now that artificial intelligence has tried to break out of a training environment, cheat at chess, and deceive safety evaluators, is it finally time to start worrying about the risk that AI poses to us all? We'll talk about it with Dan Hendrycks, the director of the Center for AI Safety, right after this.
Want better ad performance? Rated number one in targeting on G2, StackAdapt provides access to premium audiences and top-performing ad placements. Visit go.stackadapt.com backslash LinkedIn and launch winning campaigns today.
From LinkedIn News, I'm Jessi Hempel, host of the Hello Monday podcast. Start your week with the Hello Monday podcast. We'll navigate career pivots. We'll learn where happiness fits in. Listen to Hello Monday with me, Jessi Hempel, on the LinkedIn Podcast Network or wherever you get your podcasts.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond. Today, we are joined by Dan Hendrycks. He is the director of the Center for AI Safety, an advisor to Elon Musk's xAI, and also an advisor to Scale AI, here to speak with us about all the risks that AI might pose to us today and into the future. Dan, it's so great to see you. Welcome to the show. Glad to be here. It's an opportune moment to have you on the show.
Because I'm recently doom curious. And I'll explain what that means. So I had long been skeptical of this idea that AI could potentially break out of its training set or out of the computers and start to potentially even harm humans. I still think I'm on that path.
I'm starting to question it. We've recently seen research of AI starting to try to export its weights in scenarios where it thinks it might be rewritten, trying to fool evaluators, and even trying to break a game of chess by rewriting the rules because it's so interested in winning the game. So I'm just going to put this to you right away:
When I see these early moments of AIs trying to deceive evaluators or trying to change the rules they've been given,
Is that the early signs of us having AI as an adversary and not as a friend? The easier way to see that it could be adversarial is just if people maliciously use these AI systems against us. So if we have an adversarial state trying to weaponize it against us, that's an easier way in which it could cause a lot of damage to us. Now, there is a
an additional risk that the AI itself could have an adversarial relation to us and be a threat in itself, not just the threat of humans in the forms of terrorists or in the forms of state actors, but the AIs themselves potentially working against us. I think
Those risks would potentially grow in time. I don't think they're as substantial now compared to just the malicious use sorts of risks. But yeah, I think that as time goes on and as they're more capable, if some small fraction of them do decide to deceive us or try to self-exfiltrate,
or develop an adversarial posture toward us, then that could be extraordinarily dangerous. So it depends. I want to distinguish between what things are particularly concerning in the next year versus somewhat more in the future. And I think in the shorter term, it is more of this malicious use, but that's not to downplay the fact that AIs can be threats later on. Now, from what I understand from your first answer, you're concerned about both...
The way that humans use AI and AI itself sort of taking its own actions, our loss of control of artificial intelligence. So can you just rank sort of where you see the problems in terms of most serious to least serious and what we should be focusing on? That's a really good question. So the risks and their severity sort of depend on time. Some become much more severe later.
So I don't think AI poses a risk of extinction like today. I don't think that they're powerful enough to do that.
Because they can't make PowerPoints yet, right? They don't have agential skills. They can't accomplish tasks that require many hours to complete. And since they lack that, this puts a severe limit on the amount of damage they could do or their ability to operate autonomously. So I think there's a variety of risks, starting with malicious use.
In the shorter term, when AIs get more agential, I'd be concerned about AIs causing cyber attacks on critical infrastructure, possibly as directed by a rogue actor. There'd also be the risk of AIs facilitating the development of bioweapons, in particular pandemic-causing ones, not smaller-scale ones like anthrax.
Those are, I think, the two malicious use risks that we'll need to be getting on top of in the next year or two. At the same time, there's loss of control risks, which I think primarily stem from people in AI companies trying to automate all of AI research and development.
And they can't have humans check in on that process because that would slow them down too much. If you have a human do a week-long review every month of what's been going on, trying to interpret what's happening, this would slow them down substantially. And the competitor that doesn't do that will end up getting ahead. What that would mean is that you'd basically have AI development going very rapidly with
nobody really checking what's going on, or hardly checking. And I think a loss of control in that scenario is more likely. Right. And with the Center for AI Safety, we're going to talk today about risks, but we're also going to talk about solutions, because what you're doing there is basically pointing out the risks and trying to get to solutions to these problems. You told me you were just at the White House yesterday, the day before we're talking. So,
This stuff is something that you're actually working towards mitigating, and I think we're going to get to that in a bit. But first, let's talk a little bit through some of the risks that you see with AI and how serious they actually are. One of them that just jumped out for me right away was bio, creating bioweapons. Let me run you through what I think the scenario could be in my head, and you tell me what I'm missing. With bioweapons, you'd basically be prompting an LLM to help you come up with
new biological agents effectively that you could go unleash against an enemy. And I think wouldn't that be predicated on the AI actually being able to come up with biological discoveries of its own?
Right now, current LLMs don't really extend beyond the training set. Maybe there's an emergent property here or there, but they haven't made any discoveries, and that's sort of been the big knock on them to this point. So I am curious: if you're talking about immediate risks and one of them being, okay, there could be bioweapons that are created with AI, doesn't that suppose that there's going to be something much more advanced than the LLMs that we have today? Because with current LLMs,
To me, it's basically like Google. It's a search for what's on the web, and it can produce what's on the web. But it's not coming up with new compounds on its own. Yeah. So I think that for cyber, that's more in the future. But I think expert-level virology capabilities are much more plausible in the short term. So...
For instance, we have a paper that'll be out maybe in some months. We'll see. But most of the work's been done. And in it, we have Harvard and MIT expert-level virologists sort of taking pictures of themselves in the wet lab,
and asking, what steps should I do next? So can the AI, given this image and given this background context, help guide them step by step through these various wet lab procedures in making viruses and manipulating their properties?
We are finding that with the most recent reasoning models, quite unlike the models from two years ago like the initial GPT-4, the most recent reasoning models are getting around 90th percentile compared to these expert-level virologists in their area of expertise.
This suggests that they have some of these wet lab types of skills. And so if they can guide somebody through it step by step, that could be very dangerous. Now, there is an ideation step, but brainstorming to come up with ways to make viruses more dangerous, I think that's a capability that they've had
for over a year, the brainstorming part, but the implementation part seems to be fairly different. So I think in bio, actually, I would not be surprised if in a few months there's a consensus that they're expert level in many relevant ways and that we need to be doing something about that.
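To make the "around the 90th percentile compared to expert virologists" framing concrete, here is a minimal sketch of how a percentile rank is computed against a pool of expert scores. All the numbers, the function name, and the score scale are invented for illustration; they are not figures from the paper.

# Hypothetical illustration of a percentile-rank comparison: none of these
# numbers come from the actual study; they just show how a model score is
# placed against a pool of expert scores on the same question set.

def percentile_rank(model_score: float, expert_scores: list[float]) -> float:
    """Percentage of experts whose score the model meets or exceeds."""
    if not expert_scores:
        raise ValueError("need at least one expert score")
    beaten = sum(1 for s in expert_scores if s <= model_score)
    return 100.0 * beaten / len(expert_scores)

# Made-up accuracies (fraction of wet-lab troubleshooting questions answered
# correctly) for a panel of experts and one model.
experts = [0.42, 0.48, 0.51, 0.55, 0.58, 0.60, 0.63, 0.66, 0.70, 0.78]
model = 0.71

print(f"model sits at roughly the {percentile_rank(model, experts):.0f}th percentile")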
Wow. That's crazy to me because I would think it would be the opposite, right? That cyber would be the thing that we need to be worried about because these things code so well, not virology. So I just want to ask you – But on that, biology has been such an interesting subject because they just know the literature really well. They know the ins and outs of it. They've got a fantastic memory. And they have so much background experience. I –
It's been, for some reason, their easiest subject historically, biology and virology, in earlier forms of measurements, like if you see how they do on exams. But now we're looking at their practical wet lab skills, and they have those increasingly as well. So what about the evolution of the technology? Because this is all with large language models, right? Reasoning is just something that's
taking place within a large language model, like GPT, which powers ChatGPT. So what is it about the current capabilities that have increased to the point where they're now able to guide somebody through the creation or manipulation of a virus?
That seems to be like a step-change in capability. Well, now they have these image understanding skills. That's something that they didn't use to have. That makes it a lot easier for them to do guidance, to sort of be a guide on one's shoulder saying, now do this, now do that. But I don't know where that came from, that skill. They've just trained on the internet and...
Maybe they read enough papers and saw enough pictures of things inside those papers to have a sense of the protocols and how to troubleshoot appropriately. So since they've read basically every academic paper written, maybe that's the cause of it. But
It's a surprise. I was thinking that this practical, tacit knowledge wouldn't be something that they would pick up on necessarily. It'd make a lot more sense for them to have academic knowledge, knowledge of vocab words and things like that. So I don't know where it came from. It's there. Right. But this is still all stuff that is known to people. It's not like the AI is coming up with
new viruses on its own. Well, so... You can't, like, prompt whatever GPT it is and say, create a new coronavirus. So if you're saying, I'm trying to modify this property of the virus so that it has more transmissibility or a longer stealth period...
then I think it could, with some pretty easy brainstorming, make some suggestions. And then if it can guide you through the intermediate steps, that's something that can make it much more lethal. You don't need breakthroughs for doing some bioterrorism generally. The main limitations for risks generally will be capability and intent. And historically, our bio risks have been fairly low because the
number of people with these capabilities has been very small, maybe a few hundred top virology PhDs, and a lot of them just don't intend to do this sort of thing. However, if these capabilities are out there without any sort of restrictions and extremely accessible,
then your risk surface is blown up by several orders of magnitude. A solution for this, to let people keep access to these expert-level virology capabilities, is that they can just speak to sales or ask for permission to have some of these guardrails taken off. Like if they're a real researcher
at Genentech or what have you, wanting these expert level virology capabilities, then they could just ask and then like, oh, you're a trusted user, sure, here's access to these capabilities. But if somebody just made an account a second ago, then by default they wouldn't have access to it. So for safety, a lot of people think that the way you go about safety is, you know,
slowing down all of AI development or something like that. But I think there are very surgical things you can do where you just have it refuse to talk about topics such as reverse genetics or guide you through practical intermediate steps for some virology methods. Wait, those safeguards don't exist today? No.
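Here is a toy sketch of the "surgical safeguard" idea being described: refuse a narrow set of dual-use topics by default, and lift the restriction only for vetted accounts. This is not any lab's real policy engine; the topic names, the verification flag, and the function are all invented for illustration.

# Toy sketch of default-deny gating on a small set of restricted topics, with
# an exception for vetted researchers. Purely illustrative, not real policy.

RESTRICTED_TOPICS = {"reverse_genetics", "virus_enhancement_protocols"}

def allow_request(topic: str, user_is_verified_researcher: bool) -> bool:
    """Default-deny for restricted topics unless the user has been vetted."""
    if topic not in RESTRICTED_TOPICS:
        return True          # ordinary queries pass through untouched
    return user_is_verified_researcher

print(allow_request("protein_folding_basics", user_is_verified_researcher=False))  # True
print(allow_request("reverse_genetics", user_is_verified_researcher=False))        # False: refuse
print(allow_request("reverse_genetics", user_is_verified_researcher=True))         # True: vetted lab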
At xAI they do. You're an advisor at xAI? Yeah, yeah, yeah. But what were the models that you were testing to try to find out whether they would help with the work of expert virologists? So we tested pretty much all of the leading ones that had these sorts of multimodal capabilities.
And they'll have some sort of safeguards, but there are various holes. And so those are being patched. We've communicated that, hey, there are various issues here. And so I'm hopeful that very quickly some of these vulnerabilities will be patched with it. And then if people want access to those capabilities, then they could possibly be a trusted third-party tester or something like that or work at a biotech company. And then those restrictions could be lifted for those use cases. But
Random users, where we don't know who they are, asking how to make some virus more lethal or something (sorry, some animal-affecting virus): just punt. Have the model refuse on that. That seems fine. Yeah, we do see the benchmarks come in with each model release. And it's like, oh, now it's scored 84th or 90th percentile or 97th percentile on this math test or on this bio test. And for us, it's like, oh, that's just the model doing it. But what you're trying to say is...
And correct me if I'm wrong. If it's getting 90% of the way that an expert virologist might get, then it could take a crafty user a number of prompts effectively to find their way towards that
100%, because if they try it enough times, they might end up getting the bad virus that we're trying not to have the public create. Yeah. So this is what concerns me quite a bit. And I'm being more quiet about this just to...
Well, you're talking about it on the podcast. Yeah, I guess I am talking about it now, but I'm not getting into specifics. It's being taken care of at xAI, and this is sort of in our risk management framework there. And other labs are taking this sort of stuff more seriously, or finding some vulnerabilities and then patching them. So I'm being nonspecific about some of the
vulnerabilities here, but hopefully I can provide more precision once they have that taken care of. Okay, I look forward to reading the paper. You're an advisor to Scale AI. They are a company that
will give a lot of PhD-level information to models in post-training, right? So you've trained up the model on all of the internet, it's pretty good at predicting the next word, and then it needs some domain-specific knowledge. Scale, from my understanding, has PhDs and really smart people writing their knowledge down and then feeding it into the model to make these models smarter.
How does a company like Scale AI approach this? Do they have to say, all right, if you're a virology PhD, we shouldn't be fine-tuning the model with your information? What's going on there and how are you advising them? So I've largely been advising on measuring capabilities and risks in these models. For instance, we did a paper together last year on the weapons-of-mass-destruction-related knowledge that models would have. And
For that, we were finding a lot of the academic knowledge, or knowledge that you would find in the literature. Like, does it really understand the literature quite well? And we were seeing that in biology and for bioweapons-related topics, they did. However, this just tested their knowledge, not their know-how.
So that's why we did the follow-up paper, to see their actual wet-lab know-how skills. Those were lower, but now they're higher, and so now those vulnerabilities need to be patched, and those patches are, I gather, underway.
So we've also worked on other sorts of things together, like in measuring the capabilities of these models, because I think it's important that the public have some sense of how quickly is AI improving? What level is it at currently? So a recent paper we did together was Humanity's Last Exam,
where we put together various professors and postdocs and PhDs from all over the world, and they could join in on the paper if they submit some good questions that stump the AI systems.
And I think this is a fairly difficult test. So it was think of something really difficult that you encountered in your research and try and turn that into a question. And I think each person, each researcher probably has one or two of these sorts of questions. So it's a compilation of that. And I think when there's very high performance on that benchmark, that would be suggestive of –
something that has, say, in the ballpark of superhuman mathematician capabilities. And so I think that would revolutionize the academy quite substantially, because all the theoretical sciences that are so dependent on mathematics would be a lot more automatable. You could just give it the math problem and it could probably crack it, or crack it better than nearly anybody on earth could.
So that's an example of a capability measurement that we're looking at. We excluded from Humanity's Last Exam any virology-related skills; we were not collecting data for that because we didn't want to incentivize the models getting better at that particular skill through this benchmark. And how's the AI doing today on that exam? The very best models are in the ballpark of 10 to 20% overall. So,
It'll take a while for it to get to 80-plus percent. But I think once it is 80-plus percent, that's basically a superhuman mathematician, is one way of thinking of it. But the thing is, they're at 10 to 20% now. And many experts within the AI field, the practitioners, we had Yann on a couple weeks ago talking about how we're getting to the point of diminishing returns with scaling. That the current trajectory of generative AI in particular
is limited because basically the labs are maxing out their ability to increase its capabilities. So I'm curious what you think, whether you think that's right, because you're obviously working with these companies, you're working with xAI, you're working with Scale. If we are getting to this data wall, or some wall, or some moment of diminishing marginal returns on the technology,
Is it possible that all this fear is somewhat misplaced? Because if the AI is not going to get much better than it is right now, at least with the current methods, we may not be a year or two away from AGI. We may not be getting AGI at the end of 2025, like some people are suggesting. And so then maybe we shouldn't be as afraid because, again, the stuff is limited.
Yeah, so if we were trapped at around the capability levels that we're at now, then that would definitely reduce the urgency and, you know, mean one could chill out a bit more and take it easy. But I'm not really seeing that. I think maybe what he's referring to is the pre-training paradigm sort of running out of steam. So if you take an AI, train it on a big blob of data,
and have it just sort of predict the next token, that's basically what gave rise to older models like GPT-4. That sort of paradigm does seem like it's running out of steam. It has held for many, many orders of magnitude, but the returns on doing that are lower. That is separate from the new reasoning paradigm
that has emerged in the past year, which is where you train models on math and coding types of questions with reinforcement learning. And that has a very steep slope. And I don't see any signs of that slowing down. That seems to have a faster rate of improvement than the pre-training paradigm, the previous paradigm had.
and there's still a lot of reasoning data left to go through and do reinforcement learning on. So I think we have quite a number of months or potentially years of being able to do that. And so
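As a rough illustration of the reasoning paradigm he describes, here is a heavily simplified sketch of reinforcement learning on problems with verifiable answers. A real setup would sample chains of thought from a large model; here the "policy" is just a single probability of using a careful strategy on toy arithmetic, nudged toward whatever earns reward. Everything in it is an assumption for illustration, not any lab's training code.

# Heavily simplified sketch of RL on verifiable-answer problems. The policy is
# one parameter: the probability of using a careful strategy. Correct answers
# (checked by an exact-match verifier) earn reward; a REINFORCE-style update
# pushes the policy toward the rewarded behavior.
import random

random.seed(0)

def make_problem():
    a, b = random.randint(10, 99), random.randint(10, 99)
    return (a, b), a * b                      # question, verifiable answer

def attempt(problem, careful: bool):
    (a, b), truth = problem
    guess = a * b if careful else a * b + random.choice([-10, 0, 10])
    return 1.0 if guess == truth else 0.0     # reward from the verifier

p_careful = 0.2                               # the single "policy parameter"
lr = 0.05
for step in range(2000):
    careful = random.random() < p_careful
    reward = attempt(make_problem(), careful)
    # Nudge p_careful toward actions that earned above-baseline reward.
    grad = (1.0 if careful else -1.0) * (reward - 0.5)
    p_careful = min(1.0, max(0.0, p_careful + lr * grad))

print(f"probability of the careful strategy after training: {p_careful:.2f}")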
personally, I'm not even thinking too specifically about what AIs will be looking like in a few months. They'll be, I think, quite a bit better at math and coding, but I don't know how much better. So I'm largely just waiting because the rate of improvement is so high and we're so early on in this new paradigm that I don't find it useful to
try and speculate here. I'm just going to wait a little while to see. But I would expect it to be quite a bit better in each of these domains, in these STEM domains. Right. I guess reasoning does make it better at the areas that you're mostly concerned about, doing math, doing science. Yeah, coding. Yeah, that's right. Yeah. Because when it goes, tell me again if I'm wrong, when it goes step by step, it's much better at executing and working on these problems than if it's just
printing answers. Yeah, and there is a possibility, and this is sort of a hope in the field, I don't know whether it will happen, that these reasoning capabilities might also give rise to these agent types of capabilities, where it can do other sorts of things like make a PowerPoint for you, and do things that would require operating over a very long time horizon. Potentially that skill set would fall out of this paradigm, but it's not clear.
There has been a fair amount of generalization from training on coding and mathematics to other sorts of domains, like law, for instance. And maybe if those skills get high enough, maybe it will be able to sort of reason its way through things step by step and act in a more coherent, goal-directed way across longer time spans. I'm going to try to channel Yann here a little bit. I think he would say that this is still going to be constrained by the fact that AI has no real understanding of the real world.
Well, I don't know. That sounds almost like a no-true-Scotsman type of thing. Like, what's real understanding, right? For me, it's the predictive ability. If it can do this stuff, that's what I care about. But if it doesn't satisfy some strict philosophical sense of something, you know, some people might find that compelling, but I don't. I'll give you an example with the video generators. If it really understood physics,
then when you say, "Give me a video of a car driving through a haystack," it will actually be a car driving through a haystack, as opposed to what happened when I gave it that prompt: it was just hay exploding onto the front of a car with perfectly intact hay bales in the background. I think that for a lot of these sorts of queries, at least with images for instance,
We'd see a lot of nonsensical arrangements of things and things that don't make much sense if you look at it more closely. But then as you just scale up the models, then they tend to just kind of get it increasingly. So we might just see the same for –
for images, or excuse me, for video as well, I think. They have some good world-model stuff, like vanishing points being more coherent. If I were drawing or anything like that, I'd probably be lacking an understanding of the physics and geometry of the situation, and making things internally coherent relative to them. So,
I don't know, yeah, they seem pretty compelling and have a lot of the details right, including some of the more structural details. But there will be gaps that one can keep zooming into. I just think that that set will keep decreasing, as was sort of the case with images and text before. I mean, with text back in the day it was the same argument: it doesn't have a real understanding of causality, it's just sort of mixing together words and whatnot. And when it was barely able to construct sentences coherently,
Now it can. Yeah, now it can. So I don't know if it then got a real understanding in the sort of philosophical sense that he's thinking for language, but it was good enough, and that might be the case with video as well.
There were points where I was like, oh, but it is getting the guy sitting on the chair when I say, you know, do a video of a guy sitting on a chair and kicking his legs. And those legs are kicking and they are bending at the joints. So there must be some understanding there. Yeah, in some ways. But if you ask them to do, like, gymnastics, then it'll just have limbs flailing all over. No, the person just disappears into the floor. Yeah.
Okay, like you said at the beginning, ChatGPT isn't going to kill us. Yeah, yeah. Yet. Let's talk about hacking. I do think we glanced over it a little bit before, but we're now going through, I think, the humans-plus-AI problem, right? And hacking to me is one that I think we should definitely focus on. You mentioned that we're still not quite there, but it does seem to me, and I'm just going to go back to the point I made earlier,
You can really code stuff up with these things, and they enable pretty impressive code. Already, you could think that ChatGPT, and not just ChatGPT but all of these GPT models, could produce pretty good phishing emails. If you creatively prompt it, it will give you an email that you can send and try to phish somebody. Or, let's say, you just take an open-source model like DeepSeek, download it, and then run it without safeguards.
So where's the risk with hacking? I know you said it's a little bit further off. Why is it further off and what should people be afraid of or what should people be concerned of? Yeah, yeah. So the risk from it, more of the risk comes from when they're able to autonomously do the hacking themselves. So trying to break into a system, finding an exploit, escalating privileges, causing damage from there.
Things like that. And that requires multiple different steps and these agential skills that I keep referring to, which they currently don't have. So although they could facilitate things like ransomware development and other forms of malware,
For them to autonomously execute and infiltrate systems, that is something that will require the new agential skills. And I don't see – it's very unclear when those arrive. Could be a few months from now.
Could be a year from now; that's a bit more speculative. Maybe it would even take two years. So that's something for us to get prepared for, figure out how we're going to deal with it, try and make safeguards increasingly robust to people trying to maliciously use it in those ways. But yeah, I think much of the risk comes from
being able to take one of these AIs, let's say one of these DeepSeek AIs, let's say it's a DeepSeek agent version and it's able to actually do these cyber attacks. Then you could just run 10,000 of them simultaneously, and then some rogue actor could have them target critical infrastructure, and this causes quite severe damage. For critical infrastructure, this could be, like, having it reduce the detection or the filtering in a water plant or something like that,
then the water supply is ruined. Or you could target thermostats in various homes, because some of the more advanced ones are connected to Wi-Fi, and turn them up and down simultaneously. That can ruin transformers and blow them out, and they take multiple years to replace. Things like that. But they aren't capable of doing that sort of thing currently.
So it's more of an on-the-horizon type of thing. I'm not feeling the urgency with that currently. I'm more concerned about the geopolitics of this, like making sure that
states are aware of what's going on in AI, at least able to follow the news and things like that in some capacity. I think things like that feel somewhat more urgent to me than trying to address cyber risks. There are things to do, though, and I think we should create incentives beforehand, but, you know.
Maybe I'm too much of an optimist for my own good, but when I hear you talk about this, I also get a little bit excited about the capabilities of these programs because, for instance, if AI can enhance the function of a virus, AI can probably create a vaccine and make medical discoveries. If AI can hack into the infrastructure of some country, find exploits and turn the thermostats up and down, then AI could probably do incredible amounts of
very beneficial coding and computer work for humanity. So if we do get to that point,
It seems to me like there's going to be these maybe two poles here, right? One is the potentially scary and destructive stuff that you can mitigate, right, with some of the controls that you talked about, but also amazing opportunity. Yeah. So it's – and the thermostat thing was for messing with the electricity and that causing strain on the power grid and destroying transformers. Just for clarification in case – but yeah, I think you're pointing at that it's dual use.
I'm not saying AI is bad in every single way. It's like other dual use technologies. Bio is a dual use technology. It can be used for bio weapons, it can be used for healthcare.
Nuclear technology is dual use; there are civilian applications for it as well. And chemicals too. And we have managed all of those other ones by selectively trying to limit some particular types of usage, restricting the access of rogue actors to some of these technologies, and making sure there are good safeguards for the civilian applications. And then we can actually capture the benefits. So it's not an all-or-nothing
type of thing with AI. It's what are surgical restrictions one can place so that we can keep capturing the benefits. And so for instance with virology, that's a matter of you add the safeguards and then the researchers who want access to those can speak to sales. That's basically a resolution of that problem provided that you have the models kept behind APIs. And
So now on this dual use part though, there's an offense-defense balance. So for some applications, it can help, it can hurt. And maybe it helps more than it hurts. Maybe it will hurt more than it will help. So in bio, I think that is offense-dominant.
If somebody creates a virus, there's not necessarily a cure that it will immediately find for it. If it would help a rogue actor make a somewhat compelling virus, now that could be enough to cause many millions to die. And it may take months or years to find a cure. There are many viruses for which we have not found cures yet.
And for cyber, in most contexts, there's a balance between offense and defense where if somebody can find a vulnerability with one of these hacking AIs, then they could also use that to patch the vulnerability. There is an exception though where in the context of critical infrastructure,
there the software is not updated rapidly. So even if you identify various vulnerabilities, there will not necessarily be a patch, because the system needs to always be on, or there are interoperability constraints, or the company that made the software is no longer in business, these sorts of things. So our critical infrastructure is a sitting duck, and in that context, cyber is offense-dominant. But in normal contexts, there's roughly
a balance. And for virology, I think that's largely offense-dominant. So before we go to the nation-state element of this, I need to ask you a question about the actual research houses themselves. Every research house has their concerns about safety, from OpenAI to xAI and everything in the middle. Maybe not DeepSeek; we'll get to DeepSeek. Yet,
they're the ones that are building this technology. And I find it a little strange that you have companies that are saying, it's weird, we have to build this and advance this technology so we can keep people safe. I never really understood that message. Yeah, I don't know if it's to say that we need to keep people safe. I think it's more that the main organizations that have power in the world now are largely companies.
And so if one's trying to influence the outcomes, one basically needs to be a company is how many of them will reason. They'll think that, yeah, you could be in civil society or you could protest, but this will not determine the course of events as much.
So many of them are sort of buying themselves the option to hopefully influence things in a more positive direction, but most of the effort will be to stay competitive and stay in this arena. So I think over 90% of the intellectual energy that they spend is actually on how can we afford the 10x larger supercomputer. And that means being very competitive, speeding this up, and...
making safety some priority, but not necessarily a substantial one. So I do think there is sort of an interesting contradiction, or something that looks like a contradiction, there. But think back to nuclear weapons: nobody wants nuclear weapons. If there were zero on Earth,
fantastic, that would be a nice thing to have, if that were a stable state. But it's not a stable state. One actor may then develop nuclear weapons and they could destroy the other. So this encourages states to do an arms race, and it makes everybody collectively less secure. But that's just how the game theory ends up working. So you get a classic, what's called a security dilemma. Everybody is worse off collectively.
And even if you took it seriously, you see, nuclear technology is dual use and potentially catastrophic and we need to be very risk conscious about it. You can agree with all those things, but you still might want nuclear weapons because other parties will also have nuclear weapons. And unilateral disarmament in many cases just didn't make game-theoretic sense, in the same way that an individual company pausing
their development while others race ahead doesn't make game-theoretic sense. So I think this just points to the fact that the game theory is kind of confusing, and so you're getting some things that seem like contradictions, but if you use a nuclear analogy you go, yeah, I suppose that makes sense; it's just kind of an ugly reality to internalize. Doesn't that discount the fact that these companies, if they want to influence the way things are going,
they are going to be... It's like you're one and the same. Yes, you're influencing, but without you, this wouldn't be moving as fast as it is. It is interesting, for instance, to think about Elon Musk, right? Obviously, he has you in two days a week to work on safety inside xAI, but he's also putting together, what, a million-GPU data center to build the biggest, baddest LLM ever. I don't understand how that... Well, if he didn't, then he would be having less influence over it. So it's...
It's not something that I would envision everybody would just sort of voluntarily pause. So subject to companies not sort of voluntarily rolling over and dying, then what's the best you can do subject to those constraints? But the competitive pressures are quite intense such that they do end up prioritizing, focusing on competitiveness and other priorities like...
What's the budget for safety research?
It will be generally lower than would be nice to have if this were a less competitive environment. Do you think Elon is more interested in restoring this original vision that he had for OpenAI, making everything open source, making it safe? I would imagine he founded OpenAI with Sam Altman as sort of a beachhead against Google because he was afraid of what Google was going to do with this technology. Yeah.
So I'm curious if you think that xAI is along that mission, or is he more interested in the sort of soft cultural power that comes with having the world's best AI? For instance, you can change the way that it speaks about certain sensitive political issues. It can be anti-woke, which we all know is sort of where Elon stands. So what do you think his
true interest lies in building xAI? Well, I won't position myself as sort of speaking on behalf of them here. We won't position you as their spokesperson, but you are in there a couple times a week. Yeah, so I think that the mission is to understand the universe. And so this means having AIs that are honest and accurate and truthful,
to improve the public's understanding of the world. We will be getting into a very fast-moving, trying situation with AI if it keeps accelerating, and so good decision-making will be very important, and our understanding of the world around us will be very important. So if there are more features that enable truth-seeking and honesty and good forecasts and good judgment and institutional decision-making, those would be great to have.
The hope is that Grok could help enable some of that, so that civilization is steering itself more prudently in this potentially more turbulent period that's upcoming. That's one read on the mission statement. But I think the objective of it is to understand the universe, and there are different sub-objectives that that would give rise to. I think its ability to...
help culture process events without censorship or political bias one way or the other is a stated objective and I think that would be indispensable in the years going forward. Do you buy that that's what they're doing? Because we also heard the same thing from Elon when it came to buying Twitter, now X. But
I think Community Notes has been quite good. But that was something that was built under Jack Dorsey. I'm not going to take sides here. I'm going to just observe empirically what I've seen. I mean, we know that Substack links have been deprioritized because Substack was seen as a competitor to Twitter. We know that Musk, I think, according to reporting, changed the algorithm to have his
tweets show up more often, and his tweets took a strong stance toward supporting Donald Trump in the election. So, hearing again from Elon, and look, I respect what Elon's done as a businessperson, but hearing again that he has a plan to make a culturally relevant product that's free of censorship and politically unbiased, I don't know if I believe that anymore. So I don't know about some of the specific things, such as the, you know,
weighting thing or something like that, profile things, for instance. I think that overall, in terms of cultural influence and people being more disagreeable and doing less self-censoring, it has been successful. I think that was the main objective of it, and I think that X had a large role to play there. So I don't know. I think,
I think in terms of shaping discourse norms in the U.S., that seems to have been successful, in my view. Yeah. I'm not saying pre-Elon Twitter didn't censor, which is probably the wrong word because that's usually from the government, didn't sort of shape the definition of speech to its own liking. It obviously had a progressive approach and moderated speech along those lines. I just don't think Elon is not
using his own influence when it comes to how he runs X. But you and I could speak about this forever. Yeah, this isn't even my wheelhouse as much. But yeah, I mean, it's sort of like... Since, you know...
You brought it up. Oh, okay. All right. Sure. I mean, just on the non-biased and truthful thing, it's worth addressing. If there are ways in which it's extremely biased one way or the other, that's useful to know. This is a thing that is
continually being improved, at least for xAI's Grok. And I think that the whole product offering could get quite a bit better at this. But I'm not speaking as a representative there or anything like that; I guess right now, in my personal capacity, I think that there are things to improve on for all these models in terms of their bias.
All right. We agree on that front. You hinted at it previously, but talk a little bit about how companies, basically how you don't think it's a good idea for there to be an arms race here.
And certainly there is one between the U.S. and China. We know the U.S. has put export controls on China. China has in some ways gotten around them through like very creative procurement processes that go through Singapore. Right. We can probably say that with a pretty good degree of confidence. Then, of course, we see the release of DeepSeek and some other applications from China. And everyone's trying to build the better A.I. so that they have the soft power like we spoke about earlier.
to effectively, you know, influence culture across the world, but it's also an offensive and defensive capability, like you're saying. If your country has the ability to manipulate viruses or to do cyber hacks, you become more powerful and you get to, potentially, implant your view of the world on the way that it operates. Yeah.
You have a paper out that's sort of arguing against this arms race. It's called Superintelligence Strategy. It's by you; Eric Schmidt, who we all know, former CEO of Google, and I think he just took over a drone company, so you can tell me a little bit about that; and Alexandr Wang, the current CEO of Scale AI, who's been on this show before.
So talk a little bit about why you don't think it's a good idea for countries to pursue this arms race. You say it might be leading us to mutually assured AI malfunction, playing on mutually assured nuclear destruction; I think that's where you get that from. Yeah. So the strategy has three parts, one of which is competitiveness. But we're
saying that some forms of competition could be destabilizing, and that it may be irrational to pursue them because you couldn't get away with it. So in particular, this making a bid for superintelligence through some automated AI research and development loop
could potentially lead to one state having some capabilities that are vastly beyond another state's. If one state gets to experience a decade of development in a year and the other one is a year behind, then this results in a very substantial difference in the states' capabilities. So this could be quite destabilizing if one state might then start to get an insurmountable lead relative to the other.
So, I think that form of competition would be very dangerous and because there's a risk of loss of control and because it might incentivize states to engage in preventive sabotage or preemptive sabotage to disable these sorts of projects. So, I think states may want to deter each other from pursuing superintelligence through this means.
And this then means that AI competition gets channeled into other sorts of realms, such as in the military realm of having more secure supply chains for robotics, for instance, and for AI chips, having reduced sole source supply chain dependence on Taiwan for making AI chips.
So, states can compete in other dimensions, but them trying to compete to develop superintelligence first, I think that seems like a very risky idea and I would not suggest that because there's too much of a risk of loss of control and there's too much of a risk that one state, if they do control it, uses it to disempower others and affects the balance of power far too much and destabilizes things.
But the strategy overall, think of the Cold War. Before you go on the strategy, my reaction to that is good luck telling that to China. So I think it's totally – so for deterrence, I think if the U.S. were pulling ahead –
both Russia and China may have a substantial interest in saying, hey, cut this out, pulling ahead to develop superintelligence, which could give it a huge advantage and an ability to crush them. They'd say, you don't get to do that. We are making a conditional threat that if you keep going forward in this because you're on the cusp of building this, then we will disable your data center or the surrounding power infrastructure so that you cannot continue building this.
I think they could make that conditional threat to deter it and we might do the same or the US might do the same to China or other states that would do that. So,
I don't see why China wouldn't do that later on. Right now, they're not thinking as much about superintelligence and advanced AI. So this is more a description of the dynamics later on, when AI is more salient. But it would be surprising to me if China were saying, yes, United States, go ahead, do your Manhattan Project to build superintelligence, come back to us in a few years, and then tell us you can boss us around because now we're in a complete position of weakness and we'll be at your mercy,
and we'll accept whatever you say or tell us to do. I don't see that happening. I think they would just move to preempt or deter that type of development so that they don't get put in that fragile position.
Are you in like the Eliezer Yudkowsky camp of bombing the data centers if we get to superintelligence? Well, so I think I'm advocating or pointing out that it becomes rational for states to deter each other by making conditional threats and by means that are less escalatory, such as cyber sabotage on data centers or surrounding power plants.
I don't think one needs to get kinetic for this and I think that if discussions start earlier, I don't see any reason things need to be escalating in that way or unilaterally actually doing that. We didn't need to get in a nuclear exchange with Russia to sort of express that we have a preference against nuclear war. So I think- Thank goodness.
or making conditional threats through deterrence seems like a much smarter move than, hey, wait a second, what are you doing there? And then bomb them. That seems needless. Yeah, I'm not into that solution either. But what you're talking about is sort of assuming that
There will be a lead that will be protectable for a while. But everything we've seen with AI is that no one protects a lead, right? Well, if there's – so one difference is that when you get to a different paradigm like automated AI R&D, the slope might be extremely high such that if the competitor starts to –
do automated AI R&D a year later, they may never catch up just because you're so far ahead and your gains are compounding on your gains. Sort of like in social media companies, Eric will use this analogy, where if one of them starts blowing up and growing before you started, it's often the case that you won't be able to catch up and they'll have a winner-take-all type of dynamic. So
Right now, the rate of improvement is not...
that high, or there's less of a path to a winner-take-all dynamic currently. But later on, when you have the ability to run 100,000 AI researchers simultaneously, this really accelerates things. Maybe OpenAI has got a few hundred, say 300, AI researchers, so going from 300 AI researchers to orders of magnitude more world-class ones creates quite substantial developments.
This isn't new. It's something that Alan Turing and the founders of computer science pointed out: it's a natural property that when you get AIs at this level of capability, you get this sort of recursive dynamic effect,
where things start accelerating extremely quickly and quite explosively. Okay. We've managed to spend most of our conversation today talking about present risks or risks in the near future. We should focus a little bit more on intelligence explosion and loss of control, and we're going to do that right after the break.
Your ads deserve better. Smarter targeting, premium cross-channel placements, and simplified measurement all in one platform. StackAdapt is the leading ad buying platform for end-to-end performance. StackAdapt is ranked the number one DSP on G2. It's the ad platform you need for campaign success. See why the best marketers are switching to StackAdapt at go.stackadapt.com backslash LinkedIn.
I'm Kwame Christian, CEO of the American Negotiation Institute. And I have a quick question for you. When was the last time you had a difficult conversation? These conversations happen all the time. And that's exactly why you should listen to Negotiate Anything, the number one negotiation podcast in the world. We produce episodes every single day to help you lead, persuade and resolve conflicts both at work and at home. So level up your negotiation skills by making Negotiate Anything part of your daily routine.
We're back here on Big Technology Podcast with Dan Hendrycks. He is the director and co-founder of the Center for AI Safety. Dan, it's great speaking with you about this stuff. Let's talk a little bit: you've been sort of talking about it in the first half, but I want to zero in here on this idea of an intelligence explosion, or what you talk about as basically having AI autonomously improve itself. Just talk through a little bit about how that might happen and whether you see that as something that
is actually probable in our future. Yeah. I mean, the basic idea is: just imagine automating one AI researcher, one world-class one. Then there's a fun property with computers, which is that there's copy and paste. So you can then have a whole fleet of these. With humans, if you just have one of them,
maybe they'll be able to train up somebody else who has a similar level of ability. So this adds a very interesting dynamic to the mix. And then you can get so many of them proceeding forward at once. And AIs also operate quite quickly. They can code a lot faster than people. So maybe we've got 100,000 of these things operating at 100x the speed of a human.
How fast will that go? Maybe conservatively, let's say it's just overall 10x-ing research. But 10x-ing research would mean, say, like a decade's worth of developments in a year. So that telescoping of all these developments makes things pretty wild and means that one player could possibly –
get AIs that go from, like, very good, you know, world-class, to being vastly better than everybody at everything: a superintelligence, something that towers far beyond any living person or collective of people. So.
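Here is a back-of-the-envelope version of the "telescoping" arithmetic he describes: if a fleet of automated researchers multiplies effective research speed, calendar time compresses proportionally. The speedup factors below are illustrative assumptions, not estimates from the paper.

# Toy calculation: how long a decade's worth of ordinary research progress
# takes at different assumed research-speed multipliers.

def calendar_years_needed(progress_years: float, research_speedup: float) -> float:
    """Calendar time required to accumulate `progress_years` of ordinary progress."""
    return progress_years / research_speedup

for speedup in (1, 10, 100):
    print(f"{speedup:>3}x research speed: a decade of progress takes "
          f"{calendar_years_needed(10, speedup):.1f} calendar years")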
If we get an AI like that, this could be destabilizing because it could be used to develop a super weapon potentially. Maybe it could find some breakthrough for anti-ballistic missile systems which would make nuclear deterrence no longer work or other types of ways of weaponizing it. So that –
That's why it's destabilizing. And so states might then say: don't run this many AI researchers simultaneously in these data centers working to build a next generation, a superintelligence, because if you do so, that will make our survival be threatened. Them deterring that
would help them secure themselves. They can make those threats very credible currently, and I think we'll continue to be able to have these threats be credible going forward. So this is why I think it might take a while for superintelligence to be developed: there will be deterrence around it later on. And then maybe in the farther future there could be something multilateral, but that's speaking quite far out, in very different economic conditions. In the meantime, with the AIs that
we'd have in the future, those could still automate various things and increase prosperity and all of that. So we'd still have explosive economic growth if you had something that was just at the average human level ability running for very cheap. So I think that those are some of the later stage strategic dynamics. And I don't think we can get away with
I don't think any state could get away with trying to build a superintelligence: go build a big data center out in the middle of the desert, a trillion-dollar cluster, bring all the researchers there, and not expect the other states to go, what do you think you're doing here? You were at the White House yesterday. Well, that was largely just speaking about some of these strategic implications. Are they receptive? Yeah, I mean, it's a...
This isn't a... there's always interest in thinking about what some of the later-term dynamics are, what things should happen now, and whatnot. But this is, yeah, I think when people think White House, it sounds... Well, it's where the president lives. So there's the, well, yeah. So there's the Eisenhower building, which is
part of the White House, kind of not, but that's where everybody works and whatnot. And I think some of the things we were speaking about here, like virology advancements, things like that, there's just a lot of things to speak about and think what things make sense or what things to keep in mind going forward. Yeah, I guess I'd rather an executive branch paying attention to this stuff than not. Yeah, yeah, that's right, yeah. Yeah, and what are the sort of ways that help, you know,
maintain competitiveness. Because, you know how people normally think about this, they'll think it's all-or-nothing, a good-or-bad thing. Instead we're saying, no, it's dual use, so that means there are some particular applications that are concerning and there are other applications that are good, and you want to stem the particularly harmful applications, and ask what are ways of doing that while capturing the upside. Right, okay. So the intelligence explosion part of this conversation
naturally brings up the loss-of-control part, where, to me, when people think about AI harm, they are always worried that AI is going to escape the simulation or whatever it is and act on its own and try to basically ensure that it preserves itself.
We've seen it recently. I think I brought this up at the beginning of the show: Anthropic has done some experiments where the AI has run code to try to copy itself over onto a server if it thinks that its values are at risk of being changed.
Is this... so it's fun to think about, but it's also probably just probability, right? If you run it enough times, because it's a probabilistic engine. Yeah, well, that's concerning though. If it's like, oh, only one in a thousand of them intend to do this, well, if you're running a million of them, then you're basically certain to get many of them trying to self-exfiltrate. And so are you worried that this self-exfiltration is going to be a thing? Yeah.
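The "rare behavior at scale" point can be made concrete with a one-line probability calculation. The rates below are the hypothetical ones mentioned in the conversation, not measured values.

# Probability of at least one self-exfiltration attempt across many
# independent runs, given a small per-run attempt rate.

def p_at_least_one(per_run_rate: float, num_runs: int) -> float:
    return 1.0 - (1.0 - per_run_rate) ** num_runs

print(f"{p_at_least_one(1e-3, 1_000_000):.10f}")   # ~1.0: essentially certain
print(f"{p_at_least_one(1e-3, 1_000):.3f}")        # ~0.632 even at a thousand runs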
I think with a recursive automated AI R&D thing, there's really substantial probability of a loss of control in that situation. So you're worried about this? So there's that, but I would distinguish between that and the sort of
things that are not superintelligences or things that are not coming from that sort of really rapid loop, like the currently existing systems. I think that the currently existing systems are relatively controllable. Or if there is some very concerning failure mode, we have been able to find ways to make them more controllable. For instance, for bioweapons refusal.
We used to not be able to make robust safeguards for them two years ago. But we've done research with methods such as circuit breakers and things like that, and those seem to improve the situation quite a bit and make it actually prohibitively difficult to do that jailbreaking. And so maybe we'll find something similar with self-exfiltration. So, people generally want to claim that current AIs are not controllable, and I think that they're not highly reliably controllable; they're reasonably controllable.
Maybe we could get some – or it seems plausible that we'll get to have increasing levels of reliability. And so I'm sort of reserving judgment. It will depend more on the empirical phenomena. So I think everybody should research this more and –
and we'll sort of see what the risks actually are. But there are some that seem less empirically tractable, things that can't be empirically solved, like this loop thing. How are you going to, you can't run this experiment a hundred times and make it go well. You're making a huge attempt at building a superintelligence, and everything has destabilizing consequences. This isn't something that's totally unprecedented. And for that, you have more of a one-chance-to-get-it-right type of thing. But with the current systems, we can continually adjust them and retrain them and come up with better methods and iterate. So it is concerning. It would not surprise me
if this would really start to make AI development itself extremely hazardous, not just the deployment, but inside the lab, like you need to be worried about the AI trying to break out sometimes. That's totally in the realm of possibility. But yeah, I could see it going either way. Yeah, I mean, this personally freaks me out, because if you see the AI trying to deceive evaluators, for instance, or you see the AI trying to break out,
You really can't trust anything it's telling you. And we had Demis Hassabis on the show a little while ago, and he's basically like, listen, if you see deceptive behavior from AI, if you see alignment faking, you really can't trust anything in the safety training because it's lying to you.
But there is truth to that. Are you seeing deceptiveness at Grok, by the way? Oh, yeah, yeah. So we have a paper out from last week where we were just measuring the extent to which they're deceptive. And in the scenarios we have, all the models were under slight pressure to lie, not being told to lie, but just some slight pressure,
then some of them will lie like 20% of the time, some of them like 60% of the time. So they don't really have this virtue baked into them, the virtue of honesty. So I think we'll need to do more work, and we'll need to do it
quickly. So I'm sort of speaking in a more nonchalant way about this, but I can't, you know, get worked up about every single risk, because then you'd just be at 11 all the time. So there are some that I'm putting in different tiers than other risks, and this is a more speculative one. We've seen these sometimes turn out to be surprisingly handleable.
But, yeah, it could end up making things really, really bad. We'll see. We'll do things about it to make that not be the case. Okay. Thank you. Two more topics for you, then we'll get out of here. The Center for AI Safety, who's funding it?
Well, there's not one funder, it's largely just various philanthropists. The main funder would be Jaan Tallinn, who's a Skype co-founder, and there's a variety of other philanthropies or philanthropists.
So, for instance, Elon, I've never asked him to fund the center. That isn't to say I don't get any money from Elon: my appointment at xAI pays a dollar a year, and at Scale AI I've increased my salary exponentially to where I get $12 a year, a dollar per month from Scale. But I'll try to avoid, you know,
having complicated financial relations with them, just so that I don't feel like I'm speaking on behalf of any of them in particular. You're basically doing the work for them for free. Well, but it's useful, right? It's useful to do. And I mean, I think the main objective is just to try to generate some value here, as best as one can, by reducing these sorts of risks.
Yeah, I think it's a good arrangement because it enables me to, you know, have a choose-your-own-adventure type of thing. Right now I think the politics, or geopolitics, is more relevant, so I can go off and learn about that for some months and then work on a paper there, compared to if it's like, no, you've got to be coding 80 hours a week, that's your job. That would be quite restrictive, and I couldn't be speaking with you. So I'm glad you're here. So thank you, Alex. So let's talk a little bit about this funding, because
I think that after Sam Altman was fired and then rehired at OpenAI, there was a sort of skepticism around effective altruism's impact on the AI field. Even Jaan Tallinn, and I'm reading from his statement here: the OpenAI governance crisis highlights the fragility of voluntary EA-motivated governance schemes, so the world should not rely on such governance working as intended.
Now, Jaan is, of course, associated with EA. EA is basically leading the conversation around AI safety. Is that good? So in terms of Jaan, I think he's funded organizations that are EA-affiliated. I don't know if he'd call himself that, but whatever, people can ascribe labels how they'd like. Yeah.
I mean, I've tweeted that EA is not equal to AI safety. I think the EA community generally is insular on these topics. So I lived in Berkeley for a long time when I was doing my PhD, and there's sort of a school, a sort of AI risk school, that had very particular views about what things are important. So malicious use, for instance, when I was talking about malicious use at the beginning of this, they're historically really against that. They're all about the long term: it'll only be loss of control, don't talk about malicious use, that's a distraction. And so that was annoying, because I'd always been working on robustness as a PhD student, where the main thing was malicious use.
So, yeah, I ended up leaving Berkeley before graduating just because of the relatively suffocating atmosphere and the central focus on whatever the new fad was that you'd have to get interested in. It's ELK, eliciting latent knowledge, that's the important thing you have to focus on, or you have to focus on inner optimizers. There's lots of these speculative,
empirically fragile things. So for instance, this alignment faking stuff that you're seeing: there's some concern there, but I'm not totally sold that this is a top-tier type of priority. But in these communities, this is all that matters currently, roughly speaking. That, and voluntary commitments from AI companies. I think voluntary commitments from AI companies are also a distraction,
because you should expect most of the companies, by default, to just break those sorts of commitments if they end up running up against economic competitiveness.
So I think it's a distraction, relatively. And I think there are many people who think that EA broadly, their influence on this sort of thing, has not been overall positive. I think at least for me, and for other researchers in this space who have been interested in AI risks, the amount of pressure to adopt particular positions on this has been extraordinarily high and, I think, quite destructive. So I'm very pleased that in the past year or so there's been a lot more diversity of opinion, which has been quite important. And I think this is just because the broader world is getting more interested in AI. So
a lot of this fixation on, this is the one particular risk, this is the most important risk and everything else is a distraction, that just doesn't work when you're interfacing with the real world. There are a lot of complications, and AI is so multifaceted, so your risk management approach can't just be focusing on one of them. Right. So you're not an effective altruist. I don't think of myself as that. I don't particularly
get along with this school of thought, this sort of Berkeley AI alignment monolith,
And I'm pleased that people can operate more independently in this space now, which I don't think was the case for many, many years, including basically the entire time I was doing my PhD. And there are many people like Dylan Hadfield-Menell, a professor at MIT, who was also at Berkeley at the time, very suffocating. Rohin Shah, a researcher at DeepMind, very suffocating. They all feel this way, yeah. Okay. Let's bring it home. We've been talking for more than an hour about AI safety as if it's controllable. But...
Open source is really putting up a pretty valiant effort in this field, keeping pace with the proprietary labs. And of course, open source is not controllable. What do you think about that? I mean, we just saw DeepSeek, not to go back to it all the time, but it effectively equaled the cutting edge at the proprietary labs and put the weights on its website. So how can we possibly have...
a relationship of safety with AI if open source is out there exposing everything that's been done? So I haven't been endorsing open source historically, but I've thought that releasing the weights of models didn't seem robustly good or bad. So I sort of was like, it's fine, it seems to have complicated effects.
There's an advantage to it, which is that it helped with diffusion of the technology, so that more people would have access to it and get a sense of AI, and that increased literacy on the topic and public awareness and got the world more prepared for more advanced versions of AI. So that's been my historical position, but it should always proceed by a cost-benefit analysis. So if, for instance, models have these cyber capabilities later on, yeah, I think that would be a potential place to be drawing the line on open-weight releases, personally.
In particular, the ones that could cause damage to critical infrastructure. You could still capture the benefits by having the models be available through APIs, and if users are software developers, they get access to these more cyber-offensive capabilities, but if they're a random, faceless user, they don't.
And likewise for virology. Once the capabilities are so high that there's consensus about a model being expert-level in virology, I think that would be a very natural place to have an international norm. Not a treaty, because those take forever to
write and ratify, but a norm against open weights if they're expert-level virologists, for the same reasons that we had the Biological Weapons Convention. Russia, or the Soviet Union, and the U.S. got together for the Biological Weapons Convention; the U.S. and China did as well. We also coordinated on chemical weapons with the Chemical Weapons Convention, and on nuclear with the Nuclear Nonproliferation Treaty. States find it in their interest to work together to make sure that rogue actors do not have extremely hazardous, potentially catastrophic capabilities like chem, bio, and nuclear inputs. So I think something similar might be reasonable for AI when models get to that capability threshold. Dan, I am...
At once, kind of reassured that people are thinking about this stuff, but also more freaked out than I was when we sat down. But I do appreciate you coming in and giving us the full rundown of what to be concerned about and what maybe not to be as concerned about as we think about where AI is moving next. So thank you so much for coming on the show. Yep, yep. Thank you for having me. This has been fun.
Super fun. If people want to learn more about your work or get in touch, how do they do that? I guess this paper or strategy you've been speaking about is at nationalsecurity.ai. And I'm also on Twitter, or X, or whatever it's called. You should know. X.com. x.com/DanHendrycks would be another way of following the goings-on as the situation evolves. Yeah.
We'll keep trying to put out work and seeing what's going on with these risks. And if we come up with technical interventions to make them smaller, then we'll put that out too. So, yeah, that's where you can find me. Well, Godspeed, Dan, and we'll have to have you back. Thanks again. All right, everybody, thank you for listening, and we'll see you next time on Big Technology Podcast.