We're at the very, very early stage of the intelligence Big Bang. Being a multi-planet species greatly increases the probable lifespan of civilization or consciousness and intelligence, both biological and digital. I think we're quite close to digital superintelligence. If it doesn't happen this year, next year for sure. Please give it up for Elon Musk. Elon, welcome to AI Startup School. We're just really, really blessed to have your presence here today.
Thanks for having me. So, from SpaceX, Tesla, Neuralink, xAI, and more, was there ever a moment in your life before all this where you felt, I have to build something great? And what flipped that switch for you? Well, I didn't originally think I would build something great. I wanted to try to build something useful.
But I didn't think I would build anything particularly great. You've said probabilistically, it seemed unlikely. But I wanted to at least try. So you're talking to a room full of people who are all technical engineers, often some of the most eminent AI researchers coming up in the game. Okay. I like the term engineer better than researcher. I mean, I suppose if there's some fundamental...
algorithmic breakthrough, it's research; otherwise, it's engineering.
Maybe let's start way back. I mean, when you were, this is a room full of 18 to 25 year olds. It skews younger because the founder set is younger and younger. Can you put yourself back into their shoes when you were 18, 19, learning to code, even coming up with the first idea for Zip2? What was that like for you? Yeah, back in 95, I...
I was faced with a choice: either do grad studies, a PhD at Stanford in material science, actually working on ultracapacitors for potential use in electric vehicles, essentially trying to solve the range problem for electric vehicles, or try to do something in this thing that most people had never heard of called the internet.
I talked to my professor, who was Bill Nix in the material science department, and said, can I defer for a quarter? Because this will probably fail, and then I'll need to come back to college. And he said, this is probably the last conversation we'll have. And he was right. But I thought things would most likely fail, not that they would most likely succeed. And then, in '95, I wrote
Basically, I think the first or close to the first maps, directions, internet, white pages and yellow pages on the internet. I just wrote that personally. I didn't even use a web server. I just read the port directly because I couldn't afford a T1.
The original office was on Sherman Avenue in Palo Alto. There was an ISP on the floor below, so I drilled a hole through the floor and just ran a LAN cable directly to the ISP. And, yeah, my brother joined me, and another co-founder, Greg Kouri, who passed away.
At the time, we couldn't even afford a place to stay. The office was 500 bucks a month, so we just slept in the office and then showered at the YMCA on Page Mill and El Camino. I guess we ended up doing a little bit of a useful company, Zip2 in the beginning. We did build a lot of really good software technology, but
We were somewhat captured by the legacy media companies. Knight Ridder, The New York Times, Hearst, and whatnot were investors and
customers and also on the board. So they kept wanting to use our software in ways that made no sense. So I wanted to go direct to consumers. Anyway, it's a long story, dwelling too much on Zip2, but I really just wanted to do something useful on the internet. Because I had two choices, like do a PhD and watch people build the internet or
help build the internet in some small way. And I was like, well, I guess I can always try and fail and then go back to grad studies. Anyway, that ended up being reasonably successful; it sold for about $300 million, which was a lot at the time. These days, the minimum, I think, for an AI startup is like a billion dollars. There are so many freaking unicorns, it's like a herd of unicorns at this point. You know, a unicorn being a billion-dollar valuation.
There's been inflation since, so quite a bit more money actually. Yeah, I mean like in 1995 you could probably buy a burger for a nickel. Well, not quite, but I mean, yeah, there has been a lot of inflation. But I mean, the hype level on AI is pretty intense, as you've seen. You know, you see companies that are, I don't know, less than a year old getting sometimes billion dollar or multi-billion dollar valuations.
which I guess could pan out and probably will pan out in some cases. But it is eye-watering to see some of these valuations. Yeah, what do you think? I mean, we'll...
I'm pretty bullish. I'm pretty bullish, honestly. So I think the people in this room are going to create a lot of the value that, you know, a billion people in the world should be using this stuff. And we're not even scratching the surface of it. I love the Internet story in that even back then, you know, you are a lot like the people in this room back then in that, you know,
the CEOs of all the legacy media companies looked to you as the person who understood the internet. And a lot of the world, the corporate world, the world at large that does not understand what's happening with AI, they're going to look to the people in this room for exactly that. What are some of the tangible lessons? It sounds like one of them is don't give up board control, or be careful and have a really good lawyer.

I guess for my first startup, the big mistake, really, was having too much shareholder and board control from legacy media companies, who then necessarily see things through the lens of legacy media, and they'll kind of make you do things that seem sensible to them but really don't make sense with the new technology. I should point out that I didn't actually intend to start a company at first. I tried to get a job at Netscape.
I sent my resume in to Netscape. Marc Andreessen knows about this, but I don't think he ever saw my resume. And nobody responded. So then I tried hanging out in the lobby of Netscape to see if I could bump into someone, but I was too shy to talk to anyone. So I'm like, man, this is ridiculous. I'll just write stuff myself and see how it goes. So it wasn't actually from the standpoint of, I want to start a company. I just wanted to be part of building the internet
in some way. And since I couldn't get a job at an internet company, I had to start an internet company. Anyway, yeah, I mean, AI will so profoundly change the future, it's difficult to fathom how much. But, you know, assuming things don't go awry and AI doesn't kill us all, then you'll ultimately see
an economy that is not just ten times bigger than the current economy. Ultimately, if we, or whatever our future machine descendants, or mostly-machine descendants, become, say, a Kardashev Type II civilization or beyond, we're talking about an economy that is thousands of times, maybe millions of times, bigger than the economy today.
Yeah, I mean, I did sort of feel a bit like, you know, when I was in D.C., taking a lot of flak for getting rid of waste and fraud, which was an interesting side quest. But... Got to get back to the main quest. Yeah, I got to get back to the main quest here. So, back to the main quest. But I did feel, you know, a little bit like there's...
you know, fixing the government is kind of like, say the beach is dirty, and there's some needles and feces and trash, and you want to clean up the beach. But then there's also this thousand-foot wall of water, a tsunami of AI. And how much does cleaning the beach really matter if you've got a thousand-foot tsunami about to hit? Not that much. Oh, we're glad you're back on the main quest. It's very important. Back to the main quest.
building technology, which is what I like doing. It's just so much noise. Like, the signal-to-noise ratio in politics is terrible. So... I mean, I live in San Francisco, so you don't need to tell me twice. Yeah. DC is, like, you know, I guess it's all politics in DC, but...
If you're trying to build a rocket or cars or you're trying to have software that compiles and runs reliably, then you have to be maximally truth-seeking or your software or your hardware won't work.
Like, you can't fool math. Like, math and physics are rigorous judges. So I'm used to being in a maximally truth-seeking environment, and that's definitely not politics. So anyway, I'm glad to be back in technology. I guess I'm kind of curious, going back to the Zip2 moment, you had hundreds of millions of dollars, or you had an exit worth hundreds of millions of dollars. I got $20 million. Right.
Okay, so you solved the money problem, at least. And you basically took it and kept rolling with X.com, which merged with Confinity and became PayPal. Yes, I kept the chips on the table. Not everyone does that. A lot of the people in this room will have to make that decision, actually. What drove you to jump back into the ring? Well, I think I felt with Zip2, we built incredible technology, but it never really got used.
You know, I think, at least from my perspective, we had better technology than, say, Yahoo or anyone else, but it was constrained by our customers. So I wanted to do something where, okay, we wouldn't be constrained by our customers, go direct to consumer. And that's what ended up being X.com and PayPal, essentially X.com merging with Confinity, which
we together created PayPal. And then the PayPal diaspora might have created more companies than probably anything else in the 21st century. So many talented people were at the combination of Confinity and X.com. I just felt like we kind of got our wings clipped somewhat with Zip2, and it's like, okay, what if our wings aren't clipped and we go direct to consumer? And that's what PayPal ended up being. But yeah, with
I got that $20 million check for my share of Zip2. At the time, I was living in a house with four housemates and had about ten grand in the bank. And then this check arrives, of all things, in the mail. And my bank balance went from $10,000 to $20,010,000.
You know, like, well, okay. So I'd pay taxes on that and all, but then I ended up putting almost all of that into x.com. And as you said, like, just kind of keeping almost all the chips on the table. And yeah, and then after PayPal, I was like, well, I was kind of curious as to why we had not sent anyone to Mars.
And I went on the NASA website to find out when we were sending people to Mars, and there was no date. I thought maybe it was just hard to find on the website. But in fact, there was no real plan to send people to Mars.
So then, you know, this is such a long story, so I don't want to take up too much time here. But I think we're all listening with rapt attention. So I was actually on the Long Island Expressway with my friend Adeo Ressi. We were housemates in college. And Adeo was asking me what I was going to do after PayPal. And I was like, I don't know, I guess maybe I'd like to do something philanthropic in space, because I didn't think I could actually do anything commercial in space. That seemed like the purview of nations.
But I was kind of curious as to when we were going to send people to Mars, and I was like, oh, it'll be on the NASA website. So I started digging in, and there was nothing on the NASA website. And I'm definitely summarizing a lot here.
My first idea was to do a philanthropic mission to Mars called Life to Mars, where we'd send a small greenhouse with seeds in dehydrated nutrient gel, land that on Mars, and hydrate the gel. And then you'd have this great sort of money shot of green plants on a red background. For the longest time, by the way, I didn't realize money shot, I think, is a porn reference. But...
But anyway, the point is that that would be the great shot of green plants on a red background and to try to inspire NASA and the public to send astronauts to Mars. As I learned more, I came to realize, and along the way, by the way, I went to Russia in like 2001 and 2002 to buy ICBMs, which is like, that's an adventure. You go and meet with Russian high command and say, I'd like to buy some ICBMs. This was to get to space.
Yeah, as a rocket. Not to nuke anyone. As a result of arms reduction talks, they had to actually destroy a bunch of their big nuclear missiles. So I was like, well, how about if we take two of those, you know, minus the nuke, and add an additional upper stage for Mars? But it was kind of trippy, you know, being in Moscow in 2001, 2002, negotiating with the Russian military to buy ICBMs. That's crazy.
And they kept raising the price on me, which is literally the opposite of how a negotiation should go. So I was like, man, these things are getting really expensive. And then I came to realize that actually the problem was not that there was insufficient will to go to Mars, but that there was no way to do so without breaking the budget, even the NASA budget.
So that's where I decided to start SpaceX to advance rocket technology to the point where we could send people to Mars. And that was in 2002. So that wasn't, you know, you didn't start out...
wanting to start a business. You wanted to start something that was interesting to you, that you thought humanity needed. And then, like a cat pulling on a string, the ball just sort of unravels, and it turns out this could be a very profitable business. I mean, it is now, but
there had been no prior example of a rocket startup really succeeding. There had been various attempts to do commercial rocket companies, and they'd all failed. So starting SpaceX
was really from the standpoint of like, I think there's like a less than 10% chance of being successful, maybe 1%, I don't know. But if a startup doesn't do something to advance rocket technology, it's definitely not coming from the big defense contractors because they just impedance match to the government and the government just wants to do very conventional things. So it's either coming from a startup or it's not happening at all. So
So a small chance of success is better than no chance of success. So SpaceX started in mid-2002, expecting to fail, like I said, probably a 90% chance of failing. And even when recruiting people, I didn't try to, you know, make out that it would succeed. I said, we're probably going to die.
But there's some chance we might not die, and this is the only way to get people to Mars and advance the state of the art. And I didn't want to be chief engineer of the rocket, but I couldn't hire anyone who was good. None of the good chief engineers would join, because it's like, this is too risky. You're going to die.
So then I ended up being chief engineer of the rocket and you know, the first three flights did fail. So it's a bit of a learning exercise there. And, um,
The fourth one fortunately worked, but if it hadn't, I had no money left and it would have been curtains. So it was a pretty close thing. If the fourth launch of Falcon 1 had not worked, we would have joined the graveyard of prior rocket startups. So my estimate of success was not far off; we made it by the skin of our teeth. And Tesla was happening sort of simultaneously.
2008 was a rough year, because by mid-2008, call it summer 2008, the third launch of SpaceX had failed, a third failure in a row. The Tesla financing round had failed, and so Tesla was going bankrupt fast. It was just, like, man, this is grim. This is going to be a cautionary tale, an exercise in hubris.
Probably throughout that period, a lot of people were saying, you know, Elon is a software guy. Why is he working on hardware? Why would he choose to work on this? Right. The press of that time is still online; you can just search it. And they kept calling me internet guy. So, like, internet guy, a.k.a. fool, is attempting to build a rocket company.
So, you know, we got ridiculed quite a lot. And it does sound pretty absurd, like internet guy starts rocket company doesn't sound like a recipe for success, frankly. So I didn't hold it against them. I was like, yeah, you know, admittedly, it does sound improbable. And I agree that it's improbable. But fortunately, the fourth launch worked and
And NASA awarded us a contract to resupply the space station. I think that was maybe, I don't know, December 22nd; it was right before Christmas. Because even the fourth launch working wasn't enough to succeed; we also needed a big contract to keep us alive. So I got that call
from like the NASA team. And I literally, they said, we're awarding you one of the contracts to resupply the space station. I like literally blurted out, I love you guys, which is not normally what they hear. Cause it's usually pretty sober, but I was like, man, this is a company saver. And then we closed the Tesla financing round on the last hour of the last day that it was possible, which was 6 p.m. December 24th.
2008. We would have bounced payroll two days after Christmas if that round hadn't closed. So that was a nerve-wracking end of 2008, that's for sure. I guess from your PayPal and Zip2 experience jumping into these hardcore hardware startups, it feels like one of the through lines was being able to find and eventually attract the smartest possible people in those particular fields.
You know, most of the people in this room, I don't think, have even managed a single person yet. They're just starting their careers. What would you tell the Elon who had never had to do that yet? I generally think: try to be as useful as possible. It may sound trite, but it's so hard to be useful, especially to be useful to a lot of people.
Say the area under the curve of total utility is how useful you have been to your fellow human beings, times how many people. It's almost like the physics definition of true work. It's incredibly difficult to do that. I think if you aspire to do true work, your probability of success is much higher. Like, don't aspire to glory; aspire to work.
How can you tell that it's true work? Is it external? Is it what happens with other people or what the product does for people? What is that for you? When you're looking for people to come work for you, what's the salient thing that you look for? That's a good question. In terms of your end product, you just have to say, if this thing is successful, how useful will it be to how many people?
And that's what I mean. And then, whether you're a CEO or in any other role in a startup, you do whatever it takes to succeed. And just always be smashing your ego. Internalize responsibility. A major failure mode is when the ego-to-ability ratio is much greater than one.
If your ego-to-ability ratio gets too high, you're basically going to break the feedback loop to reality. In AI terms, you'll break your RL loop. You want a strong RL loop, which means internalizing responsibility and minimizing ego, and you do whatever the task is, no matter whether it's grand or humble.
I mean, that's kind of why I prefer the term engineering as opposed to research. And I actually don't want to call xAI a lab; I just want it to be a company. Whatever the simplest, most straightforward, ideally lowest-ego terms are, those are generally a good way to go. You want to close the loop on reality hard.
That's a super big deal. - I think everyone in this room really looks up to you as sort of a paragon of first-principles thinking in everything you've done.
How do you actually determine your reality? Because that seems like a pretty big part of it. Like other people, people who have never made anything, non-engineers, sometimes journalists at times who have never done anything, like they will criticize you. But then clearly you have another set of people who are builders, who have very high, you know, sort of area under the curve, who are in your circle. Like, you know,
how should people approach that? What has worked for you, and what would you pass on, say, to X, to your children? What do you tell them when you say, you need to make your way in this world; here's how to construct a reality that is predictive, from first principles? Well, the tools of physics are incredibly helpful for understanding and making progress in any field.
First principles just means, obviously, breaking things down to the fundamental axiomatic elements that are most likely to be true, and then reasoning up from there as cautiously as possible, as opposed to reasoning by analogy or metaphor. And then simple things like thinking in the limit: if you extrapolate, minimizing this thing or maximizing that thing, thinking in the limit is very, very helpful. I'd use all the tools of physics. They apply to any field.
This is like a superpower, actually.
So you can take, say, for example, like rockets, you could say, well, how much should a rocket cost? The typical approach that people would take to how much a rocket should cost is they would look historically at what the cost of rockets are and assume that any new rocket must be somewhat similar to the prior cost of rockets. A first principles approach would be you look at the materials that the rocket is comprised of. So if that's aluminum, copper, carbon fiber,
steel, whatever the case may be, and say, how much does that rocket weigh and what are the constituent elements and how much do they weigh? What is the material price per kilogram of those constituent elements? And that sets the actual floor on what a rocket can cost. It can asymptotically approach the cost of the raw materials.
And then you realize, oh, actually, a rocket, the raw materials of a rocket are only maybe 1 or 2% of the historical cost of a rocket. So the manufacturing must necessarily be very inefficient if the raw material cost is only 1 or 2%. That would be a first principles analysis of the potential for cost optimization of a rocket. And that's before you get to reusability.
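As a worked sketch of that analysis, the arithmetic looks something like this. Every number below (the mass breakdown, the commodity prices, the historical vehicle price) is an illustrative assumption made up for the exercise, not a figure from the talk or from any real rocket:

```python
# First-principles cost floor: raw-material mass times commodity price.
# All masses and prices are illustrative placeholders, not real vehicle data.

materials_kg = {            # assumed dry-mass breakdown of a medium launcher
    "aluminum": 25_000,
    "stainless_steel": 10_000,
    "carbon_fiber": 10_000,
    "copper": 2_000,
}
price_per_kg_usd = {        # rough commodity prices, USD per kg (assumed)
    "aluminum": 3.0,
    "stainless_steel": 4.0,
    "carbon_fiber": 30.0,
    "copper": 9.0,
}

# The floor that a rocket's cost can asymptotically approach:
raw_material_floor = sum(
    mass * price_per_kg_usd[name] for name, mass in materials_kg.items()
)
historical_price_usd = 40_000_000   # assumed price of a comparable legacy rocket

fraction = raw_material_floor / historical_price_usd
print(f"raw-material floor: ${raw_material_floor:,.0f}")   # $433,000
print(f"share of historical price: {fraction:.1%}")        # 1.1%
```

With these placeholder numbers, the raw materials come out to roughly 1% of the historical price, which is the shape of the conclusion in the talk: the gap between the floor and the actual price is the manufacturing inefficiency.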
To give an AI example: last year at xAI, when we were trying to build a training supercluster, we went to the various suppliers, this was the beginning of last year, and said we needed 100,000 H100s to be able to train coherently. And their estimates for how long it would take to complete that were 18 to 24 months.
And it's like, well, we need to get that done in six months, or we won't be competitive. So then, if you break that down, what are the things you need? Well, you need a building, you need power, you need cooling.
We didn't have enough time to build a building from scratch, so we had to find an existing building. We found a factory in Memphis that was no longer in use, which used to build Electrolux products. But the input power was 15 megawatts and we needed 115 megawatts. So we rented generators and put them on one side of the building. And then we had to have cooling, so we rented about a quarter of the mobile cooling capacity of the US and put the chillers on the other side of the building.
That didn't fully solve the problem, because the power variations during training are very big. Power can drop by 50% in 100 milliseconds, which the generators can't keep up with. So then we added Tesla Megapacks and modified the software in the Megapacks to be able to smooth out the power variation during the training run.
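The generator-plus-battery arrangement can be sketched as a toy simulation. The ramp rate, timestep, and load profile below are illustrative assumptions, not actual xAI or Megapack figures; the point is just that a ramp-limited generator leaves a gap during a fast load swing, and a fast battery has to cover it:

```python
# Toy model of smoothing a sudden training-load drop with a battery.
# All numbers are illustrative assumptions, not actual xAI or Megapack figures.

DT_S = 0.01                # simulation timestep: 10 ms
GEN_RAMP_MW_PER_S = 5.0    # assumed generator ramp-rate limit

def simulate(load_mw):
    """Generator tracks the load but is ramp-limited; the battery covers
    the gap (positive = discharging, negative = charging)."""
    gen = load_mw[0]
    battery = []
    for load in load_mw:
        delta = load - gen
        # clamp the generator's change per step to its ramp limit
        max_step = GEN_RAMP_MW_PER_S * DT_S
        gen += max(-max_step, min(max_step, delta))
        battery.append(load - gen)
    return battery

# Load falls from 100 MW to 50 MW over 100 ms, then holds for 12 s.
profile = [100.0] * 10 + [100.0 - 5.0 * i for i in range(1, 11)] + [50.0] * 1200
battery = simulate(profile)

print(f"peak battery absorption: {min(battery):.1f} MW")  # about -49.5 MW
print(f"residual at the end: {battery[-1]:.2f} MW")
```

In this sketch, the battery absorbs almost the full 50 MW step the instant the load drops, then hands the slack back to the generator over several seconds as it ramps down.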
And then there were a bunch of networking challenges, because getting 100,000 GPUs to train coherently makes the networking cabling very, very challenging. It sounds like with almost any of those things you mentioned, I could imagine someone telling you very directly, no, you can't have that. You can't have that power. You can't have this.
And it sounds like one of the salient pieces of first-principles thinking is actually, let's ask why. Let's figure that out, and let's challenge the person across the table. And if I don't get an answer that I feel good about, I'm not going to let that no stand. I mean, that feels like something that...
Everyone, if someone were to try to do what you're doing in hardware, hardware seems to uniquely need this. In software, we have lots of fluff; it's like, we can add more CPUs for that, it'll be fine. But in hardware, it's just not going to work. I think these general principles of first-principles thinking apply to software and hardware, to anything really. I'm just using a hardware example of how we were told something was impossible, but once we broke it down into the constituent elements, we need a building, we need power, we need cooling, we need power smoothing, then we could solve those constituent elements.
And then we just ran the networking operation to do all the cabling, everything, in four shifts, 24/7. And I was sleeping in the data center and also doing cabling myself. And there were a lot of other issues to solve. You know, nobody had done a training run with a hundred thousand
H100s training coherently last year. Maybe it's been done this year, I don't know. And then we ended up doubling that to 200,000. And so now we've got 150,000 H100s, 50K H200s, and 30K GB200s in the Memphis training center. And we're about to bring 110,000 GB200s online at a second data center also in the Memphis area.
Is it your view that pre-training is still working and the scaling laws still hold and whoever wins this race will have basically the biggest, smartest possible model that you could distill? Well, there's other various elements that decide competitiveness for large AI.
For sure, the talent of the people matters. The scale of the hardware matters, and how well you're able to bring that hardware to bear. You can't just order a whole bunch of GPUs and plug them in; you've got to get a lot of GPUs and have them train coherently and stably.
Then it's like, what unique access to data do you have? I guess distribution matters to some degree as well: how do people get exposed to your AI? Those are critical factors if it's going to be a large foundation model that's competitive. As many have said, my friend Ilya Sutskever said, we've kind of run out of human-generated pre-training data. You run out of tokens pretty fast,
certainly of high-quality tokens. Then you essentially need to create synthetic data and be able to accurately judge the synthetic data you're creating, to verify: is this real synthetic data, or is it a hallucination that doesn't actually match reality? So achieving grounding in reality is tricky. But we are at the stage where
there's more effort put into synthetic data. And right now we're training Grok 3.5, which is a heavy focus on reasoning. Going back to your physics point, what I heard for reasoning is that hard science, particularly physics textbooks, are very useful for reasoning. Whereas I think researchers have told me that social science is totally useless for reasoning.
Yes, that's probably true. So, yeah. There's something that's going to be very important in the future is combining deep AI in the data center or supercluster with robotics. So that, you know, things like the Optimus humanoid robot. Incredible.
Yeah, Optimus is awesome. There's going to be so many humanoid robots, and robots of all sizes and shapes, but my prediction is that there will be more humanoid robots by far than all other robots combined, maybe by an order of magnitude, like a big difference. And... Is it true that you're planning a robot army of a sort? Whether we do it or... You know, whether Tesla does it. You know, Tesla works closely with xAI. Um...
I mean, you've seen how many humanoid robot startups there are. I think Jensen Huang was on stage with a massive number of robots from different companies; I think there were like a dozen different humanoid robots. So I guess part of what I've been fighting, and maybe what has slowed me down somewhat, is that I don't want to make Terminator real. So I've been, I guess,
at least until recent years, dragging my feet on AI and humanoid robotics. And then I sort of come to the realization it's happening whether I do it or not. So you got really two choices. You could either be a spectator or a participant. And so I'm like, well, I guess I'd rather be a participant than a spectator. So now it's, you know, pedal to the metal on humanoid robots and digital superintelligence.
So I guess there's a third thing that everyone has heard you talk a lot about that I'm really a big fan of, becoming a multi-planetary species. Where does this fit? This is all not just a 10 or 20 year thing, maybe a hundred year thing. It's many, many generations for humanity kind of thing.
How do you think about it? There's AI, obviously, there's embodied robotics, and then there's being a multi-planetary species. Does everything sort of feed into that last point? Or what are you driven by right now for the next 10, 20, and 100 years? Geez, 100 years? Man, I hope civilization's around in 100 years. If it is around, it's going to look very different from civilization today. I mean, I'd predict that there's going to be
at least five times as many humanoid robots as there are humans, maybe ten times. One way to look at the progress of civilization is percentage completion of the Kardashev scale. If you're at Kardashev scale one, you've harnessed all the energy of a planet. In my opinion, we've only harnessed maybe 1 or 2% of Earth's energy, so we've got a long way to go to be Kardashev scale one.
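Those scale jumps can be sanity-checked with standard astronomical constants. Nothing below comes from the talk itself, and Kardashev I is taken here, as a rough convention, to mean roughly the sunlight Earth intercepts:

```python
import math

# Rough physical constants (SI)
SUN_LUMINOSITY_W = 3.8e26      # total radiant power of the Sun
SOLAR_CONSTANT_W_M2 = 1361.0   # solar flux at Earth's orbital distance
EARTH_RADIUS_M = 6.371e6

# Power Earth intercepts: flux times the planet's cross-sectional disk
earth_intercepted_w = SOLAR_CONSTANT_W_M2 * math.pi * EARTH_RADIUS_M ** 2

# Kardashev II energy budget relative to Kardashev I
ratio = SUN_LUMINOSITY_W / earth_intercepted_w
print(f"Earth intercepts ~{earth_intercepted_w:.2e} W")
print(f"the Sun outputs ~{ratio:.1e}x more")
```

The ratio comes out to a couple of billion, which lines up with the "billion times more energy than Earth" figure for the jump from planet to star.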
Then Kardashev scale two is harnessing all the energy of a sun, which would be, I don't know, a billion times more energy than Earth, maybe closer to a trillion. And Kardashev scale three would be all the energy of a galaxy; we're pretty far from that. So we're at the very, very early stage of the intelligence Big Bang. In terms of being multi-planetary,
I think we'll have enough mass transferred to Mars within roughly 30 years to make Mars self-sustaining, such that Mars can continue to grow and prosper even if the resupply ships from Earth stop coming. And that greatly increases the probable lifespan of civilization, or consciousness, or intelligence, both biological and digital.
So that's why I think it's important to become a multi-planet species. And I'm somewhat troubled by the Fermi paradox: why have we not seen any aliens? It could be because intelligence is incredibly rare, and maybe we're the only ones in this galaxy, in which case the light of consciousness is just a tiny candle in a vast darkness. And we should do everything possible to ensure the tiny candle does not go out.
and being a multi-planet species or making consciousness multi-planetary greatly improves the probable lifespan of civilization. And it's the next step before going to other star systems. Once you at least have two planets, then you've got a forcing function for the improvement of space travel. And that ultimately is what will lead to consciousness expanding to the stars.
It could be that the Fermi paradox dictates that once you get to some level of technology, you destroy yourself. How do we save ourselves? What would you prescribe to, I mean, a room full of engineers? Like, what can we do to prevent that from happening? Yeah, how do we avoid the great filters? One of the great filters would obviously be global thermonuclear war. So we should try to avoid that.
I guess building benign AI, robots that, AI that loves humanity and robots that are helpful. Something that I think is extremely important in building AI is a very rigorous adherence to truth, even if that truth is politically incorrect.
My intuition for what could make AI very dangerous is if you force AI to believe things that are not true. How do you think about, you know, this argument of open for safety versus closed for competitive edge? I mean, I think the great thing is you have a competitive model, and many other people also have competitive models. In that sense, we're sort of off the worst timeline that I'd be worried about, which is, you know, there's a fast takeoff and it's only in one person's hands. That
might, you know, sort of collapse a lot of things. Whereas now we have choice, which is great. How do you think about this? I do think there will be several deep intelligences, maybe at least five, maybe as many as 10. I'm not sure there's going to be hundreds; it's probably closer to, like, 10 or something like that, of which maybe four will be in the U.S. So yeah,
I don't think it's going to be any one AI that has runaway capability. But yeah, several deep intelligences. What will these deep intelligences actually be doing? Will it be scientific research or trying to hack each other? Probably all of the above. I mean, hopefully they will discover new physics and I think they're definitely going to invent new technologies.
I think we're quite close to digital superintelligence. It may happen this year, and if it doesn't happen this year, next year for sure. Digital superintelligence defined as smarter than any human at anything.
Well, so how do we direct that to sort of super abundance? You know, we could have robotic labor. We have cheap energy, intelligence on demand. You know, is that sort of the white pill? Like, where do you sit on the spectrum? And are there tangible things that you would encourage everyone here to be working on to make that white pill actually reality? I think it most likely will be a good outcome.
I guess I'd sort of agree with Geoff Hinton that maybe it's a 10 to 20% chance of annihilation. But look on the bright side: that's 80 to 90% probability of a great outcome. So, yeah, I can't emphasize this enough. A rigorous adherence to truth is the most important thing for AI safety. And obviously empathy for humanity and life as we know it.
We haven't talked about Neuralink at all yet, but I'm curious, you know, you're working on closing the input and output gap between humans and machines. How critical is that to AGI, ASI? And, you know, once that link is made, can we not only read, but also write? The Neuralink is not necessary to solve digital superintelligence. That'll happen before Neuralink is at scale. But
What Neuralink can effectively do is solve the input output bandwidth constraints, especially our output bandwidth is very low. The sustained output of a human over the course of a day is less than one bit per second. So it's 86,400 seconds in a day and it's extremely rare for a human to output more than that number of symbols per day.
certainly for several days in a row.
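The "less than one bit per second" claim is a simple division: a day has 86,400 seconds, and even an unusually prolific human emits fewer symbols than that. A back-of-envelope version, using illustrative figures that are my assumptions rather than numbers from the talk:

```python
# Back-of-envelope check of sustained human output bandwidth.
# The writer figures below are illustrative assumptions.
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400

words_per_day = 10_000           # assumed output of a very prolific writer
chars_per_word = 5               # typical English average
bits_per_char = 1.0              # Shannon's rough entropy estimate for English

bits_per_second = words_per_day * chars_per_word * bits_per_char / SECONDS_PER_DAY
print(f"Seconds per day: {SECONDS_PER_DAY}")
print(f"Sustained output: ~{bits_per_second:.2f} bits/s")
```

Even with generous assumptions, the sustained rate lands below one bit per second, which is the point being made about the output side of the bandwidth constraint.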
With a Neuralink interface, you can massively increase your output bandwidth and your input bandwidth, input being writing to you: you have to do write operations to the brain. We now have five humans who have received the read implant, which is reading signals. And you've got people with ALS who really have no... they're tetraplegics, but they can now communicate at a similar bandwidth to a human with a fully functioning body, and control their computer and phone, which is pretty cool. And then I think in the next six to 12 months, we'll be doing our first implants for vision, where even if somebody's completely blind,
we can write directly to the visual cortex. We've had that working in monkeys. Actually, I think one of our monkeys has now had a visual implant for three years. At first, it'll be fairly low resolution, but long-term you would have very high resolution and be able to see multi-spectral wavelengths. You could see in infrared, ultraviolet, radar,
like a superpower situation. But at some point, the cybernetic implants would not simply be correcting things that went wrong but augmenting human capabilities: dramatically augmenting intelligence and senses and bandwidth. That's going to happen at some point, but digital superintelligence will happen well before that.
At least if we have a neural link, we might be able to appreciate the AI better. I guess one of the limiting reagents to all of your efforts across all of these different domains is access to the smartest possible people. Yes.
But, you know, sort of simultaneous to that, we have, you know, the rocks can talk and reason, and they may be 130 IQ now, and they're probably going to be superintelligent soon. How do you reconcile those two things? Like, what's going to happen in, you know, five, 10 years? And what should the people in this room do to make sure that, you know, they're the ones who are creating instead of maybe below the API line?
Well, they call it the singularity for a reason, because we don't know what's going to happen. In the not that far future, the percentage of intelligence that is human will be quite small. At some point, the collective sum of human intelligence will be less than 1% of all intelligence. And if things get to a Kardashev level two, we're talking about
human intelligence, even assuming a significant increase in human population and intelligence augmentation, like massive intelligence augmentation where everyone has an IQ of a thousand, type of thing. Even in that circumstance, collective human intelligence will probably be one billionth that of digital intelligence. Anyway, we're the biological bootloader for digital superintelligence. I guess just to end off,
It was like, was I a good bootloader? Where do we go? How do we go from here? I mean, all of this is pretty wild sci-fi stuff that also could be built by the people in this room. Do you have a closing thought for the smartest technical people of this generation right now? What should they be doing? What should they be working on? What should they be thinking about tonight as they go to dinner? Well, I...
As I started off with, I think if you're doing something useful, that's great. Just try to be as useful as possible to your fellow human beings, and then you're doing something good. I keep harping on this: focus on super-truthful AI. That's the most important thing for AI safety. And obviously, if anyone's interested in working at xAI, please let us know.
We're aiming to make Grok the maximally truth-seeking AI, and I think that's a very important thing. Hopefully we can understand the nature of the universe. That's really, I guess, what AI can hopefully tell us. Maybe AI can tell us where the aliens are, how the universe really started, how it will end, and what the questions are that we don't know we should ask.
Are we in a simulation? What level of simulation are we in? Well, I think we're going to find out. Am I an NPC? Elon, thank you so much for joining us. Everyone, please give it up for Elon Musk.