
Elon Musk's Neuralink 2025 summer update

2025/6/28

Elon Musk Podcast

People
DJ Seo
Elon Musk
An entrepreneur and innovator guided by long-termism, driving revolutions in space exploration, electric vehicles, and renewable energy.
Harrison
Joey
John
A senior lawyer focused on cross-border capital markets, M&A, and corporate governance.
Julian
Nir
Ruz
Sahej
Will Walden
Topics
Elon Musk: I think the brain is a remarkable organ; we essentially are our brains. We still know very little about the nature of consciousness, but Neuralink's progress will help us learn more about it and address brain injuries and developmental problems. We are creating a generalized input/output technology for the brain, getting information in and out in a way that does not damage it. Our goal is to raise brain-computer interface bandwidth from about one bit per second to megabits or even gigabits per second, enabling conceptual telepathy. This would be a fundamental change to the human condition. The main purpose of this presentation is to attract smart people to join us in solving this problem.

Shownotes Transcript



Hey everybody, welcome back to the Elon Musk Podcast. This is a show where we discuss the critical crossroads that shape SpaceX, Tesla, X, The Boring Company, and Neuralink. I'm your host, Will Walden.

Hello, everybody. My name is Alex Tudley. I'm the second participant in the Neuralink study, and I'm here to count us down to the demo. In five, four, three, two, one. Hi, everyone.

Welcome to the Neuralink presentation. This is an update for the progress of the Neuralink team. It's been an incredible amount of progress. We're going to start off high level, generally describing what Neuralink's doing, and then we're going to have a very deep technical dive so you can actually get an understanding of

what exactly we're doing at a granular level and what we can do to enhance human capabilities and ultimately build a great future for humanity. So that's a neuron firing. It's funny to think that me talking right now is a bunch of neurons firing that then result in speech that you hear and cause neurons to fire in your brain. Part of this presentation is about demystifying

the brain. It is a remarkable organ. I mean, we are the brain, basically. When you say "you," that really is-- you're the brain. Like, you can get a heart transplant, you can get a kidney transplant, but I don't know anyone who's gotten a brain transplant. So you are your brain, and your experiences are these neurons firing with trillions of synapses.

that somehow lead to conscious comprehension of the world. This is something that we have only begun to understand. We're really just barely at the beginning of understanding of what is the nature of consciousness. And I've thought a lot about what is consciousness? Where does consciousness arise? Because if you start at the beginning of the universe, assuming physics is true, the standard model of physics is true, then you have this big bang, you know,

the matter condensing into stars, those stars exploding. A lot of the atoms that are in your body right now were once at the center of stars. Those stars exploded, recondensed. Fast forward 13.8 billion years and here we are. And somewhere along that very long journey, to us at least, consciousness arose. Or the molecules started talking to each other. And

It begs the question of what is consciousness? Is everything conscious? Maybe. It's hard to say where along that line. There's no sort of discrete point where consciousness didn't exist and then suddenly does exist. It seems to be maybe you have a condensation of matter that has a density of... We don't know what... The real answer is we don't know what consciousness is. But with Neuralink...

and the progress that the company's making, we'll begin to understand a lot more about consciousness and what does it mean to be. Along the way, we're going to solve a lot of brain issues where the brains get injured or damaged in some way or didn't develop in quite the right way. There's a lot of brain and spine injuries that we'll solve along the way.

I do want to emphasize that this is all going to happen quite slowly, meaning you'll see it coming. Sometimes people think that suddenly there will be vast numbers of Neuralinks all over the place. This is not going to be sudden. You'll be able to watch it happen over the course of several years. And we go through exhaustive regulatory approvals. So this is not

something that we're just doing by ourselves without government oversight. We work closely with the regulators every step of the way. We're very cautious with the Neuralinks in humans. That's the reason we're not moving faster than we are: we're taking great care with each individual to make sure we never miss.

And so far we haven't, and I hope that continues into the future. Every single one of our implants in humans is working and working quite well. And you'll get to hear from some of the people that have received the implants and hear it in their words. So what we're creating here with a Neuralink device is a generalized input/output

technology for the brain. So it's how do you get information into or out of the brain and do so in a way that does not damage the brain or cause any negative side effects. So it's a very hard problem. And

Generally, the reactions I've seen to this range from "it's impossible" to "it's already been done before." Those people should meet, actually. The reality is that there have been limited brain-to-computer interfaces, on a very basic level, for several decades.

What we're doing with Neuralink is dramatically increasing the bandwidth by many orders of magnitude. A human's bandwidth output is less than one bit per second averaged over the course of a day. There are 86,400 seconds in a day, and it's very rare for a person to produce more than 86,400 bits of output per day. You'd have to be really talking a lot or typing all day, and you might exceed that.
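The arithmetic here can be sanity-checked with a quick sketch. The typing rate and bits-per-word figures below are illustrative assumptions, not numbers from the talk:

```python
# Back-of-the-envelope check of the "less than one bit per second" claim.
# All rates below are illustrative assumptions.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# Suppose a person types 40 words per minute for 2 hours a day,
# at roughly 10 bits of information per word (a common rough estimate).
words_per_minute = 40
typing_hours = 2
bits_per_word = 10

bits_per_day = words_per_minute * 60 * typing_hours * bits_per_word
average_bps = bits_per_day / SECONDS_PER_DAY

print(f"{bits_per_day} bits/day, about {average_bps:.2f} bits/s averaged over the day")
```

Even this fairly active typist averages well under one bit per second across the whole day, which is the point of the 86,400 comparison.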

So what we're talking about here is going from maybe one bit per second to ultimately megabits and then gigabits per second. And the ability to do conceptual, consensual telepathy. Now the input to the brain is much higher, especially because of vision. Depending upon how you count it, it might be on the order of a megabit or in the megabit range.

for input primarily due to sight. But even for input, we think that can be dramatically increased to the gigabit plus level. And a lot of the thinking that we do is we take a concept in our mind and we compress that into a small number of symbols. So when you're trying to communicate with somebody else, you're actually trying to model their mind state

and then take perhaps quite a complex idea that you have, maybe even a complex image or scene or kind of mental video, and try to compress that into a few words or a few keystrokes. And it's necessarily going to be very lossy. Your ability to communicate is very limited by how fast you can talk and how fast you can type. And what we're talking about is unlocking that potential to enable you to communicate

like I said, thousands, perhaps millions of times faster than is currently possible. This is an incredibly profound breakthrough. This would be a fundamental change to what it means to be a human. So we're starting off with reducing human suffering or addressing issues that people have, say if they've been in an accident or they have some

a degenerative neural disease, so they're losing the capability to move their body, or some kind of injury, essentially. Our first product is called Telepathy, and it enables someone who has

lost the ability to command their body, to be able to communicate with a computer and move the mouse and actually operate a computer with roughly the same dexterity, ultimately much more dexterity than a human with working hands.

Then our next product is Blindsight, which will enable those who have total loss of vision, including if they've lost their eyes or the optic nerve, or who may have never seen, having been blind from birth, to be able to see.

initially low resolution, but ultimately very high resolution, and then in multiple wavelengths. So it could be like Geordi La Forge in Star Trek, and you can see in radar, you can see in infrared, ultraviolet, superhuman capabilities, cybernetic enhancement, essentially. And then along the way, this should help us understand a lot more about consciousness. What does it mean to be a conscious creature?

We will understand vastly more about the nature of consciousness as a result of this. And then ultimately, I think this helps mitigate the civilizational risk of artificial intelligence.

We actually already sort of have three layers of thinking. There's the limbic system, which is kind of your instincts; your cortical system, which is your higher-level planning and thinking; and then the tertiary layer, which is the computers and machines that you interact with, like your phone and all the applications you use. So people actually are already cyborgs. You can...

maybe have an intuitive sense for this by how much you miss your phone if you leave it behind. Leaving your phone behind is like, it's almost like missing limb syndrome. Your phone is somewhat of an extension of yourself as is your computer. So you already have this digital tertiary layer, but the bandwidth between your cortex and your digital tertiary layer is limited by speech and by

how fast you can move your fingers and how fast you can consume information visually. But I think it's actually very important for us to address that input-output bandwidth constraint in order for the collective will of humanity to match the will of artificial intelligence.




That's my intuition at least. Let's see. What this presentation is mostly about is attracting smart humans to come and work with us on this problem.

So this is not a presentation to raise money or anything like that. We're actually very well funded. We have a lot of great investors. Some of the smartest people in the world are invested in Neuralink. But we need smart humans to come here and help solve this problem. So with that, let's proceed.

Hey, everyone. My name is DJ. I'm the co-founder and president of Neuralink. And as Elon mentioned, well, actually, we're standing in the middle of our robot space. We have a stage set up, but this is actually where some of the next-generation, most advanced surgical robots are being built. So welcome to our space.

It's important to highlight that this technology is not being built in the dark. This is not a secret lab where we're not sharing any of the progress. In fact, we're sharing the progress very openly, as well as telling you exactly what we're going to be doing. And we're hoping to progress on that as diligently and as safely and as carefully as possible.

To start off, two years ago when we did our previous fundraising round, we outlined this path and timeline to First Human. And we currently have clinical trials in the U.S. for a product that we call Telepathy, which allows users to control a phone or computer purely with their thoughts. And you're going to see how we do this and the impact that this has had.

And not only have we launched this clinical trial, but as of today we have not just one but seven participants. And we also have approval to launch this trial in Canada, the UK, and the UAE. Thank you.

So before we dive into what this technology is and what we built, I wanted to quickly share a video with you guys of when our first five participants met each other for the first time. So here you go.

All right, we have everyone together. What's up, guys? Thanks, everybody, for joining. Definitely want to introduce all of you. Yeah, I'm Nolan, aka P1. My name's Alex. I am the second participant in the Neuralink study. I am Brad Smith, the ALS Cyborg, P3. My name is Mike, P4, ALS. Yeah, I'm RJ. I'm P5.

And I just, I guess I'm kind of the newest one to the team here. So yeah, appreciate it. Nolan, trailblazer. You know, somebody's got to go first, man. That was you. Appreciate that. What's been your favorite thing you've been able to do with the Neuralink so far? I've just had a good time being able to use it as I travel, flying and

Draw a little mustache on a cat. Had a lot of fun doing that. I mean, I've just had a good time playing around with it. Oh, you know what? I do know what my favorite BCI feature is. Probably not a feature, but I love WebGrid more than I love anything in my life, probably. I think I could play that game nonstop forever. Has to be Fusion 360. Being able to design parts. Design the hat logo with the BCI.

That's what's up. Pretty sweet. That's sweet. Yeah. Yeah. I have a little Arduino that takes input from my quad stick, converts it into a PPM signal to go to an RC truck.

Cool. Little rock crawler. Well, with the BCI, I wrote code to drive the plane with the quad stick. That's awesome. The best thing I like about Neuralink is being able to continue

To provide for my family and continue working. I think my favorite thing is probably being able to turn on my TV. Yeah, like the first time in two and a half years I was able to do that. So it's pretty sweet. I like shooting zombies. That's kind of nice. Excited to see what BCI's got going on. I have a question. What's your shirt say?

I do a thing called whatever I want. Now, one of the major figures of merit that we keep track of is monthly hours of independent BCI use. Effectively, are they using the BCI?

and not at the clinic, but at their home. And what we have noticed, and this is a plot of all of the different participants, first five participants and their usage per month over the course of the last year and a half. And we're averaging around 50 hours a week of usage. And in some cases, peak usage of more than 100 hours a week, which is pretty much every waking moment. Woo! Woo!

So I think it's been incredible to see all of our participants demonstrating greater independence through their use of BCI. Not only that, we've also accelerated our implantation cadence as we've amassed evidence of both clinical safety as well as value to our participants. So to date, we have four spinal cord injury participants as well as three ALS participants with the last two surgeries happening within one week of each other.

And we're just beginning. This is just tip of the iceberg. Our end goal is to really build a whole brain interface. And what do we mean by whole brain interface? We mean being able to listen to neurons everywhere, be able to write information to neurons anywhere, be able to have that fast data wireless transfer to enable that high bandwidth connection from our biological brain to the external machines.

and be able to do all of this with fully automated surgery, as well as enable 24 hours of usage. And towards that goal, we're really working on three major product types. Elon mentioned earlier that our goal is to build a generalized input-output platform and technology for the brain. So for the output portion of it, which is extremely slow through our meat sticks, as Elon calls them... (laughter)

hands that are holding the mics. We're starting out with helping people with movement disorders, where they lost the mind-body connection, either through a spinal cord injury, ALS, or a stroke, be able to regain some of that digital as well as physical independence through a product that we're building called Telepathy. And this is our opportunity to build a high-channel read, or output, device. On the input side of things, there's

opportunities for us to help people that have lost the ability to see be able to regain that sight through a product that we're calling Blindsight. And this is our opportunity to build high-channel write capabilities. And last but not least, to be able to also help people suffering from debilitating neurological dysregulation, psychiatric conditions, or neuropathic pain by inserting our electrodes into any brain region: not just the cortical layer, but the sulci as well as deeper parts of the brain, the so-called limbic system, to really enable better opportunities to regain some of that independence.

Our North Star metrics are, one, increasing the number of neurons that we can interface with, and two, expanding to many diverse areas, any part of the brain. We're starting with microfabrication, or lithography, to increase the number of neurons we can see from a single channel, and also doing mixed-signal chip design to increase the physical channel count, allowing more information to flow from the brain to the outside world. And everything we've built from day one of the company has always been read-and-write capable. And with Telepathy, our first product,

The focus has been on the read capabilities or the output. And we want to hone in on our write capability and also show that through accessing deeper regions within the visual cortex, that we can actually achieve functional vision.


We can make people happy.

Let's go. Woo!

So now, just to step you through what the product evolution is going to look like over the next three years. Today, what we have is 1,000 electrodes in the motor cortex, the small part of the brain that you see in this animation called the hand knob area, which allows participants to control computer cursors as well as gaming consoles.

Next quarter, we're planning to implant in the speech cortex to directly decode attempted words from brain signals into speech. And in 2026, not only are we going to triple the number of electrodes from 1,000 to 3,000 for more capabilities, we're planning to have our first Blindsight participant to enable navigation.

And in 2027, we're going to continue increasing channel counts, probably another triple, so 10,000 channels, and also enable, for the first time, multiple implants. So not just one in motor cortex, speech cortex, or visual cortex, but all of the above.

And finally, in 2028, our goal is to get to more than 25,000 channels per implant, have multiple of these, have the ability to access any part of the brain for psychiatric conditions, pain, dysregulation, and also start to demonstrate what it would be like to actually integrate with AI.

And all this is to say that we're really building toward a set of fundamental, foundational technologies that would allow us to have hundreds of thousands, if not millions, of channels with multiple implants for whole-brain interfaces that could not just solve these debilitating neurological conditions, but go beyond the limits of our biology. And this vertical integration and the talented team that we have at Neuralink has been, and will continue to be, the key recipe for the rapid progress that we will be making.

Just to recap real quick: the Neuralink is implanted with a precision surgical robot, and it's physically invisible. One week later, users are able to see their thoughts transform into actions. And to share more about what that experience is like, I'd like to welcome Sahej to the stage. What's up, guys? My name is Sahej. I'm from the Brain Computer Interface team here at Neuralink. And I'm going to be talking about two things today.

The first thing is what exactly is a Neuralink device capable of doing right now? And the second one is how does that actually impact the day-to-day lives of our users? Very simply put, what the Neuralink device does right now is it allows you to control devices

simply just by thinking. Now, to put that a bit more concretely, I'm about to play a video of our first user. His name is Nolan, if you remember from DJ's section. And what Nolan is doing is he's looking at a normal off-the-shelf MacBook Pro. And with his Neuralink device, as you're going to see, he's going to be able to control the cursor simply with his mind, no eye tracking, no other sensors. And what's special about this particular moment

is this is the first time someone is using a Neuralink device to fully control their cursor. This is not your ordinary brain-controlled cursor. This is actually...

record-breaking control, literally on day one, beating decades of brain-computer research. And I'm about to show you the clip of Nolan, on day one, breaking the BCI world record. - Whoa! - You just beat the world record. - Yes, 4.6! - Oh, shit. - Yeah! - Congrats! - Oh, well done, man. Well done.

He's a new world record holder. Oh, no way. On the first day. He's a little bit competitive. Yes. This wasn't surprising. It's not for one of us. I thought it was higher. I thought I would have to get to five or something. Oh, my gosh. That's crazy. It's pretty cool. Thank you.
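The "4.6" celebrated here is a cursor-control throughput in bits per second. As a rough sketch of how such a figure can be computed for a grid-selection task, the formula below follows a commonly used achieved-bitrate measure; the function name and all numbers are illustrative assumptions, not Neuralink's exact scoring:

```python
import math

def grid_bitrate(num_targets: int, correct: int, incorrect: int, seconds: float) -> float:
    """Achieved bitrate for a grid-selection task, as a sketch.

    Each selection among N targets conveys log2(N - 1) bits, and incorrect
    selections are penalized by subtracting them from the correct count.
    """
    bits_per_selection = math.log2(num_targets - 1)
    net_selections = max(correct - incorrect, 0)
    return bits_per_selection * net_selections / seconds

# Hypothetical session: a 35-target grid, 56 correct and 0 wrong picks in 60 s
print(round(grid_bitrate(35, 56, 0, 60.0), 2))
```

Under this kind of measure, higher bitrates come from selecting smaller, more numerous targets quickly and without errors, which is why a single day-one score can meaningfully be compared against prior BCI results.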

Yeah, another really fun thing you could do with the Neuralink device outside of controlling a computer cursor is you could actually plug it in through USB through a lot of different devices. And here we actually have Nolan playing Mario Kart. Now, what's special about this particular clip is Nolan is not the only cyborg playing Mario Kart in this clip. We actually have a whole community of users, as mentioned earlier. And this is literally five of our first users of Neuralink playing Mario Kart together over call. Woo!

Now, yeah. Mario Kart is cool. You're using one joystick and then you're clicking a couple of buttons to throw items. What would be even cooler is what if you could

control two joysticks simultaneously with your mind. What I'm about to show you, and I think this is the first time someone has played a first-person shooter game with a brain-computer interface, is Alex and RJ playing Call of Duty, controlling one joystick to move and the other joystick to aim, then shooting with a button. Here's Alex shooting another person.

Oh, dear God. I don't know what to do, but I want him to freaking shoot me long. RJ, Alex got you. I know, they shot me in the face. Now that we have a bit of a sense of what the BCI can do, a very important question to answer is, how does this impact the day-to-day lives of the people that use it every day? So...

I'm about to show you a clip going back to Nolan for a second. We simply asked him, randomly during a day a couple of months ago, how he enjoys using the BCI. And this is his candid reaction. I work basically all day from when I wake up. I'm trying to wake up at like 6 or 7 a.m. And I'll do work until session. I'll do session. And then I'll work until, you know,

11 p.m. or 12 a.m. Wow.

I'm learning my languages. I'm learning my math. I'm relearning all of my math. I am writing. I am doing a class that I signed up for. And I just wanted to point out that this is not something I would be able to do without the Neuralink.

Next, I want to talk a bit about Brad. You guys may already know him as the ALS cyborg; Brad also has ALS. What separates him from our other users is that he's nonverbal, so he can't speak. Why this is relevant is that, at least before the Neuralink, he relied on an eye-gaze machine to communicate. And a lot of eye-gaze machines you can't use outdoors. You really need like a dark room. So what this means is that for the last six years, since Brad was diagnosed with ALS,

he's been largely unable to leave his house. Now, with the Neuralink device, we're going to show you a clip of him with his kids at the park, shot by Ashlee Vance and the team. - Okay, you guys ready? - Yeah! - No, I was thinking... - Okay. - I am absolutely doing more with Neuralink than I was doing with Eye Gaze. I have been a Batman for a long time, but I go outside now. Going outside has been a huge blessing for me.

and I can control the computer with telepathy. - Dad's watching. Look, he's watching on the camera. Did he lose one of the arms? - The last user I want to talk about is Alex. You've seen some clips of him earlier. What's special about Alex to me is he's a fellow left-handed guy who writes in cursive all the time. And what he mentioned is since his spinal cord injury from like three, four years ago, he's been unable to just like draw or write.

And he always brags about how good his handwriting was. So we actually got to put it to the test. We gave him a robotic arm. And I think this is the first time he tried using the robotic arm to write anything. And this is a sped up version of writing at the convoy trial and drawing something.

Now, yeah, controlling a robotic arm is cool, but this one has a clamp. And what would be cooler is if you could decode the actual fingers, the actual wrist, all the muscles of the hand in real time. Just in the past couple weeks, we were able to do that with Alex, and you're about to see him and his uncle playing a game. Rock, paper, scissors, shoot. Rock, paper, scissors, shoot.

Rock, paper, scissors, shoot. Rock, paper, scissors, shoot. That was scissors. Thumb war? Thumb war.

Cool. Controlling, yeah, that's pretty dope. I don't know. And controlling a robotic hand on screen is obviously not super helpful for most people. Fortunately, we have connections with Tesla who have the Optimus hand and we're actually actively working

on giving Alex an Optimus hand so that he could actually control it in his real life. And here's the actual replay of the end of that video using Alex's neural signals on an Optimus hand. Sean, if you want to play that.



Yeah. Actually, let me maybe add a few things to that, which is... So...

As we advance the Neuralink devices, you should be able to actually have full body control and sensors from an Optimus robot. So you could basically inhabit an Optimus robot. It's not just the hand, the whole thing. So you could basically mentally remote into an Optimus robot and...

It'd be kind of cool. The future is going to be weird. But pretty cool. And another thing that could be done is, for people that have, say, lost a limb, lost an arm or a leg or something like that, we think in the future we'll be able to attach an Optimus arm or legs. I remember that scene from

Star Wars where Luke Skywalker gets his hand, you know, chopped off with a lightsaber and he gets kind of a robot hand. And I think that's the kind of thing that we'll be able to do in the future working with Neuralink and Tesla. So it goes far beyond just operating a robot hand but replacing limbs and having kind of a whole body robot experience.

And then I think another thing that will be possible, I think very likely in the future, is to be able to bridge the gap

where the damaged neurons are. So you can take the signal from the brain and transmit that signal past where the neurons are damaged or severed to the rest of the body. So you could reanimate the body, so that if you have a Neuralink implant in the brain and then one in the spinal cord, you can actually bridge the signals and you could walk again

and have full body functionality. Obviously, that's what people would prefer, to be clear; we realize that would be the preferred outcome. So even if you have a broken neck, we believe, and at this point I'd say I'm fairly confident, that at some point in the future we'll be able to restore full body functionality. Yeah, so hello, everyone. My name is Nir, and I'm leading the BCI Application Group.

And the videos that Sahej just shared with you, I've probably watched them maybe thousands of times, but I still get goosebumps every time I watch them. And I think this is one of the cool perks when you get a job here at Neuralink: you might get goosebumps every week, or maybe every few days in the good weeks. And this is really fun. As an engineer,

it's really cool because you can build a new feature, a new machine learning model, a new software feature, and test it on the same day with a participant and get feedback. And you already saw with our first device, Telepathy, that we can address the very diverse needs of our different users, from moving a cursor, to playing games, to moving a robotic arm with multiple fingers.

And we could not have done it without the Neuralink device. The Neuralink device gives us something that no other device can: single-neuron recordings from thousands of channels simultaneously. The Telepathy product is basically recording the neural activity from the small area in the motor cortex that's involved in the execution of hand and arm movements. But if we go only about two or three inches below, there's another brain area that's involved in the execution of speech.

And with the same device, the same machine learning model architecture, the same software pipeline, and the same surgical robot, we can have a new application. And we can do it very quickly. It's really interesting: if we can decode someone's intention to speak silently, non-vocal communication, we can use that to revolutionize the way we interact with computers, with technology, and with information. Instead of typing with your finger, or moving the mouse, or talking to your phone,

you'll be able to interact with computers at the speed of thought. It will make this interaction much faster and much more intuitive. The computers will understand what you want to do. And we can also expand that to AI. We can build an interface with AI through which you'll be able to retrieve information and store your thoughts, anywhere, anytime, privately and silently.

Again, that's because we build a fundamental technology platform and we do everything in-house. We own the entire stack, from neurons to pixels on the user's computer. Now I'll pass it to Ruz to talk about UI for BCI. Thank you, Nir. Each spike that our implant detects goes on a fairly remarkable journey to ultimately form a pixel on a participant's display. And that experience starts with, of course, unboxing:

the very first time that a participant pairs to and meets their implant, this invisible part of their body, and sees their own spikes materialize across the display. From there, they'll go into body mapping and actually imagine moving their arm again and get a feel for what feels natural to them and what doesn't. And they'll take that into calibration, using one of those motions to actually move a cursor again.

iteratively refining their control as they go throughout this process, until finally they're teleported back to their desktop and can experience the magic of neural control for the very first time. And our control interfaces are where the OS integration that we do really shines, letting us adapt both

control and feedback for every interaction. So for familiar interactions like scrolling, we can surface an indicator over the scrollable parts of the display, add a touch of gravity to automatically pop a participant's cursor onto that indicator as they approach, show the actual velocities that we decode inside of it, and add a bit of momentum to those velocities to carry them forward as they glide across the page. There are also unique interactions that we need to solve for in this space.
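The gravity and momentum behaviors described here can be sketched as a simple per-frame update rule. The function below is an invented illustration of the idea, not Neuralink's implementation, and its constants are arbitrary:

```python
def update_cursor(pos, vel, decoded_vel, target=None,
                  gravity=0.15, momentum=0.85, dt=1.0):
    """One frame of cursor motion with 'momentum' and target 'gravity'.

    pos, vel, decoded_vel, target are (x, y) tuples. The constants and
    structure are illustrative only, not Neuralink's values.
    """
    # Blend freshly decoded intent with carried-over velocity so the
    # cursor glides rather than jittering frame to frame.
    vx = momentum * vel[0] + (1 - momentum) * decoded_vel[0]
    vy = momentum * vel[1] + (1 - momentum) * decoded_vel[1]
    if target is not None:
        # A touch of gravity: accelerate toward the nearby UI target.
        vx += gravity * (target[0] - pos[0])
        vy += gravity * (target[1] - pos[1])
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)

# With no decoded input and a target to the right, the cursor should
# settle onto the target over successive frames.
pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(50):
    pos, vel = update_cursor(pos, vel, (0.0, 0.0), target=(10.0, 0.0))
print(pos)
```

The momentum term smooths the decoded velocities while the gravity term pulls the cursor onto a nearby target, the two effects described above.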

For example, when a participant is watching a movie or just talking to somebody next to them, the brain is very active still, and that activity can actually induce motion in the cursor, distracting them from that moment. So when a participant wants to just get their cursor out of the way, they can push it into the edge of the display to park it there. And of course, we add gravity to sort of hold it still, but they can push it out with either just a firm push or, in this case, a gesture.

And of course, it goes without saying that all of these control interfaces are designed hand in hand with our participants. So huge shout-out to both Nolan and Brad for helping us design these two. And those control interfaces, of course, extend to typing. We have a great software keyboard that does everything you'd expect it to, popping up when a participant clicks on a text field, giving them feedback about the click on the surface of the key, and supporting both dictation and swipe.

Hi everyone, I'm Harrison, an ML engineer here at Neuralink. And I must say, being an ML engineer at Neuralink is a bit like being a kid in a candy store.

When you think of the inputs to most ML systems out there, you might think of pixels, of tokens, or of a user's Netflix watch history. The input to our systems is a little different. It is pure, raw brain power. And when we think about the ML systems we can build here at Neuralink, really we're limited by our imagination and our creativity. There's no reason our ML systems can't do anything that the human brain can do, such as controlling a phone, typing, or even gaming.
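To give a flavor of what decoding motor intent can look like, here is a toy version of the classic setup: binned firing rates regressed onto intended cursor velocity with ridge regression. Every detail here (channel count, linear tuning model, noise level, penalty) is invented for illustration and says nothing about Neuralink's actual decoder:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy decoding setup: binned spike rates from 32 channels are mapped to
# 2-D cursor velocity with ridge regression.
n_samples, n_channels = 500, 32
true_weights = rng.normal(0.0, 0.3, size=(n_channels, 2))  # hidden tuning

velocity = rng.normal(size=(n_samples, 2))                  # intended velocity
rates = velocity @ true_weights.T + rng.normal(0.0, 0.1, size=(n_samples, n_channels))

# Ridge regression: W = (X^T X + lam I)^-1 X^T Y
lam = 1.0
X, Y = rates, velocity
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

decoded = X @ W
r = float(np.corrcoef(decoded[:, 0], Y[:, 0])[0, 1])
print(f"decoded-vs-intended correlation (x axis): {r:.3f}")
```

With many channels and a near-linear relationship, even this simple decoder recovers the intended velocity well; the hard parts in practice are the labeling and drift problems discussed later in the talk.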

Right here to my left is actual footage of Alex, one of our participants, playing a first-person shooter against RJ, another one of our participants. Now, for those unfamiliar with first-person shooters, this is not a trivial feat. It requires two fully independent joysticks or four continuous degrees of control, as well as multiple reliable buttons. Now, contrary to popular belief, the Neuralink does not simply read people's minds.

It's simply reading neuronal activations corresponding to motor intent. So one of the fun challenges with this project was figuring out which motions were going to be mapped to the joysticks. We started with the typical left thumb and right thumb, but quickly found that the dominant hand overshadowed the non-dominant hand.

My personal favorite was when we had one of our participants imagine walking for the left joystick and aiming for the right joystick. So in game, they were simply doing naturalistic motions, like you might do in virtual reality in Ready Player One. And that was really cool to watch. What we ended up with was the thumb for the left joystick and the wrist for the right joystick. And I challenge the audience to try to replicate those motions. I'm really in awe of them being able to pull this off.

I want to talk a bit about the progress on our cursor calibration experience. To my left here, you can see RJ completing his first ever cursor calibration: from a redesigned open-loop flow, where we first gather information about his intent and how to map the neural activity, to the first time he controls a cursor, to the final product where he has smooth and fluid control of his computer. And most remarkably, this experience took only 15 minutes from start to finish.

15 minutes from no control to fluid computer use. Contrast that to a year and a half ago with P1, where it took multiple hours to get to the same level of control, with several engineers standing around a table pulling their hair out. There was virtually no need for Neuralink engineers to even be at this session. This was basically an out-of-the-box experience for our participant.

And even more remarkably, we're continuing to smash day-one records, with RJ able to achieve seven bits per second (BPS) on his very first day with the Neuralink. Now, such an effective and efficient calibration process is only made possible by high-fidelity estimations of a user's intention, or labels. And to briefly illustrate just how challenging a problem that is, this is an animation of myself trying to draw circles on my desktop with a mouse.
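For context, cursor-control performance in the BCI literature is usually reported as throughput in bits per second using a Fitts' law index of difficulty. Whether Neuralink computes its BPS figure exactly this way isn't stated in the talk, so treat this as the conventional definition rather than their metric:

```python
import math

def fitts_throughput(distance, width, movement_time_s):
    """Fitts-style throughput in bits per second.

    Uses the Shannon formulation ID = log2(D/W + 1); throughput = ID / MT.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return index_of_difficulty / movement_time_s

# Hypothetical trial: a target 500 px away and 50 px wide, acquired in
# half a second, works out to log2(11) / 0.5, roughly 6.9 bits per second.
print(round(fitts_throughput(500, 50, 0.5), 1))
```

Under this convention, higher throughput means acquiring smaller, more distant targets faster.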

Now, the task was simple: draw uniform circles at a constant speed, repeatedly. And as you can see from that animation, I am horrible at it. Even though my intent was obvious and unambiguous, the execution was really poor. There is a ton of variation in both speed and the shape itself.

To visualize this a little differently, each row here is one of those circles unwound in time with synchronized starts. And you can just see how much variation there is in the timing of each circle as well as what I'm doing at any given point in time.

Orthogonal to the labeling problem is neural non-stationarity, or the tendency of neural signals to drift over time.

And I think that's honestly a beautiful thing, right? If your neural signals didn't drift, you couldn't grow. When you wake up the next day, you're not the same person you were the day before. You've learned, you've grown, you've changed. And so too must your neural data change. This animation here is a simple illustration of the representation learned by the decoder and how it drifts the further away we get from the day it was trained. This is one of the key challenges we need to solve here at Neuralink to unlock a fluid, product-level experience for our users.
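A toy model makes the drift problem concrete: train a linear decoder on day 0, let the neural representation rotate slowly over time, and watch the frozen decoder's error grow. The drift rate and two-dimensional setup are arbitrary illustration choices, not a model of real cortical drift:

```python
import numpy as np

rng = np.random.default_rng(1)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Toy model: 2-D "neural features" are a rotated view of the intended
# 2-D velocity, and the decoder trained on day 0 is the inverse map.
decoder = np.linalg.inv(rotation(0.0))  # perfect on day 0

def decode_error(day, drift_per_day=0.05, n=200):
    """Mean error of the frozen day-0 decoder on a later day."""
    true_map = rotation(drift_per_day * day)  # slow representational drift
    intents = rng.normal(size=(n, 2))
    features = intents @ true_map.T
    decoded = features @ decoder.T
    return float(np.mean(np.linalg.norm(decoded - intents, axis=1)))

errors = [decode_error(d) for d in (0, 10, 30)]
print(errors)  # error grows as the representation drifts from day 0
```

Keeping the decoder fresh without constant recalibration, via adaptive or self-supervised updates, is one standard way this challenge is attacked in the field.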

Hey, everyone. My name is Joey. Blindsight is our project to build a visual prosthesis to help the blind see again. Users would wear a pair of glasses with an embedded camera and receive an implant in their visual cortex. Scenes from the environment are recorded by the camera and processed into patterns of stimulation delivered to the brain, causing visual perception

and restoring functionality. Now, Blindsight will be enabled by placing our implant into the visual cortex. This is a new brain area for us, and it brings new opportunities and challenges. The surface of the brain for visual cortex represents just a few degrees of visual angle at the center of the visual field. Larger fields of view are represented deep within the cortical folds of the calcarine fissure.

Our threads are able to access these deeper structures, providing the possibility of restoring vision over a functionally useful visual field. So the N1 implant has had experimental stimulation capabilities for quite some time, but our new S2 chip is designed from the ground up for stimulation. It provides over 1600 channels of electrical stimulation,

high dynamic range recording capabilities, and a wide range of micro-stimulation currents and voltages. We can achieve these capabilities because we are vertically integrated and we designed this custom ASIC in-house. Similarly, we design and fabricate our electrode threads in-house. And here you can see one of our standard threads designed for recording in an electron micrograph.

For Blindsight, our requirements are a little different, and our vertical integration allows us to rapidly iterate on the design and manufacturing of these threads for this new purpose. So here I'm using red arrows to highlight the electrode contacts, which are optimized for stimulation. And as you can see, they're a little bit larger, which results in a lower electrical impedance for safe and effective charge delivery, which is important for Blindsight. Now, how can we calibrate our implant for Blindsight? So here's one way.
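The calibration loop described next, stimulate a few channels, have the user point at the percepts, repeat, amounts to building a channel-to-visual-field map. A minimal sketch of that idea, with invented channel IDs and angles:

```python
import statistics

def calibrate_channels(trials):
    """Estimate each channel's visual-field location from pointing data.

    trials: list of (channel_id, (azimuth_deg, elevation_deg)) pairs
    from repeated stimulate-and-point rounds. Purely illustrative.
    """
    by_channel = {}
    for channel, point in trials:
        by_channel.setdefault(channel, []).append(point)
    # Average the pointed-at locations per channel.
    return {
        ch: (statistics.mean(p[0] for p in pts),
             statistics.mean(p[1] for p in pts))
        for ch, pts in by_channel.items()
    }

# Hypothetical pointing data: channel 7's percept lands around (2 deg, -1 deg).
data = [(7, (2.1, -0.9)), (7, (1.9, -1.1)), (12, (5.0, 3.0))]
print(calibrate_channels(data))
```

Averaging repeated pointing trials per channel suppresses motor and eye-tracking noise in the estimate.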

We stimulate on the array, picking, say, three different channels. The user perceives something, say three spots of light somewhere in their visual field, and points at them. We track their arm and eye movements and repeat this process for each of the channels on the array. And here's what a simulated example of blindsight vision could look like after calibration. Now, I showed you how, for Blindsight, we need to insert threads deeper into the brain than we have previously,

And doing this requires state-of-the-art medical imaging. So we worked with Siemens to get some of the best scanners on Earth. We built out our imaging core from scratch in the past year. Actually, it was faster than that. It was about four months from dirt to done. Since bringing the scanners online, we've scanned over 50 internal participants, building out a database of human structural and functional anatomy. What can we do with the imaging information from these scanners?

So medical imaging can be used for surgical placement. It lets us parcel out brain regions by their function, and we use our imaging capabilities to refine the placement for Telepathy. It also gives us the capability to target new brain regions for future products such as Blindsight or a speech prosthesis. And we're working toward more capabilities, like one-click automated planning of surgery, from functional images to robot insertion targets.

Here you can see a screen capture from one of our in-house tools for end-to-end surgical planning. You can see a region of motor cortex known as the hand knob, and the thread trajectory plans that will be sent directly to the robot. This is a really incredible degree of automation that's only possible because we're controlling the system from one end to the other.

My name is John, and I lead the robot mechanical team. This is our current R1 robot. It was used to implant the first seven participants. This robot works really well, but it has a few flaws, one of which is that the cycle time is rather slow. Inserting each thread takes, in a best-case scenario, 17 seconds. And in many cases, external disturbances cause us to have to retry grasping that thread and then reinsert it.

To scale the number of neurons we access, through higher channel counts and increased numbers of threads, we need a much faster cycle time. So let me introduce our next generation robot, which is right here.

By rethinking the way that we hold the implant, holding it directly on the robot head, we're able to achieve an 11 times cycle time improvement, so each thread takes one and a half seconds. We also made a number of surgical workflow improvements by deleting the separate operator station and implant stand.
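Those cycle times are easy to sanity-check with some quick arithmetic. The 64-thread count below is an assumption for illustration (the published N1 figure), and retries and other workflow steps are ignored:

```python
# Rough thread-insertion arithmetic, best-case times only.
threads = 64              # assumed thread count per implant
old_per_thread_s = 17.0   # best case on the current R1 robot
new_per_thread_s = 1.5    # next-generation robot

old_total_min = threads * old_per_thread_s / 60
new_total_min = threads * new_per_thread_s / 60
speedup = old_per_thread_s / new_per_thread_s

print(f"{old_total_min:.1f} min -> {new_total_min:.1f} min "
      f"({speedup:.1f}x per-thread speedup)")
```

So under these assumptions, insertion drops from roughly eighteen minutes of pure thread time to under two, consistent with the claimed 11x figure (17 / 1.5 is about 11.3).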

Now, the outside of the robot looks pretty similar between the two, but it's what's inside that really counts. Each system has been redesigned from the ground up with a focus on reliability, manufacturability, serviceability, and using a lot of our vertical integration techniques, it's enabled us to have a lot more control of the system end-to-end. Now, that fast cycle time doesn't mean much if it's not compatible with a significant portion of the human population.

Prior to each surgery, we scan a participant's anatomy and ensure that they will be compatible with the robot and vice versa. Unfortunately, the robot isn't compatible with everyone, so we had to extend the reach of the needle.

in the next generation robot, and now we're compatible with more than 99% of the human population. We've also increased the depth to which the needle can insert threads. It can now reach more than 50 millimeters from the surface of the brain, accessing new areas and enabling new indications. We have to produce a ton of custom sterile components for each surgery; we actually supply more than 20 of these parts.

Many of these parts are made through traditional CNC manufacturing capabilities, which we do just on the other side of this wall actually, and some custom-developed processes like this femtosecond laser milling used to manufacture the tip of the needle.

Now, these processes take quite a bit of time, effort, and cost. So let's take a look at how we're reducing cost and time for one of the components. The current needle cartridge has a total cycle time of about 24 hours, and the machined components cost about $350. The final assembly is performed by a set of highly skilled technicians. They have to glue a 150-micron-diameter cannula onto a wire-EDM-machined

stainless steel base plate. They have to electropolish a 40-micron wire into a sharp taper, and then they have to thread that 40-micron wire into a 60-micron hole in the cannula. This is done manually. And then they finally have to laser weld all the components together.

The next generation needle cartridge takes only 30 minutes of cycle time and $15 in components. We were able to delete the wire-EDM-machined base plate and the cannula gluing step by switching to an insert-molded component; we get a box of a thousand of these base plates with the cannulas already installed, for maybe five or ten dollars apiece. We also deleted the electropolishing step with a revised needle tip geometry, which is also compatible with inserting the threads through the dura.

We have a few revised manufacturing techniques to delete the manual threading, basically using a funnel, rather simple, but it has had a big impact. And we're able to delete the laser welding by using crimping. Hi, I'm Julian. I'm one of the leads on the implant team. So the way humans communicate today, if they want to output information, is by using their hands and their voice, as I'm doing right now.

And if you want to receive information, you use your ears and your eyes. And of course, that's how you're receiving this very talk. But we've built this implant. And this implant is very special because it is the first time that we're able to add a completely new mode of data transfer into and out of the brain. If you look at this device in a nutshell, it's really just sampling voltages in the brain and sending them over radio.

But if you zoom out and look at the system from end to end, what you actually see is that we're connecting your brain, a biological neural net, to a machine learning model, a silicon neural net, on the right-hand side. And

I actually think this is really elegant because the machine learning model on the right hand side is in fact inspired by neurons on the left hand side. And so in some sense, we're really extending the fundamental substrate of the brain. For the first time, we're able to do this in a mass market product. That's a very, very special piece of hardware. So these are some of the first implants that we ever built.

They have electrodes that were made with our in-house lithography tools, and custom ASICs that we also designed in-house. And this was really a platform for us to develop the technology that allows us to sense microvolt-level signals in the brain across thousands of channels simultaneously. We learned a lot from these, but as you'll notice in the right two images, there are USB-C connectors on these devices. These were not really the most implantable implants.


This next set of images are the wireless implants. And there was a complete evolution that we went through to add the battery, the antenna, the radio, and to make it actually fully implantable. Once it's implanted, it's completely invisible. It's very compact, it's modular, and it's a general platform that you can use in many places in the brain. Going from that top row to the bottom row is very challenging.

The implant you see on the bottom right here is in fact the device that we have working in seven participants today, and it's augmenting their brain every day and restoring their autonomy. But getting to that point involved a huge number of formidable engineering challenges. We first had to make a hermetic enclosure passing a thousand separate conductors through the enclosure of the device. We had to figure out how to make charging seamless and work with very tight thermal constraints in a very, very small area.

Then we also had to scale up our testing infrastructure so that we could support large-scale manufacturing of very safe devices and have confidence in our iteration cycle. What's next? We're going to be increasing our manufacturing so that we don't just produce a small number of implants per year, but thousands, and then eventually millions, of implants per year. We're also going to be increasing channel count. More channels means more neurons are sensed, which means more capabilities.

In some sense, we often think a lot about the Moore's law of neurons that we're interacting with. And in the same way that Moore's law propelled forward many subsequent revolutions in computing, we think that sensing more and more neurons will also completely redefine how we interact with computers and reality at large. I want to leave you with one final thought. When I was a child, I used a 56 kilobit modem to access the internet.

If you remember what it was like, you would go to a website... 56? You're lucky. You're a lucky bastard. When I was a child, we had acoustic couplers. Oh, yeah, okay. So they'd just beep at each other. Yeah, the first modem was the acoustic coupler. Incredible device, honestly. But then, I guess if you're my age, you started with a 56-kilobit modem. And...

You would go to a website and there would be an image and it would scroll slowly. It was loading pixel by pixel on the screen. So that's what it's like to be bandwidth limited. Now imagine using the current internet with that same modem. It's inconceivable. It would be impossible to do. So

What broadband internet did to the 56-kilobit modem is what this hardware is going to do to the brain. We are trying to drastically expand the amount of bandwidth that you have access to, giving you a much richer experience and superhuman capabilities. So I guess just to close out and recap today: Neuralink is working reliably, has already changed the lives of seven participants, and is making a real impact.

And our next milestone is to go to market and enable scaling of this technology to thousands of people, as well as to expand functionality beyond just movement: enabling sophisticated robotic arm control, speech, vision to give sight back, and even getting to the speed of thought.

I hope you got a good sample of our technology stack and the challenges that we face. And I'd like to hand the mic over to Elon for any closing remarks. Well, we're trying to give you a sense of the depth of talent at Neuralink. There's a lot of really smart people working on a lot of important problems. This is one of the most difficult things to do.

To actually succeed in creating this and have it work, at scale, reliably, and available to millions of people at an affordable price, is a super hard problem, and we'd love to have you come join and help us solve it. Thank you.

Hey, thank you so much for listening today. I really do appreciate your support. If you could take a second and hit the subscribe or the follow button on whatever podcast platform that you're listening on right now, I'd greatly appreciate it. It helps out the show tremendously and you'll never miss an episode. And each episode is about 10 minutes or less to get you caught up quickly. And please, if you want to support the show even more, go to patreon.com slash stage zero.

And please take care of yourselves and each other. And I'll see you tomorrow.