This is the New Yorker Radio Hour, a co-production of WNYC Studios and The New Yorker. Welcome to the New Yorker Radio Hour. I'm David Remnick. NVIDIA is a tech colossus. It's as potentially important to the way we live our lives in the near future as Apple or Google, maybe even more so.
NVIDIA makes microchips. In fact, it's all but cornered the market on the chips that are essential for the use of AI, for artificial intelligence systems like ChatGPT. And just recently, it was rated the most valuable company ever. But this is not primarily a business story. It's a story about the United States and China, about who exactly is building the technology that shapes the future, our future.
About a year ago, journalist Stephen Witt wrote a stunning portrait in The New Yorker of NVIDIA and its co-founder, Jensen Huang. Witt's new book on the subject is called The Thinking Machine. Stephen, in all the years we've been doing this show, I don't think we've ever sat down to talk about a microchip company and the CEO of that microchip company, and yet...
NVIDIA is incredibly important to all of our futures in some way or another. Explain what NVIDIA is and why it's so important. When you interact with a system like ChatGPT, like, say, everyone rendering their image right now as Studio Ghibli-style anime, it takes your request and sends it through a giant broadband data pipe to a huge data center. And inside that data center is a warehouse full of computing equipment
all of which is running NVIDIA microchips. It's all running NVIDIA hardware. Your request is processed there and then sent back to you in the form of an image or a term paper or a meme or a medical diagnosis or whatever you asked for. NVIDIA was there at the beginning of AI. They really kind of made these systems work for the first time.
We think of AI as a software revolution, built on something called neural nets. But AI is also a hardware revolution. And these microchips that NVIDIA designed used a process called parallel computing, which meant that they split mathematical problems up into a bunch of pieces and then solved them all at once.
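To make that parallel-computing idea a bit more concrete, here is a minimal sketch, not anything from NVIDIA or from the interview, of splitting one problem into pieces and solving them at the same time. A few CPU worker processes in ordinary Python stand in for the thousands of cores on a GPU:

```python
# A rough illustration of parallel computing as described above:
# split one big mathematical problem into pieces and solve the
# pieces simultaneously, then combine the partial answers.
from multiprocessing import Pool

def partial_sum(chunk):
    """Solve one small piece of the overall problem."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))

    # Split the problem into four pieces...
    chunks = [data[i::4] for i in range(4)]

    # ...and solve them all at once on separate worker processes.
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))

    # Same answer as computing sum(x * x for x in data) serially.
    print(total)
```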
Now, it turned out, and nobody expected this, nobody saw this coming, this software, the neural networks, and this hardware, the parallel computing, worked perfectly together. And they needed each other to succeed. And this is really what made the AI revolution possible. So, what you're telling me, there would be no artificial intelligence. There would be no. Not on this level, not on this mass level, even in its early days now.
Without NVIDIA and without the product that they produce? Without NVIDIA, we would be about 10 years behind on AI. The first AI system that we really would consider a modern AI system, so kind of like the Wright Brothers airplane of AI, was a system that a guy built in his bedroom, a guy named Alex Krizhevsky working at the University of Toronto. And when was that? That was in 2011 and 2012. And
In 2012, he built this system and used two NVIDIA gaming cards, like the ones you would buy at Best Buy, retail video game cards to make essentially a jerry-rigged low-budget supercomputer to run the training for this neural net. And this broke all the barriers in AI. So as a result, all of the early AI pioneers and scientists gravitated to the NVIDIA ecosystem and built all of modern AI around it.
So tell me about the origins of NVIDIA and its co-founder, Jensen Huang. He's a ferocious entrepreneur. He was born in Taiwan, moved to the United States when he was about 10 years old, and has a degree in electrical engineering. And when he was 30, he founded this company to make video game equipment.
because that's where they thought the market was. And in fact, NVIDIA did not have a great reputation. They were really viewed as a second-tier company for about 20 years. Second-tier to whom? Second-tier to Intel. Second-tier to Qualcomm. Second-tier to all the kind of like big microchip majors that you would have heard of. And Intel and Qualcomm weren't working on the possibility of AI the way NVIDIA was? In fact, even NVIDIA wasn't working on it.
It came as a surprise to them that AI worked so well in their system. NVIDIA was looking for something like this. They couldn't have told you it was AI specifically, but they were certain that if they made these powerful systems for computer scientists somewhere down the road, they would unlock some incredible functionality. Does Jensen Huang's success...
come from his business acumen or from technical skills that he learned as an engineer? Technical skills. His technical skills. He is a world-class computer scientist.
world-class engineer. And in fact, he runs his company like an engineer. He's thinking, what are computers capable of doing? What can I make them do that's never been done before? And then downstream of that somewhere, profits will appear. And so this is how NVIDIA works, and this is why they've become so successful. I use ChatGPT like an idiot, right? I just play around with it, and I ask it a question as, you know...
How much does this ballplayer make? Or, you know, what happened in 1965? Very simple questions. And what's spit back at me are kind of wiki-like answers. Obviously, there are much more sophisticated ways to use even ChatGPT, much less more sophisticated programs. What is NVIDIA anticipating? And does it own the market?
I think Jensen is anticipating that these systems will kind of enter robots in the real world. So Jensen is building essentially a giant digital playground called Omniverse, where these robots can learn to move around in this kind of digital simulacrum. And once they've learned how to do that, he's going to download those brains and stick them into kind of real-world machines, and they're going to move around.
I think he thinks this is in the five-to-ten-year time frame, although it's already starting to happen with automobiles and other kind of more primitive robots. OK, this is where we really have to break down his vision of the world that he's seeing five years down the road. What is life going to be like, in his terms? What is the world that he's seeing? So Jensen hates science fiction.
And in fact, has never read a science fiction book, he told me. I think what he's seeing today is that within the next five years, well, first, almost all sorts of entertainment will be intermediated by AI. So anything you see on a screen is going to be enhanced or passed through some kind of AI filter on the fly. What does that mean?
You know, so if I'm talking to you and I'm feeling that my face isn't looking that great today, it's going to sort of very subtly turn on the Facetune thing to make me look better. You know, my voice will maybe sound a little different. I mean, these systems are already in place, but they're going to get more sophisticated. I think for stuff like planning a vacation, you're just going to ask the AI agent to go bring you back some options. You're going to see the one you like and you're going to click yes, and then it's going to do all the work. It's going to book all the flights. Yeah.
For something like a medical diagnosis, I think the doctor will consult with kind of an AI avatar and return with a perfect diagnosis. And then moving forward into the future, Jensen currently is trying to train robots on more difficult tasks, like washing dishes without breaking them. I think probably they're going to have something like that online within the next two or three years. And you can imagine demand for something like that will be pretty substantial. Yeah.
The dishwashing robot? Oh, yeah. So Fei-Fei Li at Stanford did a survey of thousands of people, and she asked them one question. How much would you benefit if a robot did this for you? At the bottom of the list was opening presents. So nobody...
Nobody wants a robot to open their presents for them. Okay, fair enough. At the very top of the list was cleaning the toilet and washing the dishes. What else? What else is up there? Cleaning up after a wild party. That was the other one. So if you think about throwing a big party, you know, kind of the reason you don't do that is because the place is going to be trashed afterwards. So you're going to have...
robots like in the Jetsons. You're not old enough to remember it, but the Jetsons were a cartoon about the future, and it had a robot house cleaner also dressed up like a French maid of long ago...
pre-feminism mythology. And that's what it looked like. But what you're describing isn't all that different except for the French maid bit. I think, or rather they think, it's going to be at least a multi-trillion-dollar industry. And Jensen wants to be right in the middle of it. He wants to build that thing's brain. That's where AI is going? That's where he thinks AI is going. Dishwashing? Dishwashing. I mean, think about it. It's a huge...
It's a huge market. I mean, it's going everywhere. But the consumer home use, the thing that people, when you ask them, what do you really want a robot for? Domestic cleaning. Nice not to wash the dishes anymore. And what jobs will be eliminated other than those? All of them? I mean, this is the question that I kind of put to Jensen. Like, I can't imagine, David, what we're going to do.
I mean, I think maybe like live theater. I guess we'll play video games with little triangles. Yeah, we'll play video games or we'll interact with the AI or maybe like
In-person events, live theater will suddenly be more exciting. Maybe that's going to happen. You're making me glad that soon I'll be dead. Well, and it's funny because this question has absolutely split the AI community. Jensen is an optimist. He thinks this is the greatest thing since the invention of electricity. And in fact, that's the comparison he makes. Not just the amelioration of labor, but the elimination of labor. Complete elimination of almost all forms of labor.
We published a profile of Geoffrey Hinton, who is deep into the AI world. This is a piece by Joshua Rothman, who looks at this future that you're describing as a dystopia. And he's, you know, as a creator of AI, a godfather of AI even,
He is extremely wary of this future. What you're telling me is that the head of NVIDIA is the absolute opposite. Hinton is the godfather of the software. He thinks that we are in big trouble. He quit his job at Google to warn humanity full time about the risks of these systems.
Jensen is the godfather of AI hardware. He thinks Hinton is crazy. He thinks Hinton is being ridiculous, and it's as pointless to argue against this as it would be to argue against, say, electricity or the Industrial Revolution or agriculture. I'll tell you, Jensen's winning. But it sounds like he's both an absolutist and a complete utopian. Did he convince you, Stephen? No.
Yeah, so when I brought these points up to him, Jensen started screaming at me. I showed him –
You know, I don't think he can help me. He started screaming at you? Oh, yeah. He did not like... Well, I should say I repeatedly questioned Jensen on this in every interview because I thought it was such an important question. And he was very dismissive of me. But I wanted to kind of push him a little, so I found this old clip of Arthur C. Clarke at the dawn of kind of the 2001 era. A Space Odyssey. From 1964, talking about how in the future machines may be smarter than men. And I wanted to show this to Jensen, and he...
it just made him so mad. Why? I don't... I mean, I think... But it confirmed his own prejudices and vision. In fact, Arthur C. Clarke was optimistic, too. This was the really surprising thing. But I think that he... Well, up to a point. Things don't end well. Things don't end well in that movie, as I recall. You know, Jensen...
You know, was like, I have never read an Arthur C. Clarke book. His exact phrase was, I didn't read those effing books. I mean, except he swore. He just was not having it. He's completely candid. No BS. Absolutely speaks his mind. And this is really rare for a tech CEO. What are his politics? None. He's not in that kind of right-leaning libertarian Silicon Valley camp. Jensen was the most...
Jensen was the most powerful figure in Silicon Valley not to attend Trump's inauguration. As far as I can tell, he has never made a political donation to a candidate or taken a political stance in his life. Because he wants to avoid this or because he doesn't have politics at all? I think he thinks politics is tribal and irrational.
We're talking about an engineer. We're talking about a guy who reasons forward from data and is willing to change his mind wherever the data takes him. That's just not how politics works. I'm talking with Stephen Witt. His new book about NVIDIA is called The Thinking Machine. We'll continue in a moment. This is the New Yorker Radio Hour.
This is the New Yorker Radio Hour, and I'm David Remnick. I've been speaking with Stephen Witt, the tech journalist who's just published a new book about NVIDIA and its CEO, Jensen Huang. NVIDIA makes the microchips that are powering the AI revolution. It's so integral to AI as we know it that NVIDIA is one of the most valuable companies on the planet, up there with Apple and Microsoft. I'll continue my conversation with Stephen Witt.
Now, NVIDIA's stock market value was just above $3.5 trillion at the start of the year. That's the highest valuation of any company ever. In January, it also saw the largest single-day loss in stock market history. That's a $600 billion loss. So what happened?
That was due to a new Chinese AI model called DeepSeek, which ran much more efficiently or trained much more efficiently than any model that had come before. And people at first thought that maybe this would mean there would be less demand for NVIDIA's microchips. But Jensen has said that the market got it completely wrong. And in fact, they recouped all of that. It went all the way back up within a few weeks afterwards. So what actually happened? Because as I understand it, DeepSeek...
It seems to be a cheaper AI option for one, but it also uses NVIDIA chips. So why was there such a panic about it? There was a panic because it used an older version of NVIDIA chips. It used antiquated NVIDIA chips, not the cutting-edge ones. And so they retooled these old chips to get state-of-the-art performance, which really shocked and surprised a lot of people. And was that level of performance validated on a level that
You would believe, much less Huang would believe? I think so. It seems like the results are legit. You know, and NVIDIA was the most valuable corporation on earth. And so it's going to have these kind of wild swings. For a long time, all of his manufacturing came from the Taiwan Semiconductor Manufacturing Company. They're really the only ones who had the capability to build these advanced microchips. So they would outsource production to Taiwan. Why couldn't he bring it here?
Because Taiwanese engineers work 14 hours a day, six to seven days a week, and they're incredibly dedicated and incredibly gifted. Computing kind of segmented into two spheres. All of the hardware was going to be built in Asia, and all the software was going to be built in Silicon Valley. And each side was just going to pursue its kind of competitive advantage. And that's unchangeable? It was unchangeable.
Now, with Trump, it's starting to look a lot different. The Taiwan Semiconductor Manufacturing Company is coming to the U.S. They're making the single largest foreign direct investment in the history of the United States. And they're building this incredibly huge factory on the outskirts of Phoenix, where it's so hot that they have to put ice in the concrete when they pour it so that it sets.
The thing they're building out there is huge. It looks like an airport. And once they're done, it will probably be capable of doing most of the manufacturing for NVIDIA. And in fact, NVIDIA is banned from selling its most advanced equipment to China. Now, maybe what's happening is that people are starting to say, hey, this kind of like labor advantage that Asia had over the United States for a long time, maybe in the age of robots, that labor advantage is going to go away.
And then it doesn't matter where we put the factory. The only thing that matters is, you know, is there enough power to supply it? And is there any geopolitical risk involved? And so in an age where robots are doing most of the work in the factory, I think the calculus of globalization and offshoring starts to look very different. Is he ultimately interested in bolting from Taiwan to avoid the potential specter of China taking over Taiwan in one form or another?
Jensen loves Taiwan. He loves it. It's where he was born. He speaks Taiwanese natively. He goes back all the time and he's a folk hero there. He's in the night markets buying food, just like a normal guy. But he doesn't fear losing out.
for this loyalty? Taiwan has long benefited from what they call the Silicon Shield. I was just in Taiwan, I should mention. And the only thing anyone talked about was the relationship between NVIDIA and TSMC. And if that relationship collapses or deteriorates, and if NVIDIA no longer needs Taiwan, well, then what happens? What's the state of play about competition for NVIDIA? Even in the
software realm of AI, you've got a pretty rich competitive market.
You've got OpenAI, you've got Meta, you've got a number of huge players. And the hardware system is just them. The barriers to entry for building a neural net are quite low. Actually, a student can do it. The barriers to entry to shipping several billion microchips each year are very high. Competitors who've tried to compete with NVIDIA just haven't been able to bring the juice. They can't match what NVIDIA can do. Is anybody trying? No.
They're trying. Oh, yeah, a lot of people are trying. But when they try and bring it to the AI scientists, the AI scientists use it a little bit. And one of two things happen. Either it's not fast enough or the scientists have to rewrite a million lines of code to make it work. The biggest competitor on the horizon is Huawei or some other kind of Chinese manufacturer.
Because NVIDIA can't sell its advanced equipment to China, it's illegal, this actually creates room in China for other firms to move in. And in fact, I was recently in China, and this was the question everyone was asking. How can we build, basically, NVIDIA China? We think we have the talent. We think we have the work ethic. You know, we think we have the equipment. Like, what do we need to do? What do we need to do? Some people would say that the Chinese have been very successful in
in, to be delicate about it, imitating or copying, or to be indelicate about it, ripping off technology from abroad and replicating it at home? Why can't it be done with NVIDIA? Because NVIDIA is always leapfrogging ahead. So NVIDIA, they're like the fashion business. They have a fall and spring release cycle. And they're constantly packing the latest features into their microchip.
So it's going to take you a year or two to knock off what they just built. And by that time, it's irrelevant. It's obsolete. This stuff moves so fast. Stephen, I've got to ask you in closing, what's the future for people who write books in the robotic world that you described earlier? Oh, I have thought about this so much. I'll tell you something. This is going to sound weird, but hear me out.
You know, I did a ton of interviews for this book, a couple hundred hours of interviews, tons of research. I mean, you've done this, you know what it's like. And maybe 1% of what you do ends up in the book. And you're constantly having to make these tough editorial decisions about what to keep and what to toss, trying to guess or extrapolate what the kind of general median reader is going to want to read. But what if you knew more about the reader?
What if, for example, the reader was coming to you and saying, you know, I have 10 years of microchip manufacturing engineering experience, I want this book to be more technical? Or what if they're a student and they want this book to be less technical and easier to read and more explanatory? And then the AI takes the skeleton of what you've written and rewrites it on the fly to meet the demands of the reader. That's actually possible. We could do that. And so maybe the future of the book evolves into something, at least the nonfiction book,
Something more like a knowledge database? I don't know; it's never really happened. I think narrative is very important. Stephen, you're freaking me out here. But it could happen. Did you use AI to write this book? I did not. So I actually tried, because I was not going to be a Luddite. That's what I said to myself. And what I would do is I would feed it five or six paragraphs of my prose and ask it to produce the seventh paragraph.
It's just as you say, you know, it reads like a Wikipedia article. It didn't sound like me. The tonal shift was immediately apparent; I had jumped out of my voice. But not even for research? Because I had a colleague the other day who said what he does is he asks AI a whole series of complicated questions.
And then he has to go away for an hour or two because it takes a while, you know, it's not just a Wikipedia series of questions. It comes back with a series of references, then he asks more questions, digs deeper. It's almost like a conversation with an
exceptionally talented research assistant. Yes, there's that. And it was quite valuable and no more or less, as it were, legit than using a good library. Absolutely. And the other thing it's really good at is taking complex technical subjects and basically, you know, dumbing them down for a lay audience. Right.
So the question I asked it constantly was, oh, explain how a microchip clock cycle works. But imagine I'm 12 years old and I don't know anything about this. Give me a very concise and simple explanation. And what it produced was fantastic. I mean, I could barely improve on it myself. In fact, I couldn't. I mean, I didn't copy and paste, but I was like, well, that's how you explain this. That happened several times. And so...
You know, I think when it comes to tough technical subjects, when it comes to research, as you say, and even when it comes to certain kinds of descriptive writing, it is a world-class tool that definitely can save the writer a lot of time. Well, Stephen— Whether or not they want to—
Open that Pandora's box. I think it sounds like the box is already flying open as it is. Stephen, we'll have you back before any robots are doing my dishes for sure. Okay, for sure. Thanks so much. Thank you. This was a great talk. You can read Stephen Witt on technology at newyorker.com. His book, out this week, is The Thinking Machine: Jensen Huang, NVIDIA, and the World's Most Coveted Microchip.
Now, I often turn to my colleague Joshua Rothman on questions about AI. Josh is a staff writer who's absolutely fascinated by AI and deeply informed about it. A couple of years ago, Josh was on the program interviewing the man known as the godfather of AI, Geoffrey Hinton. Josh Rothman just came back to the topic with an essay in The New Yorker called Are We Taking AI Seriously Enough?
So, Josh, you spoke with the computer scientist Geoffrey Hinton, and he's been expressing grave alarm about where AI is going. I think you're more optimistic than Hinton, generally speaking. What are some of his main concerns? You know, if it's the case that we get way better at building technical things, we should expect that the country that has the best AI will have the best robot army.
We are worried about... We're seeing that already? We are. We're seeing it on the killing fields of Ukraine. Right, and the U.S. government has a research program into automated fighter planes, for example. If we're worried about the incompetence of government on whatever side of that you situate yourself, we should worry about automated government. For example...
an AI decides the length of a sentence in a criminal conviction, or an AI decides whether you qualify for Medicaid.
People, basically, will have less of a say in how things go, and computers will have more of a say, just to put it in those terms. It's not dissimilar from the phone. In the case of the phone, the algorithm decides what options you'll be presented with in terms of where you're going to turn your attention. And the algorithm has certain built-in biases towards things that are provocative, contentious, alarming. So then why do you have such equanimity about it?
Well, because, well, I don't really. I'm pretty freaked out about it. But I also feel, in one's mental model of the future, there's not just one answer to this. How's it going to go? But do we have any choice? Do we have any sense of volition in this? Well, I think we should be learning lessons from what happened with phones and applying them to AI, just to put it in the broadest terms. We did nothing. We've all experienced what it is to have a technology platform
insert itself into our daily life in a way that replaces old habits with new habits. In my mind, there's a couple scenarios. In one scenario, we live in a science fiction novel and we really don't have much of an opportunity to intervene. The technology is just coming and it's coming next year or year after. Because Sam Altman says it is? Because the AI will learn to make itself better.
That's the scenario Jeff Hinton is worried about, and it's real. In my dream world, probably a functional government would step in to— A functional government, you say? But we don't have one, really. There's another scenario where the technology just takes a while, and then there is an opportunity to weigh in in various ways. It wouldn't be a bad thing.
to establish a consensus or a law that says that certain boundaries shouldn't be crossed. And we have laws that protect children online. Maybe one of the laws should be, you know, children shouldn't be preyed upon by computers that pretend to be adults. My
dream scenario is that in the next few years, we start to take seriously the need to think ahead, which we've never done before. But on the other hand, we do know what to be worried about. Does that presuppose that we have to give the job of moral philosopher and futurist to the same people, like Jensen Huang, who
are making the technology possible, to the scientists and the business people? I think de facto that's what's happening now. Here, really, on some level, I'm looking at myself, because I'm a humanist who works in media. It makes me think I need to think more about what it is that I think this technology should and should not do in my world, and to talk about it.
Like, I think there's a lot of people who work in AI who say schools are a century-old institution that could just go away. Maybe it would be better. We'd learn more. But that's such a narrow aperture through which to think about what schools do and how children live. I don't want schools to go away, and I don't want teachers to be replaced with screens. We're at a point where teachers and parents need to say that. It sounds ridiculous to say against this big technological juggernaut that we just need to, like, make our voices heard. But I think right now we haven't
tried. So I think right now there isn't enough discussion of AI. There's a lot going on. We haven't tried because I think people feel both helpless and powerless in the face of the complexity of these technologies and the lack of any political agency where
they're concerned. Absolutely. I mean, in my school district, a big discussion is about banning cell phones in schools, for example. And so I don't think it's true that, you know, we're totally powerless. Like, when I think about the fact that it's the year 2025 and we're now talking about banning cell phones, I have two feelings about that. One is like, how could we only be talking about this now? On the other hand, I'm like, well, we're talking about it.
Why don't we talk about some of the other stuff that's happening now? Before, we used to kind of just let it happen and see what happened. And maybe, maybe, in some part of ourselves, we're saying to ourselves now, we've touched the hot stove of the phone, so let's not walk into the furnace of AI. I guess my feeling is sort of like the moment is now for these types of thoughts to start happening.
Josh Rothman, thanks so much. Thank you. Joshua Rothman's essay in The New Yorker this week is called Are We Taking AI Seriously Enough? You can find it at newyorker.com. And of course, you can always subscribe to The New Yorker there as well, newyorker.com. I'm David Remnick, and that's The New Yorker Radio Hour for this week. Thanks for listening. See you next time.
The New Yorker Radio Hour is a co-production of WNYC Studios and The New Yorker. Our theme music was composed and performed by Merrill Garbus of Tune-Yards, with additional music by Louis Mitchell and Jared Paul. This episode was produced by Max Balton, Adam Howard, David Krasnow, Jeffrey Masters, Louis Mitchell, Jared Paul, and Ursula Sommer.
with guidance from Emily Botein and assistance from Michael May, David Gable, Alex Barish, Victor Guan, and Alejandra Decat. The New Yorker Radio Hour is supported in part by the Cherena Endowment Fund.