
Reid Hoffman on DeepSeek - 'necessity is the mother of invention'

2025/2/7

Washington Post Live

People
Reid Hoffman
Topics
Reid Hoffman: I believe that while technological change provokes concern at first, it ultimately enhances human agency. Looking back at history, major advances like the printing press sparked similar worries, yet in the end they vastly expanded human capability. Artificial intelligence will follow the same pattern, becoming amplification intelligence that gives us various superpowers. I hope people stay curious about AI and actively participate in shaping the technology's direction, rather than merely feeling fear or worry. We should learn from historical experience to navigate the transition of the cognitive industrial revolution in a more humane way, focusing more on how AI can improve our lives and achieve humanist goals.

Deep Dive

Chapters
The conversation begins by discussing Reid Hoffman's new book, "Super Agency," which explores the potential of AI to enhance human capabilities. Hoffman argues that AI, similar to past technological advancements, will ultimately amplify human agency and create new opportunities.
  • AI will be amplification intelligence, giving humans superpowers.
  • The book explores how AI can enhance human agency.
  • The discussion encourages AI curiosity and exploration of its potential benefits.

Transcript


Valentine's Day is fast approaching, and you don't want to find yourself grabbing a heart-shaped box of chocolates, do you? Why not a smaller box with a heart-shaped sapphire pendant necklace from Jared Jewelers inside? Shop Jared now for select diamond fashion and Italian gold jewelry for your Valentine. After all, life may be like a box of chocolates, but love is like a box with a Jared Jewelers logo on it. This Valentine's Day, be the he who went to Jared.

Now get 40% off select fashion jewelry. Visit jared.com slash radio for details. You're listening to a podcast from Washington Post Live. Bringing the newsroom to you live.

Hello, I'm Bina Venkataraman, and I'm a columnist for The Washington Post and its editor-at-large for strategy and innovation. It's my pleasure today to be in conversation with Reid Hoffman, the co-founder of LinkedIn and the author of a new book, Super Agency: What Could Possibly Go Right with Our AI Future? Hi, Reid. It's great to see you, and welcome back. Thank you, and it's great to see you as well.

So I want to start, I've been reading your book and it's been really exciting to read and fascinating. And I have so many questions. I don't know if we'll get to all of them. But the first one really starts with how you framed the book and even the title of the book.

So super agency to me implied sort of at a superficial level that we'd be talking about what the buzz is in Silicon Valley and all around the world really with respect to AI, which is that AI agents are on the rise, that we're in an era of agentic AI where

we can set goals for AI, and systems will autonomously fulfill those goals for us. But the agency, the super agency, that you're talking about in your book

is actually human agency, and you recenter the human, and you sort of draw a line from your work with LinkedIn expanding the ability for human beings to create professional identities online into a vision you have for what AI could do for human agency. So start by telling us more about that. How is that going to work?

Well, one of the things, as you know from looking at the book, is that we covered a bunch of the history of how we as human beings have encountered these massive technological leaps forward. And in each of these cases, what happens is

we essentially view them as a possible loss of agency, both for individuals and for society. The dialogue we have around AI, which is frequently very worried and everything else, is very much like the dialogue we had around the printing press.

But on the other side of these technological evolutions (printing press, automobile, electricity, etc.), we have an enormous increase in human agency. These things give us various forms of superpowers. And the thesis is that AI will do the same. It will be amplification intelligence, the kind of thing that gives us a set of superpowers. And part of the super agency piece of this, take a car, for example: I get a superpower

in that I can travel longer distances and get places. But because other people also get that superpower, a doctor can come do a home visit for a kid, a grandparent, other kinds of things, that means I also get agency from that. And that increase of human agency is, in fact, one of the things we should all start focusing on. We should all start thinking about:

How do we participate in this technology? How do we shape it, both as technologists but also as a society? And this is one of the things, as you know, in the first chapter: with the launch of ChatGPT, humanity enters the chat. And as opposed to having a kind of sense of fear or concern (and it's fine to have concern and skepticism),

what I want to do is help people become AI curious. What is the way that I could get a superpower, that we could get these superpowers that elevate human agency? And "agentic," as you noted, was a deliberate pun between this kind of agentic revolution and the way it's enhancing human agency.

So I could see a lot of people agreeing with sort of your diagnosis of past technologies that eventually we end up in a world where humans on balance perhaps

gain greater agency, but I could see people having disagreements about the trajectory, how we get there. So in your view, is it inexorable, that through a natural evolution of technology we are inevitably going to get to a place where humans benefit? Or is it a time for vigilance, for the social contract to be renewed and carefully thought through with respect to how the technology unfolds, so that we get that version of the future?

Well, so three things. First is, I think that these technology transformations are always painful. So you go printing press, we can't have the modern society without it. We can't have scientific method, we can't have medicine, we can't have education, we can't have middle class. And so everyone goes, yeah, printing press, clearly good. Yet the discourse around the printing press when it came out was very similar to the discourse today.

degradation of human abilities, memory will no longer matter, we're going to spread misinformation within society, etc. And we as human beings adapt to this very poorly. So with the printing press, we had nearly a century of religious war as a function of it. So we tend to encounter these disruptions badly. So I don't think it's inevitable that tomorrow it's all sunshine and flowers and all the rest. I think there are a lot of transition difficulties.

Second, is I think that if we get through these transitions in the kind of the right ways, I do think that we end up with, you know, kind of naturally an increase in human agency. Like, I don't think we have to, like, we don't have to go, oh my God, it's going to go off the cliff unless we're doing stuff.

But I think that there's not a reason not to steer, as you were saying, vigilantly, to steer better. Try to learn from our historical efforts, because I call this the cognitive industrial revolution, and I do so with a kind of a direct parallel.

And obviously, the Industrial Revolution is also instrumental to the broad, why we have so many people who are alive today, middle class, elevation of productivity and all the rest. But the transition was enormously difficult and painful.

And we want to navigate these much more gracefully, much more with kind of humanity. Doesn't mean it won't be hard, doesn't mean there won't be pain, but that's where I would apply the vigilance: how are we navigating to make the transition period good, and how do we more quickly and more powerfully

recognize, call it, humanist outcomes: the super agency, the evolution of how AI can make our lives so much better. This framework you're offering of what could go right, what could possibly go right, is pretty distinctive, and it strikes me that in this moment a lot of people can imagine and observe what could go wrong. There's a lot of dread and despair, whether people are looking at geopolitics, the climate, the economy.

And what you're offering is a sort of way of looking at things where you paint a vision of the future and you try to drive towards a different, more positive future. I know about a week ago, you announced a new startup in the discovery, drug discovery space to use AI to apply to medicine. So help us do that in the sphere of medicine. What, if everything goes right, if things go right with how we use AI for medicine, what's the vision that you hope that you're driving towards now?

So I'll start with my company, and then I'll go broader for everyone. Part of this is something I realized when I was talking to this amazing cancer researcher, Siddhartha Mukherjee, who has helped create a lot of different cancer drugs, some approved, some in phase three trials.

And we were talking about, how does AI give superpowers, and how might this apply more broadly than just agents who can help you with cognitive work: translation, analysis, research (OpenAI just released Deep Research). How can it help with more than all of that? And the answer was, well, there are these other areas where, if we suddenly made 10X, 100X capability increases, like for example, figuring out:

What are the possible, you know, kind of drugs to solve cancer? And there's all kinds of cancers. So it's not just one cancer pill. There's like, you know, how does, you know, triple negative breast cancer work? How does leukemia work? And they have similar kind of dysfunctions, but there's all the different sorts of cells and different challenges. And so, well, how do we both understand it?

How do we get possible molecules that are the right kind, that get rid of the bad cancer cells but keep the healthy cells, which is one of the major problems in all this? And how do we accelerate this entire scientific process? And I was like, well, here's the kind of things that AI can bring into this. And he's like, well, here's the things we understand from the best of science. So let's bring the best of this AI revolution together with the best of science

and massively accelerate our research and understanding, our possible compounds, our evaluation of the compounds, and evaluation not just for whether it might work, but whether it might work within the human ecosystem. And this is just, I think, one of many areas. Cancer is the great killer: all ages, everyone around the world, all societies,

something that we could make a massive difference for the quality and longevity of human life. And this is just one area that applying AI intelligently to

could make a very big difference. The other one, to be a little bit more about everybody: today, with today's technology, with no new invention (Manas, our AI drug discovery company, has a lot of inventions, but with no new invention), you could have a medical assistant that runs on every smartphone,

24/7, that is better than your average good GP. That doesn't mean putting GPs out of business; GPs are already well overloaded. There's all kinds of things they could do. But imagine you're at home at,

Tuesday at 11:00 PM, and your child has an issue, your parent has an issue, your partner has an issue, your friend has an issue, and you need to know: if you have access to an emergency room, should you go there? What level of urgency is it? What might you be able to do? That's buildable today.

And again, once that's there, it gives us all a form of super agency. And this is, I think, among the things that is literally line of sight. It's just a question of how we build it, how we navigate the regulatory system, how we navigate the liability system. But that's something that could be there for everyone. And obviously,

even in the US, we have a lot of uninsured people, and around the world this could run for simply a small number of dollars per hour. That kind of thing is the kind of more human future that I'm hoping for us to get to.

I, for one, am eager to see how that evolves, too. You mentioned the deep reasoning models, the reasoning models of large language models that have been released recently, which is one of the major developments in AI of late. And, of course, reasoning being a sort of partially accurate term for what these models are doing, which is taking a few extra steps and acting more like scholars in how they answer our questions as opposed to just delivering predictive text.

Of course, the most recent excitement and panic has been about the Chinese version of one of these reasoning models, DeepSeek. What do you make of, you know, obviously the stock market crashed in response to this. It caused a lot of reaction across Silicon Valley, technologists and politicians with respect to the so-called arms race on AI. What do you make of what the Chinese have done?

Well, a lot of the story around this caused a bunch of reaction, but the story was radically incomplete. Like, one, they almost certainly used a large model, which required large-scale compute and everything else to make, whether it was GPT-4, Llama, some combination, etc. Second, when I've consulted with experts from multiple different firms, they say DeepSeek also likely had access to a large compute cluster.

And so the statement that this was all done super cheaply was misleading; it actually, in fact, required all of these large expenses to make it happen. Now, that being said, I don't want to undercut the value of some of the areas of achievement. I think necessity is the mother of invention.

Part of what I think is that they figured out some efficiencies in operation, and by having open sourced them, all of the US and other AI labs have learned those things too. I think that was great invention, bringing it into general industry practice. And I think it really illustrates something that I and others have been saying for the last couple of years, which is that this cognitive industrial revolution

is actually, in fact, a competition. There's a competition going on for the development of these technologies,

and I think part of what the Chinese effort demonstrates is that others, Europeans and more, can also be in it. Because if you're working through distillation from large models like ChatGPT and leveraging open source like Meta's Llama, those are things where you can actually then go build things

that are unique and add additional value on top of these open models. Now, I think you have to be careful here; this is frequently discussed as open source versus open weights. To be slightly jargonistic: open source describes having the code and the whole process of how it works. Open weights is just, as it were, the artifact: here's the computer program in terms of how it runs.

Open source is, generally speaking, always very good, because you can check security and you can iterate on it. Open weights is just giving out the program to everyone. In some senses, this is very good for academics, startups, entrepreneurs. On the other hand, you have to be careful about rogue states, terrorists, criminals. So we have to navigate this in intelligent ways.

But I think part of what you'll see, with a variety of these small models broadly available, is that that's part of how we get to the cognitive industrial revolution, where, I would say, within some number of years,

every professional, you, Bina, me, et cetera, when we go about our work, we will have at least one, probably multiple, co-pilots helping us with the stuff we're doing. And I think that's part of what the revolution of all these small models will be. And I think the DeepSeek side showed that other players, not just the U.S.,

will be present in developing and deploying them. So we are in a kind of build-the-future moment; there's a competition afoot. And it's one of the things I think is so important for us to be dedicated to

as Americans, because part of what I hope is that AI is not just amplification intelligence, but also American intelligence, building in some of our values about individual rights and the ordering of society and these kinds of things. I think DeepSeek has put a spotlight on all of that.

So Eric Schmidt wrote an op-ed recently in The Washington Post where he called into question the more closed approach used by OpenAI and other American companies. Of course, you're a major backer of OpenAI and of the GPT systems. And he raised the question of whether the more open source, open weights approach used by DeepSeek is a better model for innovation. What's your take on that?

Well, so I think fundamentally you want both. I do think that part of what will be important about continuing to build these very large scale models, which OpenAI, Microsoft, Google, and Amazon have all announced they're building, is that those can actually, in fact, help build really capable small models and, among other things, solve much higher order, more challenging problems.

On the other hand, that doesn't mean that open source isn't a very good thing. I was on the board of Mozilla for 11 years. At LinkedIn, we open sourced a large number of projects, some of which have become the basis of public companies. So the dynamic isn't really closed versus open; the question is what the blend is. And when you have these kinds of open projects, they can be built upon and amplified. Now,

generally speaking, when it's open source, the modifications can go back to the common repository and create kind of a triumph of the commons, of the digital commons, one of the things we talk about in Super Agency and other contexts, but also make it more secure, more investigatable. Open weights is a little bit trickier. It means that the technology is more dispersed,

but the work that's being done by a startup doesn't necessarily get re-contributed. Academics by nature will re-contribute

in various ways. But these open weight things are like, you know, software application programs; they're not as easily understood. And so I think it's a good thing to be having open programs in the midst of what you're doing, but I also think that the proprietary scale things are also very good. And I think it's one of the areas where

our American hyperscaler companies have some strong edge. I think it's one of the things that was intelligent about the CHIPS Act

in terms of, it doesn't prevent the Chinese from doing things, but it helps maintain a bit of a lead advantage for our companies. And I think that will be important in the contest over who shapes the cognitive industrial revolution and sets the standards, the kind of technology that will be the platform basis around the world.

So we've been talking about some of the exciting benefits of AI, but a couple of weeks ago I was talking with Demis Hassabis, who is the founder of Google DeepMind, formerly DeepMind, one of the AI pioneers out there who's been calling for regulation and actually actively participating in a lot of global fora to

explore what regulation might be needed, in particular for the harms of bad actors using AI in various ways, and for the eventuality of superintelligence, intelligence that exceeds human intelligence, and what guardrails should be put on that. What's your view? Knowing full well that you make an argument for not over-regulating AI, so that we can realize these benefits, what in your view is the ideal regulation to prevent some of the harmful effects of AI?

Well, I think you want three parts. So one part is, what are the clear things that could be really bad that we must prevent in advance? And some of those things are the things I was gesturing at earlier, which is, we don't want to empower rogue states, terrorists, criminals. What are the things about like,

if terrorists are looking for various ways to massively damage societies, what are the ways we make sure they're not overly empowered with AI agents and the kind of superpower co-pilots that can come from this? And I think you want things like, from the executive order, red teams, safety plans, analyses, things that the government can then ask about. And then the next thing I think you want to do is say, okay,

what are the areas where we should have, as it were, research, if we worry about things like superintelligence? Namely, how could we monitor for when it's on a massive self-improvement curve, where it's reprogramming itself or other kinds of things, and make sure that these kinds of safety measures are tracked well across this? I mean, I think

the U.S., the U.K., the French, and other safety institutes are working on this, making sure that the leaders building these things, certainly within the Western sphere, are tracking them and asking: what happens if this occurs? How do we maintain it aligned with human interests? And how do we maintain the right kinds of controls around it?

And I think, again, it's lightweight and tough. And then the next thing is: what are we monitoring? As opposed to imagining everything that could possibly go wrong, what are we monitoring to see what would be early signs of things that need some correction? And then to be doing the dashboards and monitoring that.

And that's kind of like having the companies in dialogue with governments, with journalists, with other folks, saying, hey, what are those kinds of things? Are you paying attention to them? Are you doing

safety alignment and training? Do you have safety groups? What are the metrics that you're holding yourselves accountable to in terms of how this operates? So it's not just saying, hey, we need to have formal approval. I mean, to give you a sense of how this is already impeding things, I actually know of some companies that

are shipping much worse quality AI products to Europe, because Europe is saying you must undergo many months of testing before you release anything. So they go, okay, we'll release it in the US. And by the way, the product works really fine; literally, of the companies I know of, there have been zero complaints, and it's only been a quality product. But the worse quality product is actually shipped

to Europe, because of this kind of "you must ask for permission before you launch anything." And that's the kind of thing that prevents us from getting a medical assistant in everyone's pocket. And by the way, there's a human cost to that.

If you said, hey, today we could have a medical assistant that's a high-quality GP in every pocket, think about being able to intercept possibly very dangerous illnesses or injuries and do something about them, being able to be much more cost-efficient in your healthcare system, being able to answer these kinds of questions, like, okay,

you know, I'm really nervous, I don't know what to do. And so I think that's one of the reasons why the kind of more lightweight regulation that we argue for in Super Agency is the right thing to do. Now, as you know from reading the book, we describe ourselves as bloomers rather than zoomers: you want to be in dialogue with the risks, in dialogue with regulation. I'm not saying no regulation. It's just saying: accelerate to the future, but navigate intelligently.

So, Reid, I happened to notice that you weren't up on the dais behind President Trump on Inauguration Day. Obviously, some of your fellow leaders in the tech industry were. Of course, I'm being a little tongue-in-cheek. I know you're a major backer of the Democratic Party and were of Kamala Harris's presidential bid. How are you feeling right now about the country?

Well, I mean, we're a couple weeks in, and I tend to think that the right perspective,

the responsibility of every citizen, including myself, is: how do we help improve America as much as possible? And that's part of the reason why co-founding Manas, which is a company based here in New York, and building the future is, I think, really important. If you abstract from it, there was obviously a lot of negative dialogue around the inauguration, but you say, well, should the US president and the US government be in dialogue with the tech industry? I think that's critical. I think that's really important.

You know, obviously there have been a bunch of things that I'm dismayed by, whether it's the blanket January 6th pardons or, you know, what's happened with some of our close friends and allies like Canada and others. It's better to be in dialogue and collaboration there.

But nevertheless, I think that the important thing as citizens is how do we essentially say, here's where we are, how do we essentially contribute to American society, to American citizens and American industry? And so that's what I've obviously been focusing most of my attention on.

So I've been taking the call of your subtitle, what could possibly go right with our AI future, but just this framework of what could possibly go right and sitting with that over the last few days.

And I'm wondering, you know, I've been thinking about when in our history as a country have we used that framework to drive progress? And of course, the natural example that comes to mind is the moonshot when John F. Kennedy said, let's put a man on the moon. And then within 10 years, that happened. What other historical examples or present examples do you invoke to show that that framework actually can be self-fulfilling, that if you imagine what can go right, you can actually make it happen?

Well, that's ultimately all of the technological progress that we've gotten to, even through transitions. And Bina, you may or may not have seen, we actually released a video contrasting

JFK's moonshot with Manas and what we're doing with cancer drug discovery, 'cause we like that one very much. - Okay, I hadn't seen it yet, so we'll check it out. - Yes, and we wanted to inspire toward the, hey, this is the kind of future you're getting to. And this can be anything from, for example, the electrification drive

for cities and societies, 'cause think about, like nothing works without kind of the electrical grid. You've seen these things in building out train tracks and highways,

and, you know, kind of building cities for cars. I mean, all of this stuff is like, oh, I can see how this future would really work. Let's build towards that. Because you don't get the future you want by just trying to eliminate all the ones you don't want. It's a very long list, and that's just a whole bunch of negative. You get it by building the one that

can be more human. And that's part of the reason why I say, hey, not only is Super Agency written for people who might be AI fearful, AI skeptical, AI uncertain, to become AI curious, but also for technologists to say, look,

what people worry about, whether it's job transformation or other things, is human agency, both their own and society's. So take human agency as a design focus. And all of the technologies that we have developed

essentially, you know, built out, all the way back to agriculture and the first villages, the printing press, cars and planes. All of that has come from: hey, we're working towards that. There's something we could accomplish. And if we accomplish that,

We give society and individuals in it superpowers and super agency. So I think the history is replete with it. Now, the Moon Project,

is an example of something that was government-led. I think what many people don't realize is that these transformations are frequently led by industry: the smartphone, et cetera. And that has many good attributes. It doesn't mean we don't need many different kinds of voices and engagement, including some regulatory, but by deploying to hundreds of millions of people,

you actually in fact get a kind of a very inclusive process. So like for example, people say, well, is AI going to differentially benefit the wealthy versus the poor? It's like, well, look at the iPhone as a parallel. Your Uber or Lyft driver has the same iPhone that Tim Cook has.

And so when you're building out at this mass scale, you have that kind of inclusion. And I think that's one of the things we want to have happen broadly with AI and our cognitive industrial revolution.

Reid, thank you so much. I'm told we're running out of time, though we certainly aren't running out of curiosity about these perspectives. And may your optimism become self-fulfilling for all of our sakes. Thank you so much. Thanks for listening. For more information on our upcoming programs, go to WashingtonPostLive.com.
