
Reid riffs on AI as “American Intelligence” and a prediction from Anthropic’s CEO

2025/3/19

Possible

People
Aria Finger
Reid Hoffman
Topics
Reid Hoffman: I think it is critical for America to maintain its leadership position in software, not only for economic reasons but for its influence worldwide. We need to be a stable, reliable partner, maintaining and strengthening alliances through technology cooperation and a global strategy. Treating artificial intelligence as an American strategic asset, and supporting tech innovation in other Western democracies, is essential to building a healthier, more stable global society. Some current government policies, such as "America first," may damage America's international relationships and global influence. We need to avoid zero-sum thinking and recognize that markets can grow and everyone can benefit. On AI code generation, I think Dario's prediction (that within one to two years, 90-100% of code will be AI-generated) may be too optimistic. AI will play an ever larger role in code generation, but humans will still need to be involved, even if only in an assisting role. The new workflow will be human-machine collaboration, with AI as a powerful assistant for programmers. AI companies focus so intensely on code generation because it raises productivity and because code quality is easier to evaluate, which will also help us understand the fitness functions of other professional fields (such as medicine and law). AI coding assistants will raise productivity across many domains. On Andrej Karpathy's prediction (that most future content will be generated by large language models), I don't think internet content will fully homogenize, because multiple incentives for content creation remain, such as SEO and shaping how AI understands information. Many companies will fail in the AI boom, but as with earlier tech waves, generational companies will still emerge. Investors need to be careful to identify which companies have sustainable business models, and founders need to assess how scalable their revenue is and whether their business model is sustainable.

Aria Finger: How America maintains its soft power through software, amid the current volatility in American foreign relations and with certain tech leaders manipulating platforms, is an important question. Viewing capitalism as a zero-sum game is a fundamental misunderstanding; markets can grow and everyone can benefit. Anthropic CEO Dario predicts that within one to two years, 90-100% of code will be generated by AI, which raises the question of how companies should prepare. Andrej Karpathy predicts that most future content will be generated by large language models rather than humans, which raises questions about the incentives for content creation. The current "vibe coding" and "vibe revenue" craze may cause many companies to fail, but it may also produce generational companies.

Transcript

I'm Reid Hoffman. And I'm Aria Finger. We want to know what happens if, in the future, everything breaks humanity's way. Typically, we ask our guests for their outlook on the best possible future. But now, every other week, I get to ask Reid for his take. This is Possible.

So, Reid, obviously the U.S. has been a dominant force in software for the last few decades. Maybe that's an understatement. You've recently been in the U.K. launching Superagency to a U.K. audience. We know that France, of course, has a lot of amazing AI companies. We see incredible engineers coming out of Poland. But still, the U.S. is the home for software development and innovation. And so for a long time, that software has been soft power. People have talked about that with American culture, how we're exporting it to the world.

With all the volatility, though, happening right now in American foreign relations, and perhaps a particular tech leader's manipulation of platforms, how does America maintain its soft power through software going forward? I mean, one of the problems with the

current administration's general approach, how to lose friends and alienate people, the opposite of Dale Carnegie, is that it will create substantial problems for all American industry, including the tech industry. So I'm having all these discussions with people here, and you have technologists saying, well, I'd rather buy a BYD car than a Tesla. One is kind of saying, hey, I'm just trying to be a stable partner, and the other one is saying, you're my enemy. And so there's a whole stack of things, and this goes all the way through. And part of what I'm trying to do here is persuade folks

that the U.S. actually can, in fact, be a continuing stable partner, despite the randomness of tariffs and the randomness of other kinds of things, and

that the business of America being business is still actually, in fact, something that a bunch of us Americans hold dear, and we try to operate that way and say, hey, there are still bridges we can build here. And so I think that's critical for America to do. I think the companies will naturally try to do that, because the companies will naturally want to be selling their products globally, setting standards, having American technology be the foundation of a lot of what's happening in the rest of the world. There's reasons we want

AI to be American intelligence. That's really about keeping a leadership position. I do think it's actually better for us as Americans, and for us as Silicon Valley folks, that we have more Western-democracy centers of innovation. It's not just really good for them, whether it's the British or the French or the Polish or the Singaporeans or the Japanese; in each of these cases, I think it's also good for us as Americans. It's part of what creates the multilateral system that

makes a much healthier, more stable global society. Now, that being said, I'm not sure how deep that knowledge and perspective go within the current administration. Obviously, most of the actions and most of the gestures tend to be, when you come over to Europe, not just an America-first but an America-only kind of position. That obviously breaks alliances and gets people to think, well, who else should I potentially ally with?

And I think that's not very good for us as Americans, but also not very good for global society.

I mean, I think it's so interesting, because it's just a fundamental misunderstanding of capitalism as a zero-sum game. And it's like, no, actually, throughout history we've shown that markets can grow and everyone can benefit. And as someone who wants America to win, sure, I live here, my family's here, it's just so clear that going it alone is not how we do it. One of the things that I've done for a number of years is, when I meet with a minister from, you know, an allied country, a country that shares our values about

individual rights and human rights and the way we come together in a democracy and a bunch of other things, it's like, okay, I will try to help them, because if we can build more Silicon Valleys, if we can build more technological innovation centers, I think that's actually a really good thing. We've said many times that technology happens slow and then fast. And I think a lot of people, when they think about AI, think

ChatGPT, and maybe for most Americans, their lives haven't changed that much. But for software developers, I would argue that it's a different story. And just last week, Dario from Anthropic said that in three to six months, 90% of code would be written by AI, and in a year or two, 100% of code would be written by AI. So do you agree with that? And how do companies prepare for this future where potentially 100% of code is AI-generated?

Well, it'll be interesting. I mean, Dario's super smart. He's one of the co-architects of the OpenAI scale hypothesis, which is that all you really need to do is keep scaling these kinds of deep learning machines, and that actually, in fact, gets you a massive way along the path to artificial general intelligence. He obviously believes it gets you the whole way; there's some bid-ask spread there. And Anthropic, along with OpenAI and

Microsoft and Google and a number of others, believes very intensely that the right way to keep accelerating along this path is to be building code generators, whether those code generators are on the Microsoft side, Copilot, or whether those code generators are actual agents of code generation themselves. There's a developmental spectrum there: which things are this quarter, which things are three years from now, and how do they operate? And even when you have a code generator, how does it work

with human beings, if at all. We are still going to need humans in the loop. And even if it's 90 or 95%, to your point, so many more people are going to be creating code, and you'll still have that last few percentage points of humans doing it, whether it's through Copilot or all of these new professions where people can have coding assistants. I do think that we will see a greatly increasing amount of code being generated by AI, but I also think a lot of that code will be generated in copiloting with humans. Now, it may very well be that

within a couple of years, even one year, anytime you deploy a software engineer, they're deployed with multiple coding assistants. So it's almost like a person deploying with drones; a person deploys that way. And by the way, the agents will be talking to each other. And you might say, go research this and generate some specific code and do this, this, and this, and then come back, as opposed to the copilot being the line-by-line agile programmer who is working with you. So there's this whole range and scope of how these things work.
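
To make that "person deploying with drones" workflow a little more concrete, here is a minimal sketch of one engineer fanning tasks out to several coding agents and reviewing what comes back. Everything in it, the class names and the `run` and `human_review` helpers, is hypothetical and does not correspond to any real vendor's API.

```python
# Hypothetical sketch: one engineer directing several coding agents in
# parallel, with the human reviewing every draft before it lands.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass
class Task:
    description: str  # e.g. "research the parser library and draft a wrapper"


@dataclass
class Draft:
    task: Task
    code: str  # whatever code the agent came back with


class CodingAgent:
    """Stand-in for an autonomous code-generation agent (illustrative only)."""

    def __init__(self, name: str):
        self.name = name

    def run(self, task: Task) -> Draft:
        # A real agent would research, generate, and self-test here.
        return Draft(task=task, code=f"# {self.name}: draft for '{task.description}'")


def human_review(draft: Draft) -> bool:
    """The human stays in the loop: read, edit, accept, or send back."""
    print(f"Reviewing: {draft.code}")
    return True  # in practice, the engineer makes a judgment call


tasks = [Task("research X and generate the parser"),
         Task("write the migration script")]
agents = [CodingAgent("agent-1"), CodingAgent("agent-2")]

# Fan the tasks out to the agents, then bring every result back to the human.
with ThreadPoolExecutor() as pool:
    drafts = list(pool.map(lambda pair: pair[0].run(pair[1]), zip(agents, tasks)))

accepted = [d for d in drafts if human_review(d)]
```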

But one of the things I want to conclude this particular theme with is that there are a number of reasons why all of the AI companies are going after code so intensely. One is obviously the acceleration of productivity, the acceleration of software engineering, the acceleration of coding.

Two is that code has a fitness function that is easier to evaluate: is this code good or not? Unlike, for example, legal and medical work, where there's a lot more fuzziness, code has a much tighter fitness function. And as you get the coding right, that will also help with understanding the fitness function for all these other professional areas: medical, law, education, et cetera.
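
To make the fitness-function point concrete, here is a hedged, minimal illustration of why generated code is easier to grade than generated legal or medical advice: you can simply execute it against tests. The `solve` convention, the candidate snippets, and the test cases are all invented for illustration.

```python
# Toy code fitness function: execute a candidate implementation against
# test cases and score it by the fraction that pass. Illustrative only;
# production evaluators sandbox execution and do far more.

def _safe_call(f, x):
    try:
        return f(x)
    except Exception:
        return None


def fitness(candidate_source: str, tests: list[tuple[int, int]]) -> float:
    """Return the fraction of (input, expected) tests the candidate passes."""
    namespace: dict = {}
    try:
        exec(candidate_source, namespace)  # define the candidate's solve()
        f = namespace["solve"]
    except Exception:
        return 0.0  # code that doesn't even load scores zero
    passed = sum(1 for x, expected in tests if _safe_call(f, x) == expected)
    return passed / len(tests)


# Two invented AI-generated candidates for "square a number":
good = "def solve(x):\n    return x * x"
bad = "def solve(x):\n    return x + x"

tests = [(2, 4), (3, 9), (10, 100)]
print(fitness(good, tests))  # 1.0: the grade is automatic and objective
print(fitness(bad, tests))   # ~0.33: unlike a legal brief, failure is measurable
```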

And this is one of the things I think perhaps way too few people understand as part of the amplification. Whether it's for being

a podcast co-host, or a CEO of kind of the largest teen philanthropy network in the world, or any of these other things, having a coding copilot, a coding assistant, will make any of these functions much more productive. Obviously, you have to get to the point where you're beginning to think about, well, what is the way that I would want the coding done on this? But by the way, it's parallel to a management problem, which is when you say, hey, I would like to direct

Sarah or Bob to do this, well, what direction would you be giving them? As part of that direction, that's essentially how you'd be directing your coding assistant, too. Well, here's a related question that I think a lot of people are thinking about. Recently, Andrej Karpathy tweeted, and I'll read it: "It's 2025 and most content is still written for humans instead of LLMs. 99% of attention will soon be LLM attention, not human attention."

So again, it's a related issue. Before, we were talking about coding being 90 or 100% AI-generated, and now he's talking about the fact that all of the data these LLMs are going to be hoovering up on the internet will be generated by AI. My question is, we know that they can train on synthetic data. You've talked about that; you don't necessarily think that's an issue, or please disagree. But what is the incentive to create new data anymore? Like, if

If everything is going through an LLM, what is the incentive for the New York Times or any other website to be creating this data when the only business model is people using their AI to access it? Is it going to have to be BD partnerships between the New York Times and the LLMs, between your website, between WebMD? How will that work from a business perspective? Well, part of it is that even if human attention becomes a very low percentage of total attention, the question is where the economics get generated from. So if you say, well,

the economics get generated by advertising impressions and humans seeing them, then it isn't so much a percentage question. The way you square that with what Andrej is saying is that there's this massive growth in LLM data and in LLMs reading data, but you still have a whole universe of humans, and that may still be where your primary target is for the advertising model. Now, it's also possible that the LLMs will have economic models themselves associated with data.

It could be the equivalent of SEO, search engine optimization, which drives a lot of what content sites generate today. And so it'd be like, okay, well, I'm SEOing the agents, or I'm providing things such that when the agents are doing stuff, there's an economic loop relative to my business model. So I think there's all of that. Now, I think another part of the whole thing is actually, in fact, shaping how these agents are parsing the world, what kinds of things they think are important to do.

Just like SEO, there's undoubtedly going to be some really interesting economics around that. And that may be one of the reasons why people would be generating content with an LLM focus, or at least an LLM intensity. I think all of that plays out. And I do think the more subtle thing that's deep about Andrej's comment is that more and more of the content being generated will both, A, be generated by AI, and, B, be generated to be found by AI. Right.

And actually, in fact, this is one of the things that you're trying to do, because you say, well, if I generate the content for the AI, this is where my benefit comes from. My benefit could be that I share in the ad revenue. My benefit could be that I intellectually shape the space of the AI, such that when it goes, oh, you're looking for travel to Turkey? Here are some things, because it found that in the data it was trained on, and everything else it's operating on. So it could be a whole range of different things. I do think that the likelihood that it's, call it, 100%

for agents seems very unlikely, especially when you consider that the agents themselves are broadly most valuable when they're talking to humans. And do you think, even if it doesn't go to 100%, again, it might be 95%, 96%, do you think this will homogenize everything that's on the internet? Everyone's worried that it'll actually converge, so we won't have the outliers, because it's made for AI. Do you see merit in that? Well, the question comes down to what the incentives are for the content creation. If the incentive for the content creation is

to fight for the generic stuff in the middle, then yes, it'll homogenize. But by the way, just like markets, you go, okay, well, those big companies are all doing the generic, homogeneous thing, so maybe there's an opportunity for me in breaking left. It's a little bit of the, hey, I'll create an anti-woke AI, and maybe I'll get some attention, given all the attention being given to other people on this thing. So I

I tend not to think that the homogeneity thesis is a danger and a worry, because of these kinds of incentives. And so amidst all of this AI boom, I feel like, especially in the last two weeks, everyone is talking about vibe coding. I will go on the record that I would like the backlash to the term vibe coding to begin now. So tipping my hat on that one.

But just recently, people have also been talking about vibe revenue: there's this excitement, people are trying something out for a day and realizing it doesn't stand up to what they want, they're just testing out an AI craze. And then in 18 months, we'll have all these zombie companies that thought they saw a path to amazing ARR and growth, but it was just noise, not signal. So I guess I'd ask, do you think this is any different from any previous tech shift, whether it's mobile, the internet, et cetera? We're always going to have companies that

are exciting and then don't work out, but we still build generational companies amid those that don't. Or is there something different here, that because it's so easy to start, there might be more failures amidst the vibe coding and vibe revenue shift? Well, I think each new technological generation, and this is the largest one, in part because it builds upon the internet, the cloud, the GPU revolution, et cetera, may mean that there is a ton more in the kind of

unusual, eccentric, not-very-workable camp. It's like the AI juice machine, that kind of thing. And there's going to be a stack of these.

But I do think that what tends to happen is people say, oh, look, there's this foolish thing, it's a bubble, this isn't going to work. And you're like, well, no, actually, in fact, there were tons of foolish things in the internet wave, the mobile wave, and everything else. And by the way, some of them were foolish because they were just idiotic, and some of them were foolish because they were too early. Webvan was trying a capital-intensive play when it was too early, and now we have Instacart and DoorDash and other things like it. So I think that we will see a lot of

vibe coding and vibe revenue, and that will play out. By the way, a few of those may even go the distance. Now, one of the things I usually say to investors, as you know, because we've talked about this, is: if you can't spot the sucker, you are the sucker. It's like poker. And so you should actually have some knowledge of what you're doing. And so obviously there will be a whole bunch of investors who lose money

by investing in nutty things that even look like they have great revenue growth for the first month. Because that first month of great revenue growth is like a small pickup, and then it flattens out because it's no longer of interest to people. Or, you know, the vibe changes, or that was the level of interest, and the interest goes away. So there'll be a bunch of different money-losing circumstances, too. But I think, overall, I don't see any reason that there's a category difference. It may just be larger because the wave is larger, but...

It's still playing by the same general principles as the earlier investment waves in technology. So that's from the investor standpoint, but if you're a founder, do you have advice for how to see the signal through the noise when you're trying to figure out if this revenue is real, or whether you really do have product-market fit because you saw the revenue come in? Typically, when you look at it, you just go, well, really, how scalable is that revenue, and why?

And, you know, it's not enough to say, for example, I'm being paid by individuals and there are 8 billion individuals in the world, so it's scalable. It's like, no, no, no. Who are the people buying? Because usually when X people buy, Y people don't. So you have to ask, what is the ratio of X to Y?
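
As a toy illustration of that X-to-Y check, here is a hedged back-of-the-envelope sketch; every number, name, and assumption in it is invented for illustration, not a real model of any company.

```python
# Back-of-the-envelope sketch of the "ratio of X to Y" sanity check.
# All figures below are invented placeholders.

def conversion_ratio(buyers: int, non_buyers: int) -> float:
    """X people buy, Y people don't: what fraction of those reached convert?"""
    return buyers / (buyers + non_buyers)


# Hypothetical first-month numbers for a new AI product:
ratio = conversion_ratio(buyers=500, non_buyers=9_500)  # 5% of those reached
reachable_market = 2_000_000   # assumed go-to-market reach
arr_per_customer = 120         # assumed annual revenue per customer

projected_arr = ratio * reachable_market * arr_per_customer
print(f"conversion: {ratio:.1%}, projected ARR: ${projected_arr:,.0f}")
# The hard, and artful, question is whether that 5% holds as you scale
# past the early adopters, which is exactly the judgment call described here.
```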

And you kind of go, okay, why is that? And how scalable is that? Does the value proposition scale? Can you reach them with a go-to-market motion and a bunch of other things? So you tend to look at revenue and ask, what is the scalable business model? What is the engine of scale? There's some science to this, but there's also some art to it, because you're predicting the future in terms of how this works. So I think you actually can, in fact, derive a good theory of the game, a good prediction,

but you're going to be bringing a bunch of knowledge and assumptions and hypotheses to your theory. So, Reid, thank you so much. Always great to chat. Always a pleasure. Possible is produced by Wonder Media Network. It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Sean Young. Possible is produced by Katie Sanders, Edie Allard, Sarah Schleed, Vanessa Handy, Aaliyah Yates, Paloma Moreno-Jimenez, and Malia Agudelo.

Jenny Kaplan is our executive producer and editor. Special thanks to Surya Yalamanchili, Sayida Sepieva, Thanasi Dilos, Ian Ellis, Greg Beato, Parth Patil, and Ben Rellis.