
Reid riffs on massive AI acquisitions, robotics, and headcount trimming

2025/6/25

Possible

People
Aria Finger
Reid Hoffman
Topics
Reid Hoffman: In the age of AI, acqui-hires matter just as much as traditional acquisitions; both are key tools for building strategic advantage. Through its large stake in Scale AI and aggressive recruitment of top AI talent, Meta is working hard to catch up in AI. I think Meta's first priority is to develop models on par with the GPT-4 class and to compete seriously in AI. Companies pursue talent acquisitions and strategic investments to establish future leadership in AI, not just to chase short-term profit growth. I think companies need to reallocate resources, putting more money into AI R&D and talent, to fit an AI-driven future. That may mean some employees are cut, but overall it strengthens long-term competitiveness. Aria Finger: I've observed that in the age of AI, companies are laying people off even as they make massive acquisitions, and this seems to be becoming a trend. My concern is whether it hurts ordinary employees, for example through job loss or lower pay. Even if a company frames it as adapting to the AI era, the reason probably doesn't matter to the people who lose their jobs. I'm more focused on how AI and robotics will affect the job market and society, and on how to help people through the transition.


Chapters
This chapter analyzes the recent surge in multibillion-dollar AI acquisitions and acqui-hires, such as OpenAI's Windsurf deal and Meta's investment in Scale AI. It explores whether acqui-hires are surpassing traditional acquisitions as a strategic advantage and examines how companies balance workforce adjustments with long-term innovation in the AI-first future.
  • OpenAI's acquisition of Windsurf for $3 billion and Apple designer Jony Ive's startup for $6 billion
  • Meta's $15 billion investment in Scale AI
  • Acqui-hires focus on talent acquisition rather than technology or user base
  • The need for scale compute and data in training large AI models
  • The strategic probabilities of AI as a human worker replacement, amplifier, or a tool for fundamentally new capabilities

Transcript


I'm Reid Hoffman. And I'm Aria Finger. We want to know what happens if, in the future, everything breaks humanity's way. With support from Stripe, we typically ask our guests for their outlook on the best possible future. But now, every other week, I get to ask Reid for his take. This is Possible.

Reid, so excited to be chatting today. There is so much happening in the world of AI: acquisitions, acqui-hires, billion-dollar sums. In the last two or three weeks, we saw OpenAI agree to buy Windsurf, formerly Codeium, for about $3 billion. OpenAI also announced the acquisition of Apple designer Jony Ive's startup for $6 billion.

Then we got sort of the biggest acquisition or acqui-hire yet, when Meta acquired a 49% stake in Scale AI for almost $15 billion. And then, as of this recording, we just heard that eye-popping sums are going to Nat Friedman and Daniel Gross to join the Meta team. That is not finalized yet, so we don't have final word on what is happening. But it seems like a lot of these AI companies are making moves, whether it's acquisitions or acqui-hires.

And there are also large sums of money going around. Meta is building out its executive team on the AI front. So I'd love your reaction to a related statement. Just tell me if you agree or disagree, and why.

In the age of AI company building, acqui-hires are generally more of a strategic advantage than acquisitions. Well, it doesn't surprise you that my answer is going to be nuanced and not going to accept either the agree or disagree framing. They're both critical tools, and they both need to be used effectively. And I think what's really interesting about the last couple of weeks of news is that

basically everybody that is strategic and smart is doubling down and not just resting on laurels, as in the OpenAI case, but is betting hard on AI and on what AI is going to mean in the next two to five years. And so in the Meta case, you've got Zuckerberg, who

has always been one of the most meta-strategic CEOs. You've got the Instagram purchase when Instagram was tiny, you've got WhatsApp, you've got Oculus. Similarly, doing this massive deal with Scale, hiring Alexandr Wang, and the reported deal with Nat and Daniel, reportedly hiring them as well, is basically trying to get

really in the game, because heretofore the principal success point for Meta has been releasing open-source Llama models, and the last Llama was having a lot of real problems. And so, given that there should be a natural structural right for Meta to be much better at AI, I think that's

a good, bold move. I think the first thing will be to get into the GPT-4 class of models and to be in the mix, because you need that to make this kind of stuff work. And obviously Alexandr

had a lot of visibility into what Gemini and others were doing, from their formerly using Scale AI as a training-data labeling source.

But do you think, fundamentally, there is a shift in the age of AI? You mentioned Instagram and WhatsApp. Those were clearly acquisitions, of course, partly for the talent, but they really also wanted the product. They kept Instagram standalone. They also wanted the network, the user base. Whereas these sorts of acquisitions or acqui-hires aren't really about the

user base. They're not really about the technology. They're about the people themselves. Is that a fundamental shift, or are these just a few one-offs? Well, actually, it's a fundamental addition rather than a shift. The addition is the fact that in order to be training these very large models, you need scale compute and scale data.

For example, Scale AI's data is not its own; it's bought by other people. So you can't buy it for the data. Meta already has a whole bunch of data. Meta has scale compute; Scale does not. And so there's relatively little in the product that you're actually looking to acquire. Because, by the way, it's scale compute, scale data, and scale teams. The scaled effort

to build these cognitive capabilities is absolutely the right thing to be doing. Everyone should be betting on them. The question is what you should be betting on coming out of them. And part of that bet is a strategic set of probabilities. There's one probability that it's just a human worker replacement on a lot of things, and obviously the question is which things. There's another probability that it's a massive

human amplifier, and that one we already see, in certain ways. There's another probability in which it's capable of doing things that really fundamentally matter and that are beyond what we can do now. And which are those?

And that's what we need to sort through in our strategies. So that's talking about the technical and superintelligence side. If we stay on the business side: looking at this from the outside, you're thinking, these are some huge acquisitions, but I'm also seeing these companies trimming headcount.

Google recently announced a new round of voluntary buyouts across major divisions, search, ads, commerce, central engineering, marketing, research, comms, for US-based employees. That's many, many departments, of course, outside of artificial intelligence. And these initiatives arrive ahead of a potential antitrust ruling that may force structural changes at Google, such as splitting off Chrome or splitting off its ad-tech business. And so

it seems, again from the outside, that other companies are doing the same: acquisitions, voluntary buyouts, getting ahead of antitrust rulings. So I'll ask you again for an agree or disagree. These things are all happening anyway, but proactively or reactively, these companies are taking any opportunity to become leaner, especially as they integrate AI into their business. So one of the things I think is an interesting point

that everyone gets bemused by, and that drives private equity people nutty, is that companies actually, in fact, expand to the number of employees they can manage in order to experiment with new strategic opportunities. The private equity view of these things is that you should run with as absolutely few employees as possible,

because you should be throwing off those profits, getting a high P/E, and increasing the stock value. But on the other hand, what the PE people don't understand, and this is one of the things that venture capitalists understand, and startup founders do, and scale-up founders do, and large-company executives do, is that what really matters is your ability to build new strategic franchises.

And then there's a risk discount, where people think, "I don't think you can do that," and the management thinks, "I think I can." Part of the whole thing is for the management to ask, "Do I have enough buy-in that I'm allowed to do that?" That's the dynamic that plays out. So what that means in the world of employment is that the natural rule for all companies that are doing well is that management is giving people the permission, and sometimes the encouragement,

to go extend the franchise. For example, the comment has been made about Google for a long time that they have at least two-thirds more employees than they need to run this super-lucrative ad business and the other things. And if you just trimmed down to this massively lucrative ad business, plus Chrome, YouTube, and a few other things, you'd have this huge value. On the other hand,

you then wouldn't get Waymo. You wouldn't get all of these other things it's strategically investing in. And actually, in fact, I'm more often, at least conceptually, on the side of the managers than on the side of the private equity people, because actually, in fact, what builds future value is these future franchises. That is what you're trying to do. It's one of the reasons why I love doing

startup investing and scale-ups and everything else. Because it's building that future franchise that creates the value, not just for society,

not just for the markets, not just for the industries, but also even for private equity. But you have to be taking those shots on goal. And by the way, if you're taking shots on goal, just like in any kind of football or soccer, you miss sometimes, but you have to take structured shots on goal. So now what this gets to is the question of "leaner" and this kind of rumor going around that, because of AI, companies are becoming leaner. And I think that rumor is broadly incorrect,

except for the following nuance, which is that all of the companies recognize they desperately need to be playing in the AI future. So they need to be refactoring for that. And refactoring for that is partly about talent for AI.

But it's also: we need to be affording prodigious amounts of compute and training runs in order to make that happen. And we have to do that within a P&L envelope that's acceptable to the market. And so if you say, well, if trimming these people, voluntary buyouts, et cetera,

is the right way to recapture some of our P&L to spend on compute and AI talent, then that's absolutely what we should be doing, and that's a smart thing to be doing. It's not that our goal is to be leaner; it's that our goal is to be AI-first. Companies will be spending more money on compute because they'll be amplifying what the employees are doing.

And that means there's a trade-off, as opposed to saying, "I have a fixed compute budget per employee: a laptop, some server time." There's a lot more compute that's going to be deployed per employee in order to make them much more effective. And that's compatible with the superagency thesis. And it may mean that, when you have a fixed-size budget,

a moderately smaller percentage of the budget goes to employees because of the increased compute. But the simple, Terminator-style job-replacement theory is not actually what we're seeing at all.

Well, Reid, I want to push back on you a little on that because I don't think people care whether it's because Terminator AI is taking their jobs or because a company has less money for employee salaries because they have to spend more money on compute. Like, do you think that's a meaningful distinction for the average employee who maybe doesn't get hired because the company is using compute? Ultimately, they don't care if

a company moves from, call it, a thousand employees to 900 and they're one of the hundred. And I would think that's maybe the heuristic maximum order of magnitude anytime soon in these things; it may even be more likely 950. But I don't think they'll care about that. I think they'll go, "Well, I'm one of those, so it doesn't matter."

Now, I do think that's part of the job transition, and that's an issue. The question is, what people usually think is that there's a zero-sum amount of work in the world. So they don't think that other firms will be created and that other people will be able to be hired to do other things at other firms. And so they just go, "Oh, that goes to the unemployment line." And actually, given the productivity I think we're getting with AI, I don't think that's likely to happen.

I think the fact that AI can teach, help people find new work, learn new work, learn how to use AI, and be productive is actually one of the things that we need to be encouraging as much as possible, because the transition actually is going to be really difficult. And that transition is: if you're one of those 50 or 100 who is

no longer part of company A's future, that transition is difficult, and it's hard. But that's part of what productivity and progress mean. That's part of what happens when everyone goes, "Okay, now this industry is much more productive and automated and competitive on a global basis."

Yeah. I mean, I do want to go back, not just to dump on PE people all the time, though they're a great punching bag. But I think Google is a great example there. When you think of Gmail, when you think of Google Docs, to your point, they could have cut those long ago and it would have been a blip in their bottom line. And yet for the last 20 years, they've delivered enormous social good, with people using free Gmail accounts. And

we don't know, but you could imagine that Gmail and Google Docs are actually ripe for the AI age, and that's a place where Google can innovate. We don't know, but at least it's a consumer interaction that they now have a toehold in. And actually, I want to note for anyone who's particularly interested in this discussion on AI and jobs: we have the economist David Autor on the pod next week, talking about how AI will

influence jobs, the world of work, what it'll do to wages, the middle class. So stay tuned for that. So Reid, I want to switch gears a little bit from software to hardware.

In Q1 2025, we saw $2.26 billion in global robotics funding. Seventy percent of that was directed at specialized robotics in areas like logistics, healthcare, and inspections. And when we had Demis Hassabis on the pod, he talked about breakthroughs like DeepMind's Gemini Robotics and Gemini Robotics-ER, which are enabling robots that interpret language and adapt in unstructured environments.

Then there's also Cardinal Robotics, which sells cleaning robots that are two to four feet tall. They have raised $800 million in capital to pay robot manufacturers up front, allowing them to lease the robots to businesses in exchange for a monthly fee that's more manageable for many business owners. So it's clear we might be in the first inning, but this age of robotics is coming. Tesla claims they're going to make huge advances in robotics this year too.

So I would love to hear your agree or disagree: these parallel breakthroughs in LLMs that can power robotics and in hardware manufacturing are unlocking access for businesses of all sizes to deploy versatile AI-driven machines in everyday operations. So people would normally expect my answer to be just a straightforward 1,000%

agree on this. And by the way, obviously this is certainly what's going to be happening, and the only real question is timeframe. So I think the short answer to this is: in a medium timeframe, at least, the answer is absolutely yes, agree.

But the nuances are that I actually think the world of physics, the world of atoms, is much harder than the world of bits. It doesn't mean it's impossible. I mean, for example, what we're accomplishing with AI in terms of visual image recognition is extremely important. And that visual image recognition unlocks everything from Aurora self-driving trucks to...

radiology in cancer diagnosis. And all of these things actually, in fact, really do play over in certain ways into the world of atoms. But the usual over-generalization is, "And now it'll be the whole world of atoms." It's like saying, for example, "We're going to create a simulation of the entire world, and then we're going to figure out fusion within the simulation of the entire world." And you're like,

that's great science fiction; it might be true someday, but it's not true soon. What I think will almost certainly happen, since this is the Reid riffs where we're dunking on private equity people,

is that all of the private equity people think it's happening really, really soon, or are betting really hard on that. And there's a reason why I started Manas AI with Sid and not robotics companies: because biology is actually intermediate between bits and atoms. It's much more like digital atoms, much more like code; even with complex environments, it's much more amenable to that sort of thing. And so,

that's part of the reason why I think, okay, the robotics stuff is going to come, and it's going to come in waves. As for what the first wave is, people tend to be predicting it as just robots everywhere, and that is probably the prediction I'm most skeptical of being the first wave.

And because there are these things that could be massively beneficial to humanity, surgical robots, self-driving cars being so much safer, people not having to go into mines and so being better for their health, do you think there should be government investment or some other kind of investment there so that we can bring those on more quickly? Or is the status quo where we should be? Well, there are forms of government investment that I think can really work.

And there are ways of doing smart government investment. But here, again, I'm nuanced in that I'm neither left nor right on this. The right tends to say the only kind of government investment you should do is tax breaks, and the left tends to say you should spin up New Deal-style programs for government investment.

And there are places where the left is more right, for example, in funding of basic science and all the things that right now are under a massive assault, for which future generations are going to pay the price in medical deaths and other kinds of things. And there are places where the right is correct on this, which is that

allowing free enterprise and getting companies and people doing this sort of thing is, in fact, extremely important. But there are also other tool sets that governments generally haven't experimented with as much, because it's one of the places where, if you put your business thinking hat on, you'd say, okay, look, it's generally true that governments are not VCs, can't pick winners, and shouldn't be trying to.

"Well, that means they shouldn't be involved" is the classic right-wing answer. And it's like, well, actually, what if you said, "Hey, we'll invest in venture capital that's investing in things that will be creating, for example, U.S. jobs. Here is a format by which we identify venture capitalists who are not government employees, who have private-market equity incentives that are validated by people who invest in VCs a lot: foundations, universities, etc.

And we will put in matching funds for them, with a huge benefit to the fund and the LPs if they can demonstrate they're creating businesses that create U.S. jobs." And we, the public, are not looking to make money off this; we're using the fact that we have liquidity and capital to incentivize the market's creation of these jobs. So I tend to think that there are ways that governments can invest in this. It's just that you have to be clever about it,

engaging the networks and the venture-capital mindset, and asking how you get a network of entrepreneurs. And by the way, of course, one of the things that drives most lefties nuts is that you have to enable people to get very wealthy doing this, because that's part of the incentive mechanism that then creates the value for future generations of society.

And we've run the experiment where that incentive mechanism is absent, and it never works. It doesn't work. Right. Yep. And if someone says, "It'll work this time," it's like, well, okay, read history. Go through the multiple societies that have tried this and make an argument for why this time is different, and then I will take your anti-capitalist position more seriously. But until you've done that,

you're really not credible; you're just a sloganeer. Look, one of the things I think is that Adam Smith gets overly praised and overly criticized.

And part of it is that he didn't just write The Wealth of Nations; he also wrote The Theory of Moral Sentiments. Part of what I think people have to take seriously is that the functioning of capitalism doesn't mean it doesn't have issues and challenges. It's such an efficient, good mechanism, it can create so much impetus in a direction, that you go, "Whoa, whoa, whoa, that's making environmental changes. We need to rebalance the incentives some."

But there was also the question of how capitalism is a way we are of service to each other.

How is it that I create products and services for you? And that is a great transformation of human nature in how we operate. And of course, in part of our current political times, part of what Adam Smith and others, but Adam Smith was one of the original gangsters in this, were geniuses about was: how do we make human existence win-win?

Like, how do we make geopolitical treaties win-win?

How do we make that? How do we organize that? And I'll close by going to one of my favorite books, which is Nonzero by Robert Wright, because the building of systems that prefer non-zero-sum outcomes is part of the progress we make in society. And I think the people who do that

are part of the good people and the people who destroy it are part of the bad people. I mean, I couldn't agree more. I was saying this to someone the other day. The idea that I could create a product that someone would pay money for...

oh my God, they liked my product so much that they were willing to part with their hard-earned dollars, or I was such a good consultant that they were willing to spend tens of thousands of dollars on my insights. That feels incredible: to be able, to your point, to do that service for someone, and for them to be excited about parting with their dollars because they're getting something more. So Reid, anytime we can dunk on the left and the right and private equity, it feels good. Thank you for joining us today. And we will continue to do that. Always fun.

Possible is produced by Katie Sanders, Edie Allard, Tanasi Dilos, Sarah Schleid, Vanessa Handy, Aaliyah Yates, Paloma Moreno-Gimenez, and Muliha Agudelo. Special thanks to Surya Yalamanchili, Saeeda Sepiyeva, Ian Ellis, Greg Beato, Parth Patil, and Ben Rallis.