
(Preview) Google’s Willow Chip, Drones as a Platform and Anduril Follow-Up, Building Inside and Outside Silicon Valley

2024/12/16

Sharp Tech with Ben Thompson

People
Andrew
Ben
Topics
Hartmut Neven: I'm delighted to announce Willow, our latest quantum chip. Willow has state-of-the-art performance across a number of metrics, achieving two major milestones. First, Willow can reduce errors exponentially as we scale up using more qubits, cracking a key challenge in quantum error correction that the field has pursued for almost 30 years. Second, Willow performed a standard benchmark computation in under five minutes that would take today's fastest supercomputers 10 septillion years, a number that vastly exceeds the age of the universe.
Ben: Willow's breakthrough lies in its error-correction mechanism: adding physical qubits improves the stability of the logical qubit, enabling more reliable quantum computation. While today's quantum-computing breakthroughs barely affect daily life, they have important future applications, such as breaking certain types of encryption algorithms and processing unstructured data. The verifiability of Google's benchmark results is questionable, and their real-world significance is limited for now. Google's enormous R&D investment has potentially major benefits for society and technological progress, but it also raises questions about technology sharing and antitrust. Compared with Apple, Google's research leans more toward basic science than directly commercializable products. Drone technology is already quite mature and has significant military applications.
Andrew: I think Google is the closest thing among today's tech companies to Bell Labs; its R&D investment benefits social and technological progress, but its monopoly position also carries social costs.

Deep Dive

Chapters
This chapter delves into Google's announcement of its new quantum computing chip, Willow. It explores the technological advancements, potential real-world applications (and limitations), and comparisons to classical computing. The discussion also touches on the implications for cryptography and the broader significance of Google's massive R&D efforts.
  • Willow chip reduces errors exponentially as it scales up.
  • A benchmark computation took Willow under five minutes, while it would take the fastest supercomputers 10 septillion years.
  • The main breakthrough is the error correction mechanic enabling stable logical qubits.
  • Quantum computing has potential applications in breaking certain encryption algorithms and improving search on unstructured data.
  • The chapter concludes that while a breakthrough, Willow's real-world impact remains in the distant future.

Transcript


Hello and welcome to a free preview of Sharp Tech. Ben, how you doing? Doing well, Andrew. How are you? I'm doing all right. How are you feeling on the way into the Bucks-Thunder Cup game on Tuesday night? Do you want to call your shot here? Well, I mean, you know, someone's got to be the first to win both the NBA Cup and the NBA Finals. So if it needs to be the Bucks, I'm happy to support that outcome.

I love to see it. I love that energy out of you. Much better energy than we had to begin the season here. So hopefully Dame and Giannis can keep the good times rolling on Tuesday night in Las Vegas. But that's not where we're going to be today. We're going to start with quantum computing, Ben. This is going to be a little bit of a new leaf for us to turn over. Yeah, maybe we should talk some more about basketball. Let's just dive into hoops for the next hour. Chris says...

Could you take a moment to explain Google's new Willow chip and quantum computing?

There seems to be a lot of hype surrounding Google's recent announcement, but I can't make heads or tails of what the real-world implications are. And Ben, just to frame this for anyone in the audience who's not familiar with Google's Willow chip or what the announcement was, I'll read a portion of Google's blog post from last week. This is from Hartmut Neven, who leads the quantum computing team out at Google:

Today I'm delighted to announce Willow, our latest quantum chip. Willow has state-of-the-art performance across a number of metrics, enabling two major achievements. The first is that Willow can reduce errors exponentially as we scale up using more qubits. This cracks a key challenge in quantum error correction that the field has pursued for almost 30 years.

Second, Willow performed a standard benchmark computation in under five minutes that would take one of today's fastest supercomputers 10 septillion years, a number that vastly exceeds the age of the universe.
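For scale, the comparison in that quote can be sanity-checked with quick arithmetic (a sketch; "septillion" is read as the US short-scale 10^24, and 13.8 billion years is the standard figure for the age of the universe):

```python
# How many universe-lifetimes is the claimed classical runtime?
benchmark_years = 10 * 10**24        # 10 septillion years
universe_age_years = 13.8 * 10**9    # ~13.8 billion years

ratio = benchmark_years / universe_age_years
print(f"{ratio:.1e}")  # roughly 7.2e+14: hundreds of trillions of universe-ages
```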

And then later in the blog post, they write, this mind-boggling number exceeds known time scales in physics and lends credence to the notion that quantum computation occurs in many parallel universes in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.

So, Ben, what do you think? Are we living in a multiverse here? We can start there. That feels like an unnecessary addition. I mean...

let's just say up top, we may be better placed to be talking about Q*bert, this sort of arcade game from the 80s. That was a favorite of mine. It was a tad before my time, but I could fake it on Q*bert. I don't know. If we're talking quantum mechanics, I didn't have time to audit quantum mechanics over the weekend, but we'll give it our best shot here. Yeah, one of my favorites on the Atari back in the day. Yeah, we are not going to fashion ourselves as sort of quantum mechanics

quantum computing experts, to say the least. My primary memory of quantum computing was I was at some sort of Microsoft press day sort of thing. And at the end, we went to their sort of quantum computing lab. And I had just arrived from Taiwan and was pretty jet-lagged. And I think I fell asleep

in the lab, which was unfortunate. But the way I think about quantum computing is, you have this crazy state where, in classical computing, you're dealing with bits, and the whole thing with transistors is they're either on or off. And the fact that they can be on or off

is how the whole thing is built. All of logic can be devolved down into ones and zeros, and the whole chain of how that happens is really interesting. But the long and short of it is that computers break everything down basically into ones and zeros and run these very simple calculations; there are just stacks and stacks of logic on top of this to make it all possible.

And we can sort of solve stuff. And one of the big shifts that we've talked about in the context of AI is the shift from general-purpose computing to GPU-accelerated computing.
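The serial-versus-parallel distinction being drawn here can be sketched in a few lines (an illustrative toy, not anything from the episode):

```python
# A general-purpose CPU pipeline shines at chains of dependent steps;
# a GPU applies one simple operation to many elements independently.

def serial_dependent(xs):
    # Each iteration needs the previous result -- inherently sequential.
    acc = 0
    for x in xs:
        acc = acc * 2 + x
    return acc

def data_parallel(xs):
    # The same operation on every element with no dependencies -- the
    # shape of work a GPU spreads across thousands of simple cores.
    return [x * 2 for x in xs]

print(serial_dependent([1, 1, 1]))  # 7: ((0*2+1)*2+1)*2+1
print(data_parallel([1, 2, 3]))     # [2, 4, 6]
```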

And what that does is, instead of having these relatively deep-pipeline general-purpose chips, we have simpler cores; an NVIDIA card has like 4,000 cores on it or something, and the actual ones used for AI have way more than that. And it does these simpler calculations, and it does them in parallel. So it's doing a bunch of similar calculations all at the same time. And this is parallelization. Quantum computing is like

parallelization, but it's completely orthogonal: it does all the calculations at once, at the same time. So it's kind of insane, because a qubit, unlike a regular bit, is not one or zero; it can sort of be all of the above at the same time. And right, that's what I couldn't really understand: how you could possibly build qubits

that are anywhere approaching reliability. No, you can't. I mean, that's part of the breakthrough here. So you have these qubits that can be sort of at all states between zero and one. The problem is, you go to measure it

And then it flattens down into either a zero or one. Right, it changes. Right. And there's all these crazy things, like different qubits can be entangled, so when one moves, the other one moves, sort of X, Y, Z. I think the bigger breakthrough here is you have a physical qubit, and then you have what's called a logical qubit, which is the qubit you actually do computation with.

And one of the challenges is you have physical qubits that are very hard to keep in state. And what they've done is basically shown that you can scale up the number of physical qubits such that they correct each other and keep each other in place, so that the logical qubit, which sits on top of the physical qubits, stays in state long enough for you to do calculations on it.

And they've basically shown that we can add more physical qubits to make it more stable, which is a big breakthrough, because usually with more physical qubits, they would get less stable. But they've figured out this error-correction mechanic where, when one falls out of state, the other ones that are part of the logical qubit correct it, and they're all correcting each other such that the logical qubit stays in its state long enough that you can use it for a calculation. Yeah.

That right there, I think, is sort of the main breakthrough here, because that's a prerequisite to doing anything: you have something that you can operate on consistently, and you have a path towards it being more useful over time.
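The idea of physical qubits "correcting each other" has a classical analogy: a repetition code, where one logical bit is spread across several physical bits and majority voting pulls stray flips back. This is only a sketch of the concept; Willow's actual scheme is a surface code, which is far more involved.

```python
import random

def encode(logical_bit, n_physical=5):
    # One logical bit, redundantly stored across several physical bits.
    return [logical_bit] * n_physical

def apply_noise(bits, flip_prob, rng):
    # Each physical bit independently "falls out of state" with some probability.
    return [b ^ 1 if rng.random() < flip_prob else b for b in bits]

def decode(bits):
    # Majority vote: the healthy copies correct the flipped ones.
    return int(sum(bits) > len(bits) / 2)

rng = random.Random(42)
trials = 10_000
failures = sum(
    decode(apply_noise(encode(1), flip_prob=0.10, rng=rng)) != 1
    for _ in range(trials)
)
# A single bit fails 10% of the time; the 5-bit logical encoding fails only
# when 3 or more bits flip at once, which is roughly a 0.9% event.
print(failures / trials)
```

Adding more physical bits drives the logical error rate down further, which is the scaling behavior Willow demonstrated for real qubits.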

So it's a genuine breakthrough. It's also one that's like five years on from the last sort of breakthrough, or whatever it is. This is a field that has applications. I mean, the classic one everyone talks about is breaking certain kinds of encryption. A hard problem underlying a lot of encryption is you multiply two numbers together and you get a number N. But to devolve that back to the numbers that created it,

You have to factor it, right? I remember factoring from middle school math or whatever, where you're sort of breaking apart a number into its different component pieces so you can figure out what the variables are. I hated factoring back in the day. But instead of calculating every single possibility, which you would do with classical computing, this idea that you could sort of do everything,

all the possibilities sort of simultaneously, and figure out what it is, in theory means you could get to a state where certain sorts of encryption algorithms, you could crack them very quickly. Pretty quickly, yeah. But this doesn't apply to all encryption algorithms, just some of them. Some of them are more easily broken. There are workarounds against this. It's not like encryption is going to be dead. And also, we're years and years away from this being possible, because we're talking super small scale at this point.
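The brute-force factoring being described looks like this (a toy sketch; real RSA moduli are hundreds of digits long, and Shor's algorithm is the quantum method that would factor them in polynomial time):

```python
def trial_factor(n):
    # Try divisors up to sqrt(n). The work grows exponentially in the bit
    # length of n, which is why 2048-bit RSA keys are safe classically.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

# 3233 = 53 * 61, the classic textbook RSA toy modulus.
print(trial_factor(3233))  # (53, 61)
```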

So that's one. There are also certain search applications, particularly on unstructured data, where, again, right now there are lots of applications where we just sort of do it serially. We try every possible way.

And then we eventually find the right one. And that's where they come out with these predictions. Oh, yeah. To do this with classical computing would take 10 septillion years or whatever it is, because that's assuming you're going through and trying every single one. And what if you could try a bunch of them sort of simultaneously, all at the same time, in the different multiverses or whatever it might be? So there's real applications here. This is real progress.
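The "try every possible way" search can be made concrete. Classically, finding one marked item among N unstructured items takes about N/2 checks on average; Grover's algorithm would need only on the order of sqrt(N) quantum queries, a quadratic rather than exponential speedup. (The numbers below are illustrative, not from the episode.)

```python
import math

def serial_search(items, predicate):
    # Unstructured data: no index, no ordering -- just scan and check.
    checks = 0
    for item in items:
        checks += 1
        if predicate(item):
            return item, checks
    return None, checks

n = 1_000_000
found, classical_checks = serial_search(range(n), lambda x: x == 765_432)
grover_scale = int(math.isqrt(n))  # ~1,000 queries for a million items

print(classical_checks, grover_scale)  # 765433 1000
```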

It has basically no implication for the world today as it is, or in the immediate-to-medium future. I mean, I guess the long term goes to infinity, so it has its applications there. Long term goes to septillion, Ben. Absolutely. So yeah, it's a breakthrough. It's cool. It's not really pertinent, I think, to anyone, and not really pertinent to...

the sort of stuff we talk about on Sharp Tech or on Stratechery. And I probably, even in this very high-level, fuzzy overview, got details wrong, so don't kill me for it. But I appreciate the effort. And I appreciate the question from Chris, because I've seen sporadic mentions of quantum computing over the past few years. My wife is actually working on a project related to post-quantum cryptography for her consulting work.

And on Sharp China, there's always stories about China pursuing quantum computing and the U.S. trying to do quantum computing. And so I wasn't necessarily sure. I mean, I was coming in pretty green here. But it's good to know, at a high level, that a lot of it's related to cryptography, and in the near term, a lot of it is sort of irrelevant to the tech story that we're telling today. Right. It's not all cryptography. It's certain ones. There's a very famous algorithm called RSA that's used

for web browsing and stuff like that; that's trivially broken, so solutions would have to be found for that. There are other ones, like hashing algorithms, that aren't really impacted at all. It's not an insurmountable problem. If someone had quantum computing, or there was visibility to it being a thing in the next decade,

it would be a big lift to update everything to make it resistant. But it's sometimes framed as "all encryption is dead," and that's not the case. And that's what my wife is working on: updating systems to be quantum-resistant. Yeah. So, like Bitcoin: the mining algorithm could be solved faster, but that's just a speed-up. It's not breaking the actual keys where you hold your wallet. Yeah.

That could be broken, though, so that would have to be changed. There are different encryption methods that are impacted in different ways. But again, I don't think there's any sense or worry that China is going to figure it out tomorrow when we thought it was decades away. I think everyone's pretty clear it's a long ways away. There's something here. It's certainly very compelling. The entanglement aspect of qubits being connected across space,

even though they wouldn't be close to each other, is insane. Quantum mechanics breaks your brain to a certain extent. And that's why you throw in this multiverse sort of aspect of it.

I mean, sure. You know, we're recording this; it is 9:47 a.m. on a Monday morning. I'm probably not in the right state of mind to explore those possibilities. Well, I appreciated Google for throwing that in there, because it made the story buzzier and more viral, and it's why we got a number of questions about Google's quantum computing efforts. Yeah, this whole test is silly, because it's like simulating...

It's like using quantum mechanics to simulate quantum mechanics. And the comparison to classical computing, it's not actually verifiable, because, number one, no one has tried to do it, for obvious reasons: it would probably take forever. So they're comparing it to a theoretical output. And there's actually been, in the past, I don't have it offhand, but I know there have been tests they've done, and then someone's like, oh,

actually, we could do it in classical computing. They figure out some new algorithm that does it just as fast. I think that's probably not the case here. But the actual test has no bearing on anything in real life, or, again, on solving real challenges that we face today. Okay. I think there is a real breakthrough here, and it's really cool. I also, my view right now is this has no real impact on day-to-day life. So it's a fun story, but nothing to get too worried about.

Okay, there was one other tweet that I wanted to read here before we move on. A listener named Sam sent us this note from Trunk Fan, who tweeted this in response to the Willow freakout that everyone was having. He wrote,

Waymo, AlphaFold, the Transformer Gen AI paper, Willow Quantum Computing, chips, and triple unskippable back-to-back-to-back pre-roll YouTube ads. Ben, do you agree that if any company has a claim to the Bell Labs legacy, it's Google?

Absolutely. I mean, I think we've talked about that on this podcast before. And this is, you know, I mean, I guess it says something about me that I'm the wrong person to ask about quantum computing, whereas this question gets me much more excited, and my sort of brain gears are turning. Let's roll up our sleeves and talk through it. No, because it's an interesting sort of governance and philosophical question, right? Should we say, Google, you won search, congratulations? Is it actually...

a better net positive that instead of going after them and trying to push against a string, as I frequently talk about, and undo this search monopoly, we should say, hey, yeah, just keep shoveling your money into these science projects? Should there be something similar to Bell Labs? Part of the settlements the government had with AT&T over the years is they couldn't go into those industries; they could only use those inventions for their own business. And it worked out, because things like transistors and information, you know,

What's the word I'm looking for? I mean, all the stuff they were working on was relevant to phone service, and so they got a benefit. But obviously, it basically birthed modern computing, and it's hard to argue that that wasn't a big net win. Bell Labs was basically the R&D department of the U.S. This is a point that Byrne Hobart made in his book, and in our interview a couple weeks ago.

It was basically the R&D lab for the U.S. government. The U.S. government could basically not have to spend all the money to do public R&D; they could just tolerate the AT&T monopoly and say, you spend all the money on R&D. And again, U.S. technological power is predicated on...

a lot of inventions that came out of Bell Labs. Now, is there some sort of disconnect here, where what Google does ought to be shared more broadly? Transformers is a good example of what we want: it was a public paper, anyone could pick up on it and do it, and they did. And the fact that it's not patented is a real victory. Should there be something similar with a Waymo or whatever it might be? Maybe. I mean, and is it really a...

criticism that, oh, they're inventing stuff that's not being monetized? You know, I'm not completely sure how to structure this. Do you just sort of back into a Bell Labs situation, or can you actually do it with purpose? I certainly tend to agree that a lot of the modern antitrust crusaders spend a lot of time on challenges that they're not going to solve, and that they,

unfortunately, have the outcome of not really helping anyone, including potentially killing stuff like this. So I think that's a fair critique. But hey, it is an interesting philosophical discussion. What do you think?

First of all, yes, I think that Google is the closest we have to a modern-day Bell Labs. I mean, if we could get a look at the internal projects from every company in big tech, I think Apple would be the most entertaining to see, you know, the Apple car and various failed ideas over the last 15 or 20 years. But Google, if everything they were working on was just released to the public domain tomorrow, I would guess, based on what I've read over the last couple of years, that their tech

would have the most benefits to society at large and technological progress. And it's almost more interesting to have a company like that existing today than it was during the Bell Labs era because now it feels like the government...

I don't know that the government is capable of pushing forward on the technological front, whereas 50, 75 years ago, I would trust the government more in that area. And so it seems like we've sort of outsourced a lot of technological innovation and research and R&D just generally to the private sector.

And there are real benefits to that. And if that went away, I'm not sure who would be pushing forward, if it's not a company like Google that's sitting on a mountain of cash and can afford to take moonshots in a variety of areas. I do think that's a real benefit to consider in this conversation. Yeah, well, I mean, governments can still kill stuff, though. And I mean, you folks seem to be going after Google. Just leave Google alone.

Well, look, the theme of the podcast: there are tradeoffs in every area. There's a real cost to competition and innovation with Google's success, but they also are funding crazy moonshots and empowering Hartmut Neven to do his worst out there and push forward on quantum computing. So, I mean, we'll see where it all goes. There are other folks working on it. I know Microsoft's been working on it for a long time.

Yeah, the Apple mention is interesting, because Apple generally does not do this kind of thing. And this was actually a comparison between NVIDIA and Apple that stood out to me when I was reading Tae Kim's book: Apple is pretty relentlessly focused on inventing

for products. Like, if it's not obviously commercializable... Again, the Apple car: it wasn't theoretical, it was, could this be a product? And once it was determined it's not going to be a product, then we're not going to just keep doing research. And that, for the record, is why Apple would be so entertaining, because these are things that we might have used and that might have gone to market. Whereas Google, I feel like, is

a little bit more abstract and you'd have to be like a real developer or engineer to appreciate the brilliance that exists there and could benefit the world. Right. And from a shareholder and sort of analyst perspective, it's like, what are you wasting money on? Like, what's the point here? Totally.

Yeah, well, we'll see. In any event, to keep it moving: Stu says, it seems like you're purposely avoiding talking about these drone sightings, probably because anything you say would be pure speculation. Stu, that is correct. But also, no, I'm going to push back on Stu. I am not a fan of assertions of sins of omission.

Like, I have not considered talking about drones on Sharp Tech. I'm not purposely avoiding anything. It doesn't seem really relevant to my topics or what I do. So no asserting motivations to something that was not done. That's my feedback to Stu.

Well, we recorded last Wednesday night in the United States, Thursday morning your time, and this was a story that was still very much developing. So anything we said back then would have been complete speculation. But in any event, the good news is, today it's still complete speculation. It's still complete speculation.

That's why I didn't foreground it in any of our Anduril discussion. But he says, it got me thinking about the technology of drones,

which is apparently pretty much commoditized, parentheses, to the point that you can't really tell from photos who made a drone. And then he says,

What are your thoughts on drones more broadly? Also, feel free to speculate about these drone sightings, because they are kind of weird. Ben, do you have any thoughts on that take? The main reason I included this is because I loved calling drones the iPads of the sky, at least as far as consumer tech is concerned. But do you have thoughts? Yeah, I disagree. I think drones are very compelling. I mean, the military one, I think, can't be overstated.

Like, drones are pretty clearly the future of the military. They're not the future. They're the present.

All right, and that is the end of the free preview. If you'd like to hear more from Ben and I, there are links to subscribe in the show notes, or you can also go to sharptech.fm. Either option will get you access to a personalized feed that has all the shows we do every week, plus lots more great content from Stratechery and the Stratechery Plus bundle. Check it out, and if you've got feedback, please email us at email at sharptech.fm.