Welcome to the Quanta Science Podcast. Each episode, we bring you stories about developments in science and mathematics. I'm Susan Vallett. In this episode: researchers have shown that adding more qubits to a quantum computer can make it more resilient. It's an essential step on the long road to practical applications. That's next.
Quanta Magazine is an editorially independent online publication supported by the Simons Foundation to enhance public understanding of science.
How do you construct a perfect machine out of imperfect parts? That's the central challenge for researchers building quantum computers. The trouble is that their elementary building blocks, called qubits, are exceedingly sensitive to disturbance from the outside world. Today's prototype quantum computers are too error-prone to do anything useful.
In the 1990s, researchers worked out the theoretical foundations for a way to overcome these errors, called quantum error correction. The key idea was to coax a cluster of physical qubits to work together as a single, high-quality, logical qubit. The computer would then use many such logical qubits to perform calculations. They'd make that perfect machine by transmuting many faulty components into fewer reliable ones.
Here's Michael Newman, an error correction researcher at Google Quantum AI, on quantum computers: They get exponentially better as they get bigger. And so that's really the only path that we have, that we know of, towards building a large-scale quantum computer. This computational alchemy has its limits. If the physical qubits are too failure-prone, error correction is counterproductive.
In that case, adding more physical qubits will make the logical qubits worse, not better. But if the error rate goes below a specific threshold, the balance tips. The more physical qubits you add, the more resilient each logical qubit becomes.
Now, in a recent paper published in Nature, Newman and his colleagues at Google Quantum AI have finally crossed the threshold: Our quantum computers have finally passed a threshold in terms of quality, beyond which our encoded qubits get better as they get bigger. The researchers transformed a group of physical qubits into a single logical qubit, then showed that as they added more physical qubits to the group, the logical qubit's error rate dropped sharply.
David Hayes is a physicist at the quantum computing company Quantinuum: It's really exciting to see that become a reality because the whole story hinges on that kind of scaling. The simplest version of error correction works on ordinary classical computers, which represent information as strings of bits, zeros and ones. Any random glitch that flips the value of a bit will cause an error.
You can guard against errors by spreading information across multiple bits. The most basic approach is to rewrite each zero as 000 and each one as 111. Any time the three bits in a group don't all have the same value, you'll know an error has occurred, and a majority vote will fix the faulty bit.
But the procedure doesn't always work. If two bits in any triplet simultaneously suffer errors, the majority vote will return the wrong answer. To avoid this, you could increase the number of bits in each group. For example, a 5-bit version of this repetition code can tolerate two errors per group. But while this larger code can handle more errors, you've also introduced more ways things can go wrong.
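Here's a minimal sketch of that classical scheme in Python, assuming independent bit flips and error-free majority voting (the function names are just for illustration):

```python
from collections import Counter
from math import comb

def encode(bit, n=3):
    """Repeat a single bit n times, e.g. 0 -> [0, 0, 0]."""
    return [bit] * n

def decode(bits):
    """Majority vote: return whichever value appears most often."""
    return Counter(bits).most_common(1)[0][0]

assert decode([0, 1, 0]) == 0   # one flipped bit is corrected
assert decode([1, 1, 0]) == 1   # two flips in a triplet fool the vote

def logical_error_rate(p, n):
    """Chance that a majority vote over n copies comes out wrong,
    assuming each copy flips independently with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

# At a low flip probability the 5-bit code beats the 3-bit code;
# at a high one, the extra bits only add new ways to fail.
for p in (0.01, 0.4, 0.6):
    print(p, logical_error_rate(p, 3), logical_error_rate(p, 5))
```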
The net effect is only beneficial if each individual bit's error rate is below a specific threshold. If it's not, then adding more bits only makes your error problem worse. As usual in the quantum world, the situation is trickier. Qubits are prone to more kinds of errors than their classical cousins. It's also much harder to manipulate them.
Every step in a quantum computation is another source of error, as is the error correction procedure itself. What's more, there's no way to measure the state of a qubit without irreversibly disturbing it. You must somehow diagnose errors without ever directly observing them. All of this means that quantum information must be handled with extreme care.
John Preskill, a quantum physicist at the California Institute of Technology, says quantum information is intrinsically more delicate, so you have to worry about everything that can go wrong. At first, many researchers thought quantum error correction would be impossible.
They were proved wrong in the mid-1990s, when researchers devised simple examples of quantum error correcting codes. But that only changed the prognosis from hopeless to daunting. When researchers worked out the details, they realized they'd have to get the error rate for every operation on physical qubits below 0.01%. Only 1 in 10,000 could go wrong.
And that would just get them to the threshold. They would actually need to go well beyond it; otherwise, the logical qubits' error rates would decrease excruciatingly slowly as more physical qubits were added, and error correction would never work in practice. Nobody knew how to make a qubit anywhere near good enough. But as it turned out, those early codes only scratched the surface of what's possible.
In 1995, Russian physicist Alexei Kitaev heard reports of a major theoretical breakthrough in quantum computing. The year before, American applied mathematician Peter Shor had devised a quantum algorithm for breaking large numbers into their prime factors. Kitaev couldn't get his hands on a copy of Shor's paper, so he worked out his own version of the algorithm from scratch. And it turned out to be more versatile than Shor's.
Preskill was excited by the result and invited Kitaev to visit his group at Caltech. The brief visit, in the spring of 1997, was extraordinarily productive.
Kitaev told Preskill about two new ideas he'd been pursuing: a topological approach to quantum computing that wouldn't need active error correction at all, and a quantum error-correcting code based on similar mathematics. At first, Kitaev didn't think that code would be useful for quantum computations. Preskill was more bullish and convinced him that a slight variation of his original idea was worth pursuing.
That variation is called the surface code. It's based on two overlapping grids of physical qubits. The ones in the first grid are data qubits. These collectively encode a single logical qubit. Those in the second are measurement qubits. These allow researchers to snoop for errors indirectly, without disturbing the computation.
This is a lot of qubits, but the surface code has other advantages. Its error-correcting scheme is much simpler than those of competing quantum codes. It also only involves interactions between neighboring qubits. That's the feature Preskill found so appealing. In the years that followed, Kitaev, Preskill, and a handful of colleagues fleshed out the details of the surface code.
In 2006, two researchers showed that an optimized version of the code had an error threshold around 1%. That's 100 times higher than the thresholds of earlier quantum codes. These error rates were still out of reach for the rudimentary qubits of the mid-2000s, but they no longer seemed so unattainable.
Despite these advances, interest in the surface code remained confined to a small community of theorists, people who weren't working with qubits in the lab. Their papers used an abstract mathematical framework foreign to the experimentalists who actually worked with qubits.
John Martinis is a physicist at the University of California, Santa Barbara, and one of those experimentalists: It was very much a theoretical notion that people really didn't understand. You know, it was like me reading a string theory paper. In 2008, a theorist named Austin Fowler set out to change that by promoting the advantages of the surface code to experimentalists throughout the United States.
After four years, he found a receptive audience in the Santa Barbara group led by Martinis. Fowler, Martinis, and two other researchers wrote a 50-page paper that outlined a practical implementation of the surface code. They estimated that with enough clever engineering, they'd eventually be able to reduce the error rates of their physical qubits to 0.1%, far below the surface code threshold.
Then, in principle, they could scale up the size of the grid to reduce the error rate of the logical qubits to an arbitrarily low level. It was a blueprint for a full-scale quantum computer. Of course, building one wouldn't be easy.
Cursory estimates suggested that a practical application of Shor's factoring algorithm would require trillions of operations. An uncorrected error in any one would spoil the whole thing. Because of this constraint, they needed to reduce the error rate of each logical qubit to well below one in a trillion. For that, they'd need a huge grid of physical qubits.
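Here's a back-of-the-envelope sketch of that one-in-a-trillion constraint, assuming the errors strike independently and taking "trillions" to mean roughly 10^12 operations:

```python
from math import exp

ops = 1e12  # roughly the "trillions of operations" mentioned above
for eps in (1e-11, 1e-12, 1e-13):
    # With independent failures, the chance the whole computation
    # survives is about (1 - eps)**ops, i.e. roughly exp(-ops * eps).
    print(eps, exp(-ops * eps))
# Only when eps is well below 1/ops does the success probability
# get close to 1, which is why the target sits well below one in a trillion.
```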
The Santa Barbara group's early estimates suggested that each logical qubit might require thousands of physical qubits. Here's Martinis: That just scared everyone. It kind of scares me too. But Martinis and his colleagues pressed on anyway, publishing a proof-of-principle experiment using five qubits in 2014. The result caught the eye of an executive at Google, who soon recruited Martinis to lead an in-house quantum computing research group.
Before trying to wrangle thousands of qubits at once, they'd have to get the surface code working on a smaller scale. It would take a decade of painstaking experimental work to get there. When you put the theory of quantum computing into practice, the first step is perhaps the most consequential: what hardware do you use?
Many different physical systems can serve as qubits, and each has different strengths and weaknesses. Martinis and his colleagues specialized in so-called superconducting qubits, which are tiny electrical circuits made of superconducting metal on silicon chips. A single chip can host many qubits arranged in a grid, precisely the layout the surface code demands.
The Google Quantum AI team spent years improving their qubit design and fabrication procedures, scaling up from a handful of qubits to dozens, and honing their ability to manipulate many qubits at once. In 2021, they were finally ready to try error correction with the surface code for the first time.
They knew they could build individual physical qubits with error rates below the surface code threshold, but they had to see if those qubits could work together to make a logical qubit that was better than the sum of its parts. Specifically, they needed to show that as they scaled up the code by using a larger patch of the physical qubit grid to encode the logical qubit, the error rate would get lower.
They started with the smallest possible surface code, called a distance 3 code. It uses a 3x3 grid of physical qubits to encode one logical qubit, plus another 8 qubits for measurement, making a total of 17. Then they took one step up, to a distance 5 surface code, which has 49 total qubits.
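Those counts follow a simple pattern: a distance-d patch of this code uses d-by-d data qubits plus one fewer measurement qubit, for 2d² - 1 physical qubits in total. A quick sketch, consistent with the 17 and 49 here and the 97 quoted later for distance 7:

```python
def surface_code_qubits(d):
    """Total physical qubits in a distance-d surface code patch:
    d*d data qubits plus d*d - 1 measurement qubits."""
    return d * d + (d * d - 1)

print([surface_code_qubits(d) for d in (3, 5, 7)])  # [17, 49, 97]
```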
In a 2023 paper, the team reported that the error rate of the distance 5 code was ever so slightly lower than that of the distance 3 code. It was an encouraging result, but inconclusive. They couldn't declare victory just yet. And on a practical level, if each step up only reduces the error rate by a smidgen, scaling won't be feasible. To make progress, they would need better qubits.
The team devoted the rest of 2023 to another round of hardware improvements. At the beginning of 2024, they had a brand new 72-qubit chip, codenamed Willow, to test out. They spent a few weeks setting up all the equipment needed to measure and manipulate qubits. Then they started collecting data. A dozen researchers crowded into a conference room to watch the first results come in.
Kevin Satzinger is a physicist at Google Quantum AI who co-led the effort with Newman. No one was sure what was going to happen. We knew that it was going to be better because the coherence was better, but there are a lot of details in getting these experiments to work.
And we even had a betting pool. It wasn't actually betting, but people could put in their guess on a spreadsheet for bragging rights. Then a graph popped up on the screen. The error rate for the distance 5 code wasn't marginally lower than that of the distance 3 code. It was down by 40%.
Over the following months, the team improved that number to 50%. One step up in code distance cut the logical qubit's error rate in half. Here's Satzinger: That was an extremely exciting time, and something that's really great about being in this regime, where the basics are really working in error correction, is that every improvement you make to the physical components at this point starts to really get some leverage and make the logical performance much better. So it was very exciting, and there's kind of an electric atmosphere in the lab. The team also wanted to see what would happen when they continued to scale up. But a distance 7 code would need 97 total qubits, more than the total number on their chip.
In August of 2024, a new batch of 105-qubit Willow chips came out, but by then the team was approaching a hard deadline. The testing cycle for the next round of design improvements was about to begin.
Satzinger began to make peace with the idea that they wouldn't have time to run those final experiments: We had limited time to pull off the experiment because the devices needed to iterate for other reasons. These systems had other purposes as well, not just running the surface code.
I had even given up hope at one point, and so I was sort of mentally letting go of distance 7. Then, the night before the deadline, two new team members, Gabrielle Roberts and Alec Eichbusch, stayed up until 3 a.m. to get everything working well enough to collect data. When the group returned the following morning, they saw that going from a distance 5 to a distance 7 code had once again cut the logical qubit's error rate in half.
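To get a feel for what that repeated halving buys, here's a hedged sketch in Python. The factor of two per step reflects the result just described, but the starting error rate below is purely hypothetical:

```python
def projected_error_rate(base_rate, base_distance, target_distance, factor=2.0):
    """Extrapolate a logical error rate, assuming it drops by `factor`
    every time the code distance increases by two."""
    steps = (target_distance - base_distance) // 2
    return base_rate / factor**steps

# Hypothetical starting point: a 1% logical error rate at distance 3,
# halved with each step up in code distance.
for d in (3, 5, 7, 9, 11):
    print(d, projected_error_rate(0.01, 3, d))
```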
This kind of exponential scaling, where the error rate drops by the same factor with each step up in code distance, is precisely what the theory predicts. It was an unambiguous sign that they'd reduced the physical qubits' error rates well below the surface code threshold. Here's Newman: Seeing the distance 7 data in the morning, it was pretty wild.
I, of course, work on quantum error correction and I believe in quantum error correction. But seeing that data for the first time, plotting that line and seeing it go through those points, you know, there's a difference between believing in something and then seeing it work. That was the first time where I was like, oh, this is really going to work. The result has also thrilled other quantum computing researchers like Barbara Terhal, a theoretical physicist at the Delft University of Technology. I think it's amazing.
I think I didn't expect, actually, that they would fly through this threshold like this, because their previous paper, which was very impressive, was sort of borderline. And you don't know whether they have a lot of room to go in terms of improving the hardware. And they did demonstrate that. I think that's very impressive. At the same time, researchers recognize that they still have a long way to go.
The Google Quantum AI team only demonstrated error correction using a single logical qubit. Adding interactions between multiple logical qubits will introduce new experimental challenges. Then there's the matter of scaling up.
To get the error rates low enough to do useful quantum computations, researchers will need to further improve their physical qubits. They'll also need to make logical qubits out of something much larger than a distance 7 code. Finally, they'll need to combine thousands of these logical qubits, which adds up to more than a million physical qubits.
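To see how thousands of logical qubits turn into a million-plus physical qubits, here's a rough, purely illustrative tally using the per-patch count from earlier; the code distance chosen below is hypothetical:

```python
def physical_qubit_estimate(logical_qubits, distance):
    """Very rough count: one distance-d surface code patch, with
    2*d*d - 1 physical qubits, per logical qubit. Illustrative only;
    real machine estimates include additional overhead."""
    return logical_qubits * (2 * distance * distance - 1)

# Hypothetical scale: a thousand logical qubits at distance 25
print(physical_qubit_estimate(1_000, 25))  # 1,249,000
```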
Meanwhile, other researchers have made impressive advances using different qubit technologies, though they haven't yet shown that they can reduce error rates by scaling up. These alternative technologies may have an easier time implementing new error-correcting codes that demand fewer physical qubits.
Quantum computing is still in its infancy. It's too early to say which approach will win out. Martinis, who left Google Quantum AI in 2020, remains optimistic despite the many challenges: I remember in the early '60s going camping with my dad and his six-transistor radio. The fact that there were six transistors meant it was really great. So I lived through going from a handful of transistors to, you know, billions or trillions right now. Given enough time, and if we're clever enough, we could do that.
Arlene Santana helped with this episode. I'm Susan Vallett. For more on this story, read Ben Brubaker's full article, Quantum Computers Cross Critical Error Threshold, on our website, quantamagazine.org. Explore math mysteries in the Quanta book The Prime Number Conspiracy, published by the MIT Press, available now at amazon.com, barnesandnoble.com, or your local bookstore.