
In Computers, Memory Is More Useful Than Time

2025/6/3

Quanta Science Podcast

People
Ben Brubaker
Topics
Ben Brubaker: I think understanding the trade-off between space and time is essential in computer science. It's just like my example of organizing clothes: I can pile all my clothes together to save space, but then it takes me a long time to find the piece I want. Or I can put every item of clothing in its own box, alphabetized, so I can find anything instantly, but the boxes take up most of my room. These two approaches represent the fundamental trade-off between space and time. In computer science, we are likewise always searching for the best algorithm, the one that strikes the best balance between time and space. I think space is more powerful than time, because space can be reused while time cannot: once I empty a shoebox I can repurpose it, but once the year 2023 has passed it is gone for good. In that sense, space is the more valuable resource for solving problems. Of course, you always need both time and space, but space has a certain edge. Recent research shows that a small amount of space can solve every problem that requires a much larger amount of time, which further confirms the power of space. In short, I think the trade-off between space and time is a core concept of computer science, and it plays a crucial role in algorithm design and optimization.


Chapters
This chapter uses the analogy of organizing clothes to illustrate the trade-off between time and space in computation. Finding a sweater in a pile takes little space but a lot of time; organizing clothes into labeled boxes takes a lot of space but allows quick access.
  • Trade-off between time and space in computation
  • Organizing clothes analogy

Transcript


Let's say for the sake of analogy that I keep all of my clothes in a pile in the corner of my room. It's the most compact way to store them. The pile takes up so little space, but it takes forever to find anything. In this case, finding my favorite sweater takes a little space and a lot of time.

But I get sick of digging to find what I want to wear, not to mention the wrinkles, so I decide to institute a new system. I will put each and every piece of clothing into its own clearly labeled box that I then alphabetize and stack all over my room.

This way, I can find that sweater in an instant, but now my room is full of boxes. The same process, finding that sweater, now takes a ton of space, but can be done in very little time. I traded some of the space in my room for more free time in the morning. If you can think of a better way to organize my clothes and find that sweater, I'd like to hear it. And by the way, you're now thinking a little like a computer scientist.

Welcome to the Quanta podcast, where we explore the frontiers of fundamental science and math. I'm Samir Patel, editor-in-chief of Quanta Magazine, and that clothing analogy came to me from one of our staff writers here at Quanta, Ben Brubaker. Ben wrote about this trade-off between time and space in computer science last year in one of our newsletters called Fundamentals. You should definitely sign up for it at quantamagazine.org.

The most obvious solution to my clothing conundrum, to get a dresser to balance my space and time resources, is pretty simple. But getting a complete handle on how computer scientists think about space and time, that's a little more complex.

Ben writes regularly for Quanta about developments in the world of computer science, and his latest piece tracks a major breakthrough related to space and time, one that experts have called stunning and massive and beautiful, and we'd like to talk about it. So, Ben, welcome to the show. Thanks so much for having me.

Before we dive into this, what's the big idea? Where are we going with the conversation we're about to have? Right. So, yeah, I was really excited about this story. It's about one superstar researcher who proved mathematically that space is more powerful than time in a very surprising way that researchers didn't expect at all.

Before we get to this conception of space and time and everything, your beat here at Quanta is computer science and specifically what we call theoretical computer science. Can you tell us what that is? Yeah.

Yeah, so when most people think about computer science, they think programming. And that's really not what I cover at all. There's this quote from the pioneering computer scientist Edsger Dijkstra that I like a lot. He said, computer science is as much about computers as astronomy is about telescopes. And so what that means is that the fact that you're going to actually do things with computers is kind of secondary. Theoretical computer science is a way of thinking mathematically about algorithms. Now, that still sounds pretty computery, but I like to think about it in simpler terms: it's the mathematics of doing things.

And when you say doing things, the primary mechanism for actually doing things in this context is the algorithm, right? Yeah, that's right. So when theoretical computer scientists talk about computation, what they mean is like solving specific problems. So like the dresser example, the problem you want to solve is organizing your dresser. That's a computational problem, even though it doesn't use a computer. Like planning your commute is also a computational problem. All of these things are problems in the sense that you can think about solving them with sort of step-by-step procedures.

And that's what we mean when we say algorithms. So different algorithms are like different procedures for solving these problems. So we have a problem: how do we alphabetize all the books on a bookshelf, which is another analogy I know you've used. And the algorithm is: you take down a book, you look at what the author's last name is, and then you take down another one. And if its author comes

before the first author's name in the alphabet, you put it in front; if it comes after, you put it on the other side. And that is a specific set of instructions that allows you to solve a task or a problem. Exactly. And so...
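
In code, that shelving procedure is essentially an insertion sort. A minimal Python sketch (the book list here is purely illustrative):

```python
def alphabetize(books):
    """Shelve books one at a time, inserting each just before the
    first shelved book whose author comes later in the alphabet."""
    shelf = []
    for book in books:
        i = 0
        while i < len(shelf) and shelf[i] < book:
            i += 1
        shelf.insert(i, book)
    return shelf

print(alphabetize(["Woolf", "Joyce", "Morrison", "Baldwin"]))
# -> ['Baldwin', 'Joyce', 'Morrison', 'Woolf']
```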

Immediately from this, you can see that there are some algorithms that are going to be a lot better than other algorithms. I could just throw all my books on the floor, put them on the shelf in a random order, and check: is that alphabetical? Oh no, it isn't alphabetical. Let me try it again. That's a bad algorithm, and there are going to be a lot of better algorithms. So computer scientists are often interested in: what's the best algorithm?
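
That throw-and-check strategy even has a joking name among computer scientists: bogosort. A hedged sketch of why it's so bad (the expected number of shuffles grows roughly like n factorial):

```python
import random

def bogosort(books):
    """Shuffle the shelf until it happens to be alphabetical.
    Wildly inefficient: expect on the order of n! shuffles."""
    while any(a > b for a, b in zip(books, books[1:])):
        random.shuffle(books)
    return books

print(bogosort(["Woolf", "Joyce", "Baldwin"]))
```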

That, I think, sounds like one of the fundamental questions of computer science: how do we define a good algorithm? And you mentioned earlier this idea of space and time. Yeah. And these are the fundamental resources that contribute, I think, to what makes a good or not-so-good algorithm. But what do we mean by space and time in this context?

So time is pretty similar to what we mean in everyday life. Just if you're going to do anything, like any kind of procedure, that'll take some time. And ideally, you'd like it to take less time if you can. And space is similar. It's a little bit more abstract, but you could think about it as physical space in your room, or you could think about it as memory, say. Like if you're going to do some task, to complete the task, you might need to be like, oh, I have this partial result. I have this information, and I need to keep track of that. And then I need to get some other information. I need to keep track of that. And eventually, I'm going to use all that at the end.

If I'm going through a maze, for example, I might want to write down: this is what this intersection looks like, I've been here before. But if I need to do that too often, that's also really unwieldy; my notes end up taking a lot of space. So space usage is important, and you might want to minimize it as well.
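
That maze strategy, writing down the intersections you've already seen, is essentially a depth-first search with a visited set. A minimal sketch, assuming the maze is given as an adjacency dictionary (the names here are hypothetical):

```python
def can_escape(maze, start, exit_spot):
    """Explore a maze. The `visited` set is the space cost: one
    note per intersection we have already seen."""
    stack, visited = [start], {start}
    while stack:                       # each loop iteration costs time
        spot = stack.pop()
        if spot == exit_spot:
            return True
        for neighbor in maze[spot]:
            if neighbor not in visited:
                visited.add(neighbor)  # one more note = one more unit of space
                stack.append(neighbor)
    return False

maze = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(can_escape(maze, "A", "C"))      # -> True
```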

And theoretically, we're abstracting this, but this is almost literally what's happening in our computers, for example. Yeah, exactly. Like we know when our computer slows down, it's taking more time to do something. And we know that, oh, I've got a million tabs open, I'm using up a lot of memory. I'm taking up space.

Yeah. So what the abstraction does is it allows you to pose these questions in a more precise way. Because if I want to say, okay, like this algorithm uses a lot of space, this algorithm uses a lot of time, which one is better? I'm going to compare a minute to a megabyte. Like, good luck figuring out what that's supposed to mean. Exactly. Exactly. Or even if I just think only about time, it's like,

I have this algorithm. I run it on my computer. It's really fast. You run it on your computer and your computer is really old. Now it's slow. I shouldn't be counting that against the algorithm. But how do I know which time to use, you know? So defining these in really mathematical terms is a way of getting rid of all this unnecessary detail, of getting to the core of what the algorithm is really like: how fast is it, really, in some more fundamental sense?
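
One standard way to strip out that old-computer detail is to count elementary steps instead of seconds. An illustrative sketch:

```python
import time

def linear_search_steps(items, target):
    """Count comparisons rather than seconds: the step count is
    identical on any machine, while wall-clock time is not."""
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            break
    return steps

data = list(range(1_000_000))
start = time.perf_counter()
steps = linear_search_steps(data, 999_999)
elapsed = time.perf_counter() - start
print(steps)    # 1,000,000 on any machine
print(elapsed)  # varies with hardware, load, interpreter version...
```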

And this factors into something that I think is one of the core computational issues in computer science, which is computational complexity theory. And so now we're going to take a next step to try and get toward what the new finding was. So how does computational complexity theory factor into all of this?

So everything we've talked about so far, you can still ask these questions on a bunch of different levels. So you could say, I'm going to ask about specific algorithms for dresser sorting, and I'm going to just compare all of them and try to ask what's the best algorithm for this problem. And, you know, it gets really nitty gritty into the details. Computational complexity is like the 10,000 foot view of the same subject. So it's like computer scientists are really asking questions about...

what is time? How powerful is time? How powerful is space? What does it mean to prove something? So they're asking these really lofty questions that are about these same kind of resources, like questions about computational difficulty and resources, but they're asking them in a much more zoomed out way. So within computational complexity theory, is there a sense of whether space or time is like a more valuable resource to completing some task in a

abstract mathematical space or in the real world? That's a great question. Let me just start by saying, like, you always need both space and time. You're never going to have an algorithm that just takes zero time. But you could still ask this question: is space more powerful than time? And the answer turns out to be yes. And there's a simple way to think about this in real life. For example:

If I have a shoebox and I take the shoes out of it, I can use that box for something else. But the year 2023? I'm like, oh, cool, I'm done with 2023, just like I was done with my shoebox. But I cannot reuse the year 2023. It's just gone. This is paraphrasing Scott Aaronson, who's a great blogger, a theoretical computer scientist actually on Quanta's board. He has a great blog post about this. So in this real sense, in real life, you can reuse space in a way that you can't reuse time.

And so that gives us a little bit of intuition that maybe, if I have a certain amount of space and I compare that to the same amount of time (again, we have to be very mathy about what a certain amount of space and time means), if I do that apples-to-apples comparison the right way, the space should probably net me more. Equivalently, maybe I could get away with less: a little bit of space gives me the same amount of value as a lot more time. So there is this intuition about it.
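
That reusable-shoebox intuition shows up directly in code: a single variable can be overwritten at every step, so n steps of time can get by with one box of space. A small illustrative sketch:

```python
def running_max(stream):
    """n steps of time, but a single reusable 'shoebox' of space:
    `best` is emptied and refilled at every step."""
    best = None
    for value in stream:   # time: one step per item, never reusable
        if best is None or value > best:
            best = value   # space: the same cell, reused again and again
    return best

print(running_max([3, 1, 4, 1, 5, 9, 2, 6]))  # -> 9
```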

You use this keyword intuition. Right. Which is one of these things where we think something is true. We have a suspicion that it's true. But turning that into some kind of mathematical truth, like we talked about proofs last week, for example, is a leap and something that takes a lot of work. So there is a context in which

this new finding, for example, has started to establish in more definite terms the primacy of space over time. That's right. Yeah. So again, it's very similar to research mathematics, where these researchers are only satisfied if they can completely rigorously prove something. And so I said before that this result was surprising. But then I said that researchers do expect space to be more powerful than time, so in that sense it's not that surprising. It's actually surprising in a different way.

So what researchers thought was that, like, if I have a little bit of space, that should actually help me solve a fair number of problems that need a lot more time. What this new work shows is that that little bit of space can actually solve literally every problem that takes a certain amount more time. And so it's more powerful in this comprehensive, across-the-board kind of way. It's really a universal statement about space and time.

We got to this in the story, which everyone should read, through the concepts of P and PSPACE. Right. So let's take this to another level of abstraction. So what is P, and what is PSPACE? So P is basically a set of problems. It includes every problem, every task, that you can do in (I'm going to be very vague here) a reasonable amount of time. Okay.

P is problems that can be solved in a reasonable amount of time. And PSPACE (complexity theorists are just terrible at names) is the same thing for space. Okay. Things that can be solved with a reasonable amount of space. That's right. Yeah. Okay. I know there's a lot of complexity under the hood here. What does reasonable mean?

Oh, boy. Yeah. So that's a difficult question. It's really defined in a particular mathematical way. But they have a definition that makes sense to computer scientists for what constitutes reasonable, which allows them to carve out what P is and what PSPACE is. I mean, these are complex definitions, but

the researchers are consistent with them and find them useful for the sake of comparison, right? "Reasonable" is just a term of art. Exactly. So we have these two sets of problems: one that can be solved in a reasonable amount of time, one that can be solved with a reasonable amount of space. How do these two groups of problems interact with each other?
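
For reference, the textbook version of "reasonable" is polynomial in the input size n, which gives the usual definitions and the basic way the two classes interact (a sketch in standard complexity-theory notation, not from the episode):

```latex
\[
\mathsf{P} = \bigcup_{k \geq 1} \mathsf{TIME}\bigl(n^{k}\bigr), \qquad
\mathsf{PSPACE} = \bigcup_{k \geq 1} \mathsf{SPACE}\bigl(n^{k}\bigr), \qquad
\mathsf{P} \subseteq \mathsf{PSPACE}.
\]
```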

So complexity theorists believe that PSPACE is much bigger than P. Bigger in the sense that it contains a lot more problems, a bigger set of problems. There are more problems that we can solve with a given amount of space than there are problems that we can solve with a given amount of time. Exactly. And that just comes down to the same intuition that I was saying before about reusing space,

the shoebox, and 2023. So computer scientists are super generous with the word reasonable. Even if there are some things that I could solve in a reasonable amount of space but not in a reasonable amount of time, problems that take a lot of time are still included in there.

So what this latest result is, is a step toward the P versus PSPACE problem, showing that these two classes are not the same. Proving that PSPACE is bigger. Exactly. It's a step toward that by proving a smaller difference between space and time in a quantitative sense, if that makes sense. It's like space is a little bit more powerful than time. It's more powerful in this universal way; it's more powerful for all problems. But in terms of how much more powerful it is, how much extra time does a little bit of space buy you?

That's not all the way to where computer scientists want to go, to prove that PSPACE is larger than P. But this qualifies as a stunning advance because it is really nailing down, rigorously, something that to this point has been really just intuition. Intuition, yeah. It's the first real big step in this in 50 years. So that's kind of a huge deal.
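
In symbols, the quantitative statement of Ryan Williams' simulation, as described in Quanta's coverage, is that any problem solvable in time t can be solved using only about the square root of that much space:

```latex
\[
\mathsf{TIME}(t) \subseteq \mathsf{SPACE}\bigl(O(\sqrt{t \log t}\,)\bigr).
\]
```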

And you mentioned there was kind of a singular computer scientist behind this. Right, yeah. So this is a really fun story for me. There's this computer scientist, Ryan Williams, who is kind of a superstar in this field. He has a track record of big breakthroughs. And it turns out he has a very unlikely path into computer science. And what I really liked about telling this story was that he...

His mentor in college, this pivotal figure in his life who really guided him when he was struggling, was the guy who actually came up with these definitions for space and time in the first place, really one of the founders of complexity theory. So there's this really long story that starts in the 1960s and runs through now, and you can see the different historical figures passing the torch. So is it safe to say that space is more important than time for solving problems? Yes.

Yeah, I would say time is really important because you have to still spend some time. But it tells you that space is very versatile. Like, you can reuse space in a sense, like, way more than you thought you could. Like, you can pack a lot more things in a dresser, maybe. I don't know. Yeah, I wanted to ask, how does this translate back to the dresser analogy? Because...

That is a very intuitive way to think about trading off the resources of space and time in solving a problem, like finding my sweater. Yeah, exactly. So for specific problems, I should say, we already knew a lot of ways to trade off space and time. I mean, even in your introduction, you had some great examples: well, I could do this, and that uses a lot more time but less space. Yeah.

But the thing is, what this result says is really weird. It's like I came up with this strategy: oh, you know, I'm going to do this simple trick, and it will free up a little space in my dresser and use some more time. Cool. I can also use that exact same mathematical trick for planning my commute, for organizing my bookshelf, for literally any other task I could do. That same thing will save me the same amount of space relative to time. And so that's just kind of a mind-boggling thing. It's even hard to think about how that could be possible. Yeah, and this is the utility

of taking something like this and turning it from examples into a mathematical abstraction because it means you can compute something in a mathematical sense and say, okay, this should apply to all these problems. But do we have a sense of how this solution is a step towards something that actually could impact the way that computers work or problems get solved? Yeah.

Yeah, so it's not likely to have any practical applications yet. Okay. And the reason for that comes down to the fact that computer scientists are like really generous with their definition of reasonable, as I said. So to make these things mathematically precise, you want to abstract far enough from the real world that then if you tried to do this, you'd be like, wait a second.

Like, what's going on? But there is actually a lot of precedent for these super mathy results in theoretical computer science and complexity theory to, like, ultimately translate into real progress. There's a lot of complexity theory involved in cryptography, for example, keeping our data safe on the internet. That uses a ton of results from complexity theory. Yeah, of course. It's super useful every day. We wouldn't have modern life without it.

Thank you for joining us, Ben. I think everyone should go and read your story to hear more about Ryan Williams and dig into this complexity a little bit more, and ideally see some other Quanta stories that you've written that relate to complexity theory as well. Before I let you go, we'd love to ask for a recommendation. What's exciting your imagination this week?

At the risk of sounding really pretentious, the thing that has excited me most this week is Ulysses, the giant modernist novel by James Joyce. This is one of my favorite books, and I'm currently rereading it in a book club with some of my colleagues here at Quanta, among other people. And we've had some super energizing conversations about it recently. And yeah, it's really fun. I can't wait till you guys get to the end of this, then we can do a recap on Ulysses. Maybe here. Yeah, a special episode of the Quanta podcast.

All right. Well, thanks for joining us, Ben. Thank you so much. Also on quantamagazine.org this week, you can check out a math story about Fermat's Last Theorem and a Q&A with a bioengineer who advocates for frugal science, or putting the tools of science in everyone's hands. And over on our sibling podcast, The Joy of Why, you can hear a conversation about string theory with Harvard physicist Cumrun Vafa.

Today, we're going to leave you with the earliest known recording of electronic music. It was made in 1951 by the BBC at the Computing Machine Laboratory in Manchester on the Mark II machine, which was designed by computing pioneer Alan Turing. The recording was restored by two scientists in New Zealand in 2016. This is Glenn Miller's In the Mood.

The machine's obviously not in the mood.

The Quanta Podcast is a podcast from Quanta Magazine, an editorially independent publication supported by the Simons Foundation. I'm Quanta's Editor-in-Chief, Samir Patel.

Funding decisions by the Simons Foundation have no influence on the selection of topics, guests, or other editorial decisions in this podcast or in Quanta magazine. The Quanta podcast is produced in partnership with PRX Productions. The production team is Ali Budner, Deborah J. Balthazar, Genevieve Sponsler, and Tommy Bazarian. The executive producer of PRX Productions is Jocelyn Gonzalez.

From Quanta Magazine, Simon France and myself provide editorial guidance with support from Matt Karlstrom, Samuel Velasco, Simone Barr, and Michael Kanyangolo. Our theme music is from APM Music. If you have any questions or comments for us, please email us at quanta at simonsfoundation.org. Thanks for listening.