
Episode 10: What Use is Computational Theory?

2020/12/20

The Theory of Anything

People
Bruce, Cameo
Topics
Bruce: This episode explores what computational theory is used for in real life, such as understanding the speed of algorithms and what is computable. First, using the bubble sort algorithm, he explains the concept of algorithmic complexity (Big O notation) and how to evaluate an algorithm's running time. He then introduces complexity classes, including linear time, polynomial time, and exponential time, and points out the intractability of exponential-time algorithms. Bruce also discusses NP-complete problems, explaining their universality and the difficulty of finding polynomial-time algorithms for them. Finally, he introduces the concept of undecidability, using the Turing halting problem as an example of a problem that cannot be computed. He also discusses quantum computers and their implications for computational theory, in particular how Shor's algorithm solves a hard problem underlying current encryption algorithms. Cameo: Cameo mainly participates in the discussion, asking questions and responding to Bruce's explanations. She works through the concepts of algorithmic complexity, NP-complete problems, and undecidability, and joins in discussing them. Cameo also expresses interest in applications of computational theory in artificial intelligence and software development, and raises several related questions.


Chapters
The episode introduces computational theory and its practical applications, focusing on how it is used in real-life scenarios such as programming and algorithm development.

Transcript


The Theory of Anything podcast could use your help. We have a small but loyal audience and we'd like to get the word out about the podcast so others can enjoy it as well. To the best of our knowledge, we're the only podcast that covers all four strands of David Deutsch's philosophy as well as other interesting subjects. If you're enjoying this podcast, please give us a five-star rating on Apple Podcasts. This can usually be done right inside your podcast player, or you can google "The Theory of Anything podcast Apple" or something like that.

Some players have their own rating system and giving us a five-star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out.

If you are interested in financially supporting the podcast, we have two ways to do that. The first is via our podcast host site, Anchor. Just go to anchor.fm slash four dash strands, F-O-U-R dash S-T-R-A-N-D-S. There's a support button available that allows you to do recurring donations. If you want to make a one-time donation, go to our blog, which is fourstrands.org. There is a donation button there that uses PayPal.

Thank you. Welcome back to the Theory of Anything podcast. How are you doing, Cameo? I'm doing great, Bruce. How are you? Good. In a good mood. We've got a good episode today, although it's a little more technical than some of the ones we've done in the past. So I'm going to be challenging myself to try to make a more technical, mathematically oriented subject exciting still.

You get to rate me at the end on how I do. Well, and I think you need to not be too dependent on the visuals, which gives it an extra level of complexity to have to describe complex things and not be too dependent on your video. Yep. So yeah, this is going to be a little bit of a challenge, but I think we can do it. So...

All right, here we go. So like in the last episode, we talked about computational theory, which is one of my favorite subjects. And in fact, I went back to school to study artificial intelligence because of its connection with computational theory. And I'll talk about that a little bit today, kind of how I became interested in this.

But in the last episode, we talked about Turing machines and the Church-Turing thesis. I briefly touched on the Turing principle. But today, I'm going to actually try to explain to you, like, what do we really do with computational theory in real life? What would researchers be doing with it? And kind of give you an overview of what the field is, if that makes any sense. So...

We're going to start with something fairly simple. Did you ever do programming, like when you were in elementary school or high school or something? You know, right about when I was probably 9 or 10, we had a Commodore 64. Right. Yeah, you'd make little teeny programs that would...

generate, typically, visuals, or, you know, stepping numbers up through... Yeah, totally. Yeah. So I had a Commodore 64 and a TI-99/4A and learned to program both of those, and that was actually what started me into computers. So do you remember, maybe from high school or something, kind of in the back of your mind, something called bubble sort?

No, not necessarily. I mean, I've learned about it since then, but I don't remember learning about it. Okay, so, but you know what it is? Yes. Okay, so we're going to talk about bubble sort. So we've got this list of integers, which is really just zero to nine, but they're out of order, and we want to put them in order. Okay, so we're going to come up with how to do that, the series of steps. That's what an algorithm or a program is, is a series of steps. Okay.

to put them in order. So one way we might go about this is bubble sort, which is really conceptually very easy, but not necessarily the fastest way to do it. And what you would do is you would simply compare the first two digits and then swap their positions so that they're in order. And then you just kind of go through the line and you just keep comparing two digits at a time and swapping them if you need to.

Okay. So you just kind of keep doing this. I'm doing this on the screen. I've got the numbers that started out out of order. And by the time you get to the end, they're still not in order. They're more ordered now than they were before though. Okay. They're still not in order. Great. Okay.

So we ran through each of those digits and compared the two next to each other, one pair at a time. And there's 10 digits, so there's nine comparisons, because you're comparing two at a time. And so the question I have for you is, how many comparisons would you need to sort the whole list?

And to help you with this, I want you to actually think of this in two ways. Think about it if the list is already sorted: how many comparisons would it take to be sure you've sorted the list? And how many comparisons would it take to sort the list if it's in reverse sort order to begin with? So think about that for a second. Well, and you

You know, my initial gut response is it shouldn't matter what order the numbers are in, as far as how many comparisons I need to sort the list —

the comparisons have to happen to either validate the numbers or to do the comparisons. But I might be being dense. Okay, well, let's start with that. And let's pick the one that's ordered. Okay, so zero to nine. Do it in your mind. How many... So they're already in order. So obviously, the correct answer is it takes no comparisons. But the computer wouldn't know that. It would have to go through and do the bubble sort comparisons to figure out...

that they're in order. How many comparisons would it need before it would realize, oh, they're in order and I can stop? Seven. Good guess. It's actually nine.

So basically it's the exact same nine that I just showed on the other side of the screen. Right. Run through it all once. And once you've done it once and it didn't end up changing any of them, then it knows it's done. And it says, okay, good enough. Right. Okay. So if the list is ordered, it takes nine comparisons. Now take a wild guess how many comparisons you need if the list is opposite ordered. So this is literally the worst case scenario for it.

Okay. 27. Good wild guess. Thank you. So if you really think about it carefully, what it would be is: you would have to run through the list doing those nine comparisons — there's nine comparisons to run through the list once — and you would have to run through the list nine times. Yeah. So it'd be 81 comparisons in this case. Okay. Yeah, that makes sense. All right. I should have thought about that a little more.

So, what does this work out to be, then? Well, this is actually the number of digits minus one, squared, because the number of comparisons per pass is always one less than the number of digits. Okay, that's the number of comparisons it takes in the worst case scenario. Okay. Now, on average, a list is already partially sorted, so maybe it takes half of that. Okay, but...

it's going to be close to, you know, that's the worst case, that's the bound. And it's usually not that much better, right? I mean, half of 81 is closer to 81 than to nine, right? It's rare that you get the best case scenario, if that makes any sense. Right. So we might write this as x minus one, squared, and the minus one we can kind of drop.

Okay, in computational theory, we're not really sweating the little details because if there's a million digits and you're trying to sort it, the amount of time the algorithm is going to take isn't going to be different

if it's a million or a million minus one, right? From a human perspective. So what they do is they do something called big O notation where they say, okay, X minus one squared, we're going to call that the same as X squared. All right. So for the purpose of trying to evaluate this algorithm and what its speed is, we're going to say it's an X squared algorithm. Okay. Does that make sense? Yeah, absolutely. Okay.
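For anyone following along in code, here's a minimal Python sketch of the bubble sort just described, instrumented to count comparisons. The exact counts depend on the variant — this early-exit version does one extra confirming pass on the reverse-sorted list — but they're on the order of (n − 1) squared, which is the whole point of the big O discussion:

```python
def bubble_sort(values):
    """Repeatedly compare adjacent pairs and swap them into order,
    stopping after a full pass with no swaps. Returns the sorted list
    and the number of comparisons performed."""
    values = list(values)                  # work on a copy
    comparisons = 0
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(values) - 1):   # n - 1 adjacent pairs per pass
            comparisons += 1
            if values[i] > values[i + 1]:
                values[i], values[i + 1] = values[i + 1], values[i]
                swapped = True
    return values, comparisons

print(bubble_sort(range(10)))         # already sorted: one pass, 9 comparisons
print(bubble_sort(range(9, -1, -1)))  # reverse sorted: on the order of 9 * 9 comparisons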

Okay. So now what can we do with this? All right. So I've kind of given you the basics for how we would come up with the quote speed of an algorithm. Okay. Now there is something a little strange about this that may not be obvious at first. We don't normally talk about, I mean, we normally talk about like if we're layman, we'd probably talk about the speed of a computer, but here we're talking about the speed of the algorithm. Okay. Now what's the relationship between those, the speed of a computer and the speed of an algorithm? Well,

Well, we talked in the last episode about the Church-Turing thesis. And what I said at the time is, okay, here's what this really means. And remember, it's not proven, but it's a scientific theory — the best tested scientific theory, I would argue. Okay. Right. And it says that

It's not possible to come up with any sort of computational machine that can perform a logic program that a plain old Turing machine can't, and that Turing machines and their equivalents are the most powerful possible type of computational machines, and there are no more powerful ones out there. And we even talked about the fact that this isn't even, this is not quite true because we've since discovered quantum computers, but we're ignoring that fact for the moment.

If these two statements are true, what else is true about this thesis? Well, one of the things that is true about this thesis then is that all computational machines are functionally equivalent. Okay. Ignoring quantum computers for the moment. Okay. Which means that if you know how many computations it takes for a Turing machine to do a computation, then all computational machines will require just as many.

Yeah. Okay. Because literally a Turing machine is a universal computer. Okay. And that's what the Church-Turing thesis is saying. So when we talk about the speed of an algorithm, this is a meaningful statement, right? Yes. You might have a computer that's faster or slower, and yes, that's going to make a difference, but the speed of the algorithm is independent of

any specific computing device. It's the number of computations you would need to complete that algorithm, given the input size — in this case, the number of digits. Okay, and if you have a really, really fast computer or a really, really slow computer, it will obviously do the computations faster or slower, but it's still going to require that many computations.

So let's make this specific. By the way, for those who can see the screen, I've got this image I grabbed from Wikipedia where it actually shows bubble sort going. I thought that was pretty cool, so I decided to include it so you can kind of conceptually see what it's doing. But if we're doing a bubble sort and the input is x — the number of digits, or the number of numbers, I guess, we're trying to sort — and

X squared is the number of computations. Then how many computations, this is easy. How many computations would it take to do 10 digits? I'm sorry, I got lost. I was watching that little animation.

So, you know, we already said the answer to this. It's 81. Okay, I thought so. Okay, but remember, we're approximating with big O notation. So if it's an X squared algorithm and it's 10 digits, you know, the input size is 10, then how many computations are we approximating it to at X squared? 10 squared. Still watching the animation. It's really awesome. Yeah.

It's obviously 100 computations, about 100 computations we would expect it to do. Okay. Now, because of that square, this is going to grow quadratically — much faster than the input. So if it's 100 digits, then we're talking about 10,000 computations. If it's 1,000 digits, then we're talking about a million computations. Okay. So...

The thing that's interesting about this is, as the input size grows, yes, a faster computer is doing these computations faster, but you can quickly outrun the speed of any computer, right? You can quickly get to the point where you have an algorithm that simply requires so many computations

that it just does not matter how fast your computer is. Right, right. And this is actually what computational theory is really about. It's the study of what are we actually able to compute? What do the laws of physics allow us to compute? For certain types of problems, they're just intractable, right? And some, as we'll see, are actually incomputable altogether. But

there's a great many algorithms out there that are just intractable: unless the size of the input to the algorithm is really, really small — in other words, you're trying to only sort a few numbers or something like that — the algorithm is just too slow. No matter how fast your computer is, it's just too slow to compute it very far out. And in fact, think about, like,

computers playing chess, okay — Deep Blue or something like that, right?

Chess relies on the fact that the game is intractable, right? In theory, we know how to solve chess. We know that you just simply look forward and make every possible move in your mind — in the computer's mind, say, in its memory. And then from every one of those moves, you try every move your opponent might try. And then you try every move after that. And you just keep going. In theory, if you had a computer that

had enough memory and was fast enough, you could actually solve chess, and then the game would be uninteresting, right? From the very first move, you would know you were going to win, or you would know that it was impossible to win.

So the fact that the game is intractable — that no computer will ever be able to actually solve it, and no human will ever be able to solve it either — makes it interesting. Okay. So there's a great deal in life that actually relies on the fact that there are many, many, many things that are intractable, that you just simply cannot compute. Okay. Now, how does Deep Blue actually play? Well, what it does is it

tries to look forward as many moves as it can. And as we add more memory and more speed to the computer, it can slowly start to look a few more moves ahead. If it could get to, say, six or seven moves that it can look ahead, it's going to beat any human, right?

But that's about the most, and it's more complicated than that. They actually use little tricks to figure out how not to look at certain paths forward and only concentrate on the ones that are the most interesting. And then they try to look ahead 10 moves or something like that. Right. But even under the best circumstances, the computer just can't look that many moves ahead. It just can't. Right. Because it's an intractable problem from a computing standpoint.
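As a rough, hedged sketch of the idea — this is the textbook depth-limited minimax scheme that engines like Deep Blue build on, not Deep Blue's actual code, and `legal_moves` and `evaluate_board` are hypothetical hooks a real engine would supply:

```python
def best_score(state, depth, maximizing, legal_moves, evaluate_board):
    """Depth-limited minimax: look ahead `depth` moves, then fall back on a
    heuristic board evaluation instead of solving to the end of the game.
    The tree has roughly b**depth nodes for branching factor b, which is
    why no engine can afford to look very far ahead."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate_board(state)   # heuristic score, not a solved game
    scores = [best_score(m, depth - 1, not maximizing, legal_moves, evaluate_board)
              for m in moves]
    return max(scores) if maximizing else min(scores)

# Toy demo: states are numbers, each "move" adds 1 or 2, higher is better.
print(best_score(0, 3, True,
                 legal_moves=lambda s: [s + 1, s + 2],
                 evaluate_board=lambda s: s))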

Now, we were looking at bubble sort. Now, bubble sort's not the fastest sort. There's actually something called quicksort that is quite a bit faster than bubble sort, and its number of computations doesn't grow as fast as bubble sort's as the input size grows. But we actually have algorithms that are a lot worse than x squared. In fact, x squared would still be considered a fairly fast algorithm, even though it grows quadratically with the input size. In fact...

Even if we had like X to the 100, which would be an awful algorithm, admittedly, that would still not be the worst case scenario. The worst case scenario is when you flip

the constant and the input size. So right now we're dealing with x to the two, where the exponent, two, is a constant, right? Imagine if that's flipped, so that the algorithm has the constant two as the base and the input size x is the power. So two to the x instead, okay? That is what we call an exponential time algorithm, okay? And there's a lot of those. And

just looking at my screen here, if you can see the screen: big O of x squared with an input size of 100 is 100 squared, or 10,000 computations. Compare that with

two to the x. So now we've flipped it: the exponent is now the variable, and the constant is now the base we're raising to that power. Two to the hundred — look at the size of that. I can't even pronounce the name of that number, it's so large, right? There's not even a word for that. Yeah. So, I mean,

We have algorithms that are so intractable that past a certain amount of input size, there is zero hope we will ever have a computer that can solve it, right? And this is what an exponential time algorithm is. And in fact, there's a great deal of, and this is actually philosophically very interesting. There's a great deal of things we want to do in life that are actually exponential time algorithms.
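To make the gap concrete in code, here's a quick sketch of the two growth rates at the same input size of 100:

```python
n = 100
print(n ** 2)   # polynomial time: 10,000 steps
print(2 ** n)   # exponential time: about 1.27e30 steps -- no computer will ever finish that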

So for instance, just to give you an example — we could do a separate podcast about this, because there's a book about this that I found really fascinating. You're a very busy person, Cameo. You have lots that you want to accomplish in your life, at work and in your career. So what are the problems you face? Yeah, amen. One of the problems that you're going to face is: what's the next thing I should work on that's going to

allow me to get the most done of the things I want to accomplish? Okay. That's an algorithm. You may not think of that as an algorithm, but it is — you're taking steps to decide, this is the next most important thing for me to work on. Okay. To compute what's really the next most important thing for you to work on, so that you can accomplish the most — that is an exponential time algorithm. So there is no hope that you will ever get it perfectly right.

Right. No, I, you know, actually we should put this on for a future episode because I also think within kind of a technology development product management kind of prioritizing features thing. It's a super interesting discussion in that realm as well.

Yes, yes. And I think a lot of this ties into software development, by the way. In fact, there's deep ties, obviously, between computational theory and epistemology and what we do in software. So just some quick terms now.

If we're talking about, like, big O of x — the number of computations grows at the same rate as the input size. Remember, big O drops things that we don't care as much about, so maybe it's really 10 times the number of digits or something like that, right? We don't care, because a factor of 10 isn't enough for us to worry about. We would call that linear time.

If it's x to some constant — x squared, x to some constant power — we call it polynomial time. If you remember back to your high school algebra, polynomials were those where you'd write x squared plus y to the third equals zero, or something like that. Right. Those were called polynomials. So that's called polynomial time. That's where the exponent is a constant. Right.

And then exponential time is where the exponent is the variable. Okay. Does that make sense? Yeah. Yeah. So exponential times are the hard ones. They're the problems that...

You know, maybe we wish we had an answer to these problems, but there's no way to compute them perfectly except on really small sizes. Okay. So they actually create classes of algorithms. And if you're watching this on YouTube, you can see the screen.

Imagine the outer circle here — imagine a Venn diagram where the outer circle represents all possible algorithms, the infinite number of them. Okay. Now there's a subset of algorithms that are decision problems. A decision problem is an algorithm that comes back with a yes or no answer to some question. Okay. So you're trying to compute

something, and it gives you the correct yes or no answer. You input something, you ask it a question, the algorithm runs, and it gives you a yes or no answer at the end. Okay. That's called class NP. All right. And,

so that would be all decision problems. Actually, it's all decision problems that can be proven in polynomial time. What I mean by that is: if you have an answer, can you prove in polynomial time that the answer is correct? Okay. If you can do that, then that decision problem is considered in class NP. Okay. Now there's a subset

of class NP — or, as we'll see, we believe there is a subset in class NP — where we'd say, okay, these are decision problems where not only can you prove that the answer is correct in polynomial time, but you can come up with the answer in polynomial time. That's called class P. Okay. Now I'm drawing this as P is a subset of NP, but there's kind of a funky thing here, which is:

How do you know that about a decision problem? If we have a specific algorithm, we can tell that algorithm is exponential, or that algorithm is polynomial, or it's linear, right? But how do you know if a decision problem is exponential, polynomial time, or linear? Now, you can't just simply say, oh, well, I know an algorithm to solve this decision problem and it's exponential, thus this decision problem is exponential. Because you might in the future

find a different algorithm that answers the same decision problem that's actually faster, right? And if this sounds a little familiar, it's because this is the same as the epistemology that we talked about in our first four episodes. If you have an algorithm, you know what its speed is,

But if you have a decision problem, you don't know for sure what the maximum speed of that decision problem is, because you don't know whether an algorithm exists that you might discover in the future that makes solving it faster. Okay. I see. Okay.
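To make the "provable in polynomial time" idea concrete, here's a minimal sketch using subset sum — a decision problem the episode doesn't mention by name, used purely for illustration. Finding an answer may take exponential time, but checking a claimed answer is one cheap pass, which is exactly the property that puts a problem in class NP:

```python
def verify_subset_sum(numbers, target, chosen_indices):
    """Check a claimed 'yes' answer to: does some subset of `numbers` sum
    to `target`? The certificate is a list of indices into `numbers`.
    Verification is a linear-time pass -- the property that puts subset
    sum in NP, even though *finding* the subset may take exponential time."""
    if len(set(chosen_indices)) != len(chosen_indices):
        return False                                   # no index may be reused
    if not all(0 <= i < len(numbers) for i in chosen_indices):
        return False                                   # indices must be in range
    return sum(numbers[i] for i in chosen_indices) == target

print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))   # 4 + 5 == 9 -> True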

So one thing that might be possible, at least logically possible, is that there is no class P that's separate from class NP. It may be that all decision problems in NP can actually be solved in polynomial time, and we just don't know it. Okay. Well, nobody really believes that.

Okay, but, like, theoretically that could be true. Right. But it's really hard to believe. I mean, there's a lot of interesting decision problems out there that people have been working on — and this is what you're doing if you're working as a researcher in computational theory: you're trying to come up with the fastest possible algorithms to solve these problems. And

some of these decision problems are just really tough to improve on, to get non-exponential time algorithms for. Right. So they actually — and I always thought this was so clever — they came up with evidence, not a proof, but evidence

that P and NP are not the same class. And I'm going to try to briefly explain that to you, because I thought this was very cool. Okay. So there's another class of algorithms that is very strangely named NP-hard. And it's defined very loosely as the problems that are at least as hard as the hardest problems in NP. And that probably sounds almost meaningless, but when I'm done here, it'll make a bit of sense.

So class NP-hard is actually not a subset of NP. The way I drew it on the screen, where it's partially inside of NP and partially outside of NP — that's correct. Right, okay. Because it's partially outside of NP, that means that there are problems in NP-hard that are not decision problems. In fact, a lot of them are search problems, where you're not trying to come back with "yes or no, is this true?" but "give me the optimal thing I need to work on next."

Consider, like, you're trying to pick what thing to work on next. If you're asking the question, "Is this the next most optimal thing for me to work on?" — that's a decision problem. If you're asking, "Out of my list of things I need to do, which is the right one for me to work on next?" — that's a search problem. Okay. So NP-hard includes search problems; it's not just decision problems. Now, the subset of NP-hard that's in NP, that would be

the hardest class of NP problems — or what they've kind of decided is the hardest class of NP problems. And they call that NP-complete. Okay. So they're part of NP-hard, but they're also part of NP. That's NP-complete. Does that make sense? Yes. Yeah, it does make sense. Okay. It hurts my brain, but it makes sense. Yeah. Yeah.

So NP-complete is a very special class of problems. And the reason why is because they are universal problems. Okay, now we talked about how like a Turing machine is a universal machine, a universal computer. Right. It's a similar sort of concept, universality here. The idea here is that...

Any algorithm that's in class NP can be reduced to an NP-complete problem. It's possible to reformulate them as any of the NP-complete problems. That may seem a little weird for me to say.

And this isn't too different than the example I gave you in the last episode where we talked about deterministic finite automaton machines being equivalent to regular expressions. And so we show we can convert one way and also back the other way. Back the other way, yes. Okay, so it is possible to show that all algorithms in NP can be converted into an NP-complete problem. You can't necessarily convert it back.

What they really did — and this was kind of clever — is they demonstrated that NP problems can be turned into, basically, logic circuit gates, right?

They can show: for every NP problem, it's possible to build a little electronic circuit that could solve it, using ANDs and ORs, using logic. And then, using a satisfiability algorithm that says, okay, what's a way to satisfy these circuits so that they're all true — that would give you the answer to the original NP problem. By doing that, they came up with the first NP-complete problem. And then from there, they can use reducibility to start to show other problems are NP-complete.
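Here's a hedged sketch of what satisfiability looks like in code — not the historical circuit construction, just the brute-force version, which shows exactly where the exponential cost lives:

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force satisfiability: try all 2**n_vars true/false assignments.
    Each clause is a list of ints: i means 'variable i is true', -i means
    'variable i is false' (variables are 1-indexed). Returns a satisfying
    assignment, or None. The 2**n loop is the exponential blowup at stake."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 OR NOT x2) AND (x2 OR x3)
print(satisfiable([[1, -2], [2, 3]], 3))   # finds e.g. (False, False, True)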

So basically, if you have an NP-complete problem — traveling salesman is an example of an NP-complete problem — if you could come up with a polynomial time algorithm for it, then every single algorithm in the class NP, all decision problems,

would then have a polynomial time algorithm. Because what you could do is you could take the NP problem, convert it into the traveling salesman problem, run your polynomial time algorithm on the traveling salesman problem, get the answer, and then convert it back into the answer for the NP problem. And so basically every NP problem from that point forward would have a polynomial time algorithm and would be far more tractable. Okay.
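For concreteness, here's a sketch of the brute-force approach to traveling salesman, with a made-up distance matrix. Its cost explodes factorially, and a polynomial-time replacement for this is the thing nobody expects to exist:

```python
from itertools import permutations

def shortest_tour(dist):
    """Brute-force traveling salesman: try every ordering of the cities.
    With n cities there are (n - 1)! candidate tours, so the running time
    explodes even faster than 2**n -- fine for 4 cities, hopeless for 40."""
    n = len(dist)
    best = None
    for rest in permutations(range(1, n)):
        tour = (0,) + rest + (0,)          # start and end at city 0
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or length < best[0]:
            best = (length, tour)
    return best

dist = [[0, 2, 9, 10],                     # made-up symmetric distances
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(shortest_tour(dist))                 # (18, (0, 1, 3, 2, 0)) for this toy matrix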

Because it's really hard to believe that that is possible, this is taken as evidence that there are certain problems in the class NP that will simply always be intractable, right? There is no way to actually make their algorithm polynomial time, okay? Did you follow all that? This is a little bit technical, and this is probably one of the hardest things we try to do in the podcast.

A little past just a little bit technical. It's pretty technical.

So, the summary of this is: if you ever find a polynomial time algorithm for any single NP-complete problem, you now have a polynomial time algorithm for absolutely every decision problem in class NP, right? Because of that, we're not expecting that we will ever find such a polynomial time algorithm for any of the NP-complete problems.

We just assume right now that we'll never be able to find one because... Yes. So our best theory is that P and NP are different classes and that there are some problems that are simply going to be intractable forever. This is where things tie in with artificial intelligence. Okay. Artificial intelligence is maybe more than one thing, but one of the main things it is the study of is how do we actually deal with

a problem that we want to solve that is NP-complete or NP-hard — where there is no way to solve it tractably. Okay.

Okay, so in artificial intelligence, they define rationality — and this is at odds with the epistemology we learned in the first four episodes — the same way economics does. They define rationality as: you're trying to accomplish something, you're trying to make a decision, and you want to make the optimal decision, to get the optimal outcome.

So rationality is trying to optimize your benefit in economics, right? Right, wrong, or indifferent, that's the way economics defines it. So artificial intelligence is how close can we get to the rational outcome, the optimal outcome, given that this problem is intractable?

And then there's a whole bunch of different techniques that you can use to try to deal with an intractable problem. And that takes us back to Deep Blue. Deep Blue tries to solve out a number of moves, and then it had a really clever board evaluation algorithm. So instead of having to solve to the end of the game to see who wins, it had a really strong ability — handcrafted by the programmers, in this case —

to look forward a number of moves and then say: this move will lead to a situation where I can get the board into this state, which will be an optimal state for me. Right.

And that is how it solved that problem. That's how it got to the point where it could play chess better than any human — simply by trying to come up with as close as possible to an optimal solution to a problem that's known to be intractable. Right.

So this is actually why I find artificial intelligence very fascinating. Machine learning is a little bit different. It's part of artificial intelligence also. It's kind of more the study of how do we come up with an algorithm that we don't know how to program. But you can see that there's kind of still a relationship there, right? We're trying to have the computer come up with solutions for things that we don't know how to do, that we can't accomplish ideally. So we want to get as close as possible.

One last concept on computational theory for today is the concept of undecidability.

So, undecidability. There are some problems the laws of physics do not allow us to compute an answer to at all. Intractable means we know how to get the right answer, but no computer can be built that is fast enough to get us the answer we need. Okay. Undecidable means there is no way to compute it at all. Okay. I remember I had a friend at work who

asked, what would be undecidable? What would be incomputable? He couldn't conceive of what that would be. There actually are a lot of things in life that are incomputable. Now, the first example of incomputability that we ever had came from Alan Turing, and it

it was what they call the halting problem. And so basically imagine you have an algorithm and you want to compute an answer and you start to run the algorithm and it doesn't come to an end, right? It's just, it keeps churning, it keeps churning, it keeps churning. You may not know if the algorithm is going to run forever or if it's just going to take a billion years to run, right?

It would be really nice if we had some way to take that algorithm, input it into another algorithm, and then say, okay, is this algorithm eventually going to complete and come to an end and give me an answer, and it's just taking forever? Right. Or is this algorithm actually just in an infinite loop, right?

So Turing proved that there is no general-purpose algorithm that can solve the halting problem, and that thus it's what they call undecidable. Okay. And

it's interesting, though: it is possible to determine the answer to that question in some individual cases. So the word "general" is important here. There's no general algorithm that can solve this problem, but using creativity, you may be able to come up with some way to solve a particular case. So, like, let's say that the algorithm is just an infinite loop, right? It just says, you know, line 10: go to line 10, or something like that. Well,

you could come up with an algorithm that knew how to solve that halting problem, right? A specific case. All right. So really, there are individual cases that may be solvable, but there's no general algorithm that can solve every single case. And Turing was able to prove this.
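Here's a toy version of that "line 10, go to line 10" idea in Python — a checker that decides halting for one specific pattern, which is exactly as far as any such checker can go:

```python
def halts_self_loop(program):
    """program: dict mapping line numbers to statements in a toy BASIC.
    Detects the one specific pattern discussed above -- a line that jumps
    to itself, which provably never halts. Turing's proof says no checker
    can be extended to decide *every* program, so unknown cases return None."""
    for line_number, statement in program.items():
        if statement.strip().upper() == f"GOTO {line_number}":
            return False                 # loops on itself forever
    return None                          # can't tell in general: undecidable

print(halts_self_loop({10: "GOTO 10"}))                   # False: never halts
print(halts_self_loop({10: "PRINT X", 20: "GOTO 10"}))    # None: beyond this checker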

And then from there, what they do is they use reducibility. They'll take a problem and say, okay, I have this problem. So here's an example: imagine that you had a program on a Turing machine and you wanted to know if it would also run on a finite automaton machine.

Okay. That would actually be useful. Is it possible to move this program to a finite automaton machine, or is it using things that make it impossible to run on a finite automaton machine? What you would do is you would start with the assumption

that you had an algorithm that could solve that problem. And then you demonstrate that if you had that algorithm, you could solve the halting problem. Well, we know you can't solve the halting problem; thus, by reductio ad absurdum, you've proven that the new problem is also undecidable. Okay. So another thing they do in computational theory, besides studying intractability, is undecidability: is this a problem that it is just impossible to actually compute an answer to? Right.

Now, yeah, no, no, let me blow your mind here for a moment because I've been ignoring quantum computers. All right.

I thought we had decided at the beginning of the episode we were going to continue to do that for a while. So quantum computers aren't necessarily hugely different than Turing machines. So David Deutsch's work that made him famous was demonstrating that a quantum computer is Turing complete, that it can run any program that a Turing machine can run.

And I told you also that a quantum computer can mimic analog, whereas

a Turing machine always has to do digital — though it can do it to any level of approximation that we desire, right? So that is another difference between the two. There's one other really big difference between the two. For quantum computers, they've come up with an algorithm called Shor's algorithm. And I think I've mentioned this to you before — this is, like, mind-blowing. They have an algorithm that can solve

the problem of trying to find the prime factors of a very large number. So

the way security and cryptography work is they'll take really large prime numbers and multiply them together. And then they'll use that as the key, right? That product is the public key they then use to encrypt things with. If you could take that number and reduce it back to its

original prime numbers, you would then be able to crack the code of the cryptography and read the secret message. Right. But that problem is known to be in class NP right now. And so far, no one knows how to make a tractable classical algorithm to solve it. Okay. It's in NP, but it's not NP-complete.
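A toy sketch of the scheme just described, with laughably small primes — real keys use primes hundreds of digits long. Classically, recovering the primes means something like the trial division below, which stops being feasible as the number grows; Shor's algorithm on a quantum computer would do it in polynomial time:

```python
p, q = 10007, 10009          # the secret primes (tiny, for illustration only)
public_key = p * q           # the product is published; the primes are not

def recover_primes(n):
    """Classical brute-force factoring by trial division. Fast for toy
    numbers, hopeless for the sizes real cryptography uses."""
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

print(public_key)                    # 100160063
print(recover_primes(public_key))    # (10007, 10009) -- easy only because n is tiny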

And a quantum computer using Shor's algorithm can solve that problem, even though classically it requires an exponential number of computations.

So a quantum computer — it's too bad it wasn't an NP-complete problem, because then it would take everything in class NP and make it equivalent to class P. But unfortunately, it's just a very specific problem, factoring into primes. But it can solve that exponential time problem in polynomial time. And Deutsch argues

that the reason why it can do it is because it's solving the problem across the many worlds. There's an infinite number of them. Therefore, there's enough

ability to do that computation quickly, in polynomial time. The thing that's interesting about Shor's algorithm is that right now we can't build a quantum computer big enough to run it usefully. We don't know how to engineer one yet, but they're getting better and better at it; we're eventually going to be able to do this. They actually can build quantum computers — they're just too small, right? They don't have enough qubits at this point to be useful. But once they can get to a large enough number of qubits, everything that's using current RSA encryption is cracked. And

in a sense, that means it's already cracked. So if you're the Russians, you're already collecting all of the U.S. secret messages, storing them on a computer somewhere, and waiting for the quantum computer to be built.

And then, as soon as it is, you run Shor's algorithm and you read it all. So in a very real sense, all of our current encryption is already cracked, and it's just a matter of time before it can all be read. Because a quantum computer somehow is able, in this one specific case, to solve an exponential time problem in polynomial time. Okay, I think it hurts my brain again. Yeah.

The other thing that they can do with a quantum computer is a quadratic speedup — and quadratic is not exponential. I mean, it's just too bad it can't do an exponential speedup, because that really would be something amazing. But they can do a quadratic speedup to basically any exponential time algorithm, which means we'll be able to take all exponential time algorithms and run them a bit further than we've been able to before. So to use the chess analogy, you know, maybe we'll be able to look ahead eight

moves now instead, or something like that. Right. So it's going to be very generally useful, but it won't actually resolve the intractability problem for most problems. What are we talking about next time? Next time we will talk about the Turing principle. I kind of mentioned this briefly, but I want to go into a little more detail about

what it means and what its philosophical ramifications are. And I have a friend on Twitter who's been actually discussing this with me, and I promised him that I would give him a better answer to this. He is someone who's doing some research into artificial general intelligence, and he believes that the brain is doing something that's incomputable to be able to do artificial general intelligence.

And one of these times you're going to do an episode just on that and talk about Roger Penrose, who also believes that. And so I said to him, you know, of course, I don't know for sure if you're right or you're wrong, but there are ramifications to you assuming that, that I don't think you're accepting yet. And he goes, really, what are those? And he was very interested. I mean, most people get very defensive about their ideas. This guy's awesome. He wants criticism.

Right. And unfortunately, it was not something I could explain via Twitter. So I promised him I would do either, you know, a blog post or a podcast about it in the future. And we would talk about

the Turing principle — why it's our best theory, and the reason why I believe that artificial general intelligence can be done on a computer, a Turing machine, today, and that it isn't going to require some new incomputable laws of physics, if that makes any sense. Okay, well, that should be a fascinating conversation. I'll come prepared to argue with you. All right. I'm teasing. I don't know if I will or not. Okay.

All right. Thank you, Cameo.