
Episode 12: Artificial Intelligence vs Artificial General Intelligence

2021/1/3

The Theory of Anything

People
Bruce
Cameo
Dennis Hackethal
Ella Heppner
Thatchaphol Saranurak
Topics
Ella Heppner: Artificial general intelligence (AGI) can create any knowledge a human could create in their mind, whereas artificial intelligence (AI), or narrow AI, just has some human-created knowledge embedded into it and uses computational resources to find patterns quickly. AGI could do any mental task a human can do, such as holding a conversation, making art, or making scientific discoveries. Cameo: Narrow AI is limited; its applications are very narrow, and it is not universal the way AGI is. Thatchaphol Saranurak: If a program does not have the potential to pursue any goal you want, it is not AGI. Bruce: Artificial intelligence is an umbrella term covering many approaches; machine learning is a branch of AI in which algorithms try to find functions humans cannot write themselves, and it relies on inductive biases. All machine learning algorithms have an inductive bias; without one, no learning is possible. Induction is a bad epistemology, yet machine learning algorithms force-fit themselves into induction. A true AGI should be able to choose its own theories and learn from multiple theories, not just be given a single theory. Dennis Hackethal: Natural selection is the non-random differential reproduction of replicators, and not all machine learning algorithms involve natural selection.


Chapters
The discussion explores the fundamental differences between AI and AGI, highlighting how AI is specialized and limited in scope, while AGI is capable of performing any intellectual task a human can do.

Transcript


All right, welcome to the Theory of Anything podcast. Today we're going to be talking about artificial general intelligence, and we have a number of guests here today, so let's go around and have everybody introduce themselves. So, Cameo, why don't you start? Of course, you're a regular on the show. Yeah, I'm a regular on the show, and I'm Cameo. I've known Bruce for years.

I manage a software company in Salt Lake City, and I know very little about actual artificial intelligence, even though it's a word that gets thrown around a lot in technology right now. All right. Well, this will be a great place to kind of at least understand the difference between artificial intelligence and artificial general intelligence. Ella, why don't you give us an introduction?

Hi, so I'm Ella Heppner. I am a software engineer. I have just a bachelor's in computer science from Virginia Tech. I've been interested in AI and AGI for as long as I can remember. And it was four or five years ago that I read David Deutsch's The Beginning of Infinity. And since then, I've been sort of obsessed with AGI.

I've been working on coming up with some sort of critical rationalist theory of AGI. And so that's sort of how my interest in the subject started, and that's been one of the main topics on my mind for the last few years.

Thanks. And Ella, what's the name of your theory? My theory, I'm currently calling it CTP theory, which is an acronym, which doesn't really stand for anything anymore, but that's the name that I'm sticking with because I have nothing better. All right. So yeah, so Ella has her own AGI theory that she has been actively working on trying to solve problems for and improve. So that was one of the reasons why I invited her to the show for this topic. All right, Dennis, why don't you introduce yourself?

Yeah, hi. So my name is Dennis. I'm a software engineer, and I've been fascinated with epistemology and AGI for years. And it started when, in 2015, I read The Beginning of Infinity by David Deutsch after he was on Sam Harris' podcast. And I've been fascinated with the topic ever since. All right. Thank you. And Dennis has a book called A Window on Intelligence, and that was why I knew that he was obsessed with AGI, which is why I invited him to the show. Yeah. Yeah.

Thatchaphol, why don't you introduce yourself? Sure. So I'm Thatchaphol. I am now a research assistant professor at TTIC. But I'm going to move to the University of Michigan in Ann Arbor soon as an assistant professor there.

I do research on algorithmic design, like designing fast algorithms for big data, something like that. But...

But I got interested in this episte... how do you call it? This Popperian... Epistemology, right. Epistemology, yes. Because of this book by David Deutsch as well. But after that, this interest came back to me again because of Dennis, actually. I listened to his podcast and then became...

Very interested. Right on. That's great. And I kind of joined this group of you guys and then tried to learn more about it. Well, it's probably Bruce's group.

Yeah, the Four Strands. So yeah, probably not a lot of people who listen to the show even know about the Four Strands group, but that's an email group and now also a blog where a bunch of people who are interested in these sorts of subjects just kind of converse. So it's a secret society.

Okay, so Cameo, at the beginning of this, before we started the show, she wanted us to make our first topic the difference between artificial intelligence and artificial general intelligence. So that's actually a really important distinction that needs to be made. Does anybody want to take that topic and explain what they understand the difference to be? By the way, I've got my own opinion on the subject, so I'll share my opinion also. Anyone want to grab that subject first?

Can I start with a joke? Sure, go for it, Cameo. Maybe not quite a joke. What's the difference between artificial intelligence and machine learning?

The difference between machine learning and artificial intelligence is artificial intelligence is a bullet on your PowerPoint presentation for your pitch deck, and machine learning is something that actually exists. Yeah. All right. That was my joke. So does somebody want to take this topic first?

I can go ahead and give my thoughts on it. So there are a few different ways that you can look at this, obviously, but the way that makes most sense to me is that an artificial general intelligence is a program that is capable of creating any kind of knowledge that a human being would be capable of creating in their mind.

And whereas artificial intelligence or narrow AI as opposed to general AI is just a program that has some knowledge created by humans sort of embedded into it. So, you know, machine learning algorithms like deep learning would be narrow AI, of course.

And those algorithms have sort of knowledge that humans have created about what kind of patterns exist in data. And the AI's job is essentially to, you know, use computational resources in addition to the human knowledge programmed into it to find patterns in a really fast way. Whereas an AGI, an artificial general intelligence,

would be able to do everything that a human would be able to do. So it wouldn't just analyze data sets and find some pattern for you. It would be able to hold a conversation and, you know, make art, make scientific discoveries. It would be a person in, you know, the true sense of the word. It would be capable of doing any mental task that any human could do. Yes. I think that's excellent, Ella. That's a good description. Anybody else want to add anything?

Okay, just my two cents on the topic. AI or narrow AI has a tell too, and that is that there are so many different versions of it and different applications of it. Like you have speech recognition AI, you have chess playing AI, you have self-driving cars that are powered by AI, and all those things couldn't do what the other one does. And that gives it away that this is not a truly universal system because an AGI could do all of those things and then some.

So that's not to say that it wouldn't want to do any of those things. That would be up to the AGI, whereas a self-driving car can just be coerced, if you will, to drive a car. So that's a separate issue. But the fact that narrow AI has very narrow applications and isn't universal, that gives it away. That's a tell that we're not talking about AGI. All right. Thank you.

Actually, I have a rule of thumb to separate them. And this is quite rough, but I would like to know the opinion of you guys as well. So the rule of thumb is: if the program does not have the potential to pursue any goal that you want, then it's not AGI. Oh, interesting. Yeah, I agree with that.

Yes, I think that'd be true. Any goal would be, you know, sort of part of a process of, you know, human cognition. And I think an AGI will be able to do anything that, you know, human cognition could do. So I'd agree that they could have any goal that a human could have. I would agree, but I would just point out that the focus would be on the AGI's goals, not the human's goals or whoever created the AGI. You wouldn't want it to...

You wouldn't force it to go down a certain path just because that's what you wanted to do. Right, right, right. Yeah. So let me give my take on this. I'm going to be...

I'm going to try to actually use machine learning theory itself to explain what I see as the difference between AGI and artificial intelligence. First of all, artificial intelligence is an umbrella term. It includes things like minimax algorithms, search algorithms, things that in no sense at all are doing anything that could be called learning.

Machine learning is a specific branch of artificial intelligence where the machine actually seems to be learning something. Specifically, it's trying to formulate a function that does something. And often it's a function that humans don't know how to write on their own. So it's trying to approximate a function that humans don't know how to write. The obvious example here would be like

a face recognition algorithm. If I were to ask even a human expert, write a program that would recognize my face, they would have a very hard time doing that because they don't have a good theory for how to write an algorithm to do that. But they do know how to write a machine learning algorithm that can do that, where it takes examples of my face and examples of not my face and then learns to differentiate between them.

So that's the difference between artificial intelligence and machine learning. Now, machine learning then, that's kind of the bigger name in artificial intelligence. It has its own theory that's been developed. And Tom Mitchell worked a lot of this out. I had to read his textbook in one of my classes for my master's degree; my specialization is in machine learning.

And one of the things that he points out is that all machine learning algorithms have something called an inductive bias. And that word induction is kind of important here. And in fact, he even produces a proof that if you don't have an inductive bias, there is no learning at all. It's impossible for the algorithm to ever take examples and learn from them.

An example of an inductive bias would be linear regression, where you have a bunch of data and you try to fit a line to it. So maybe on one axis you have how many square feet the house you're trying to sell has, and on the other you have something about the location, or something like that. You would use those features, or observations as they would even call them, to come up with a line, and then you would use that line to make predictions.
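To make the line-fitting example concrete, here is a minimal sketch of ordinary least squares in plain Python. The data and function names are illustrative, not from the episode; the "inductive bias" is the baked-in assumption that a straight line describes the data.

```python
# Minimal least-squares line fit: the inductive bias here is the assumption
# that the relationship in the data is well described by a straight line.
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared prediction error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data: square feet (in thousands) vs. sale price (in $100k).
sqft = [1.0, 1.5, 2.0, 3.0]
price = [2.1, 2.9, 4.2, 5.8]
slope, intercept = fit_line(sqft, price)
predicted = slope * 2.5 + intercept  # prediction for a 2,500 sq ft house
```

Whatever hypothesis this code produces is always a line; no amount of data can make it learn anything outside that shape, which is exactly the limitation Bruce is pointing at.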

Well, what you're really doing, what an inductive bias really is, is it's a hypothesis or it's a theory or explanation about what the data is going to look like, the shape of the data. And all machine learning is based on inductive biases, basically. And so...

What they're trying to do is they're trying to force-fit machine learning, and they even call it induction, right? But Cameo and I have talked about how induction is a bad epistemology. It's a bad theory of knowledge.

They're force fitting machine learning into induction when really the theory that they start with is whatever the inductive bias is. That's the whole basis. Well, obviously, if you've only got a single theory you're working with, you're not a general learner, right? You're only going to be learning things very specific to whatever your inductive bias is.

This, I think, fits into Thatchaphol's point of view about goals: a real AGI would, at a minimum (there's probably more to it than this), have to be able to select its own theories and be able to learn from multiple theories, not just be given a single theory by a human and then learn from that. And even worse, the given theory is almost always about the shape of the data, which is obviously going to be very limited in what you can learn using such a theory.

So I actually agree with this point. And even in theoretical research, the mainstream framework that people study for machine learning is called PAC learning, and PAC learning is exactly this. It's a theory that

studies the following: you start with a set of hypotheses, which means some theory is fixed already, and then, given a set of data, you ask how to choose among this set of hypotheses. You choose the one of the hypotheses that kind of explains the data the most. But

the thing that is very important is that this set of hypotheses is given to you. There is no theory that explains how to come up with this set of hypotheses in the first place, which is something that is missing

in the theoretical research, I think. Oh yeah, excellent. So just to clarify a few terms there, he's talking about a hypothesis space. So each machine learning algorithm, beyond having an inductive bias, also has a hypothesis space, a space of possible hypotheses that it can learn from. Some machine learning algorithms actually have infinite hypothesis spaces. So like, yeah,

Genetic programming, in theory, has an infinite, open hypothesis space. But most machine learning algorithms have a very limited hypothesis space. They can only learn hypotheses within that space. And then what they do is they look at the observations, the features, the data, and they eliminate hypotheses from the hypothesis space. And what you're left with is called the version space. And so they're trying to narrow in, with the version space, on what the best hypothesis within that space is to explain the data. That's the lingo of machine learning.
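As a sketch of the version-space idea: below is a made-up, deliberately tiny hypothesis space of threshold rules. Eliminating the hypotheses that contradict the labeled data leaves the version space. The rule format and the data are illustrative assumptions, not any particular algorithm from the discussion.

```python
# A tiny hypothesis space: threshold rules "positive if x >= t" for t in 0..10.
# Eliminating hypotheses inconsistent with the data leaves the version space.
def consistent(t, examples):
    """True if the rule 'x >= t' classifies every labeled example correctly."""
    return all((x >= t) == label for x, label in examples)

examples = [(2, False), (7, True), (9, True)]  # (feature, label) pairs
version_space = [t for t in range(11) if consistent(t, examples)]
# Surviving thresholds: [3, 4, 5, 6, 7]
```

Every rule the learner could ever output was written down by a human before any data arrived; the data only prunes the list, which is the injection of human creativity Bruce describes next.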

And I think Thatchaphol's completely right here. In most cases, not only do you have an inductive bias, but you're upfront being given the set of hypotheses you're allowed to choose from, which obviously is an injection of human creativity. And

the machine's not really doing that much, right? It's the human doing all the heavy lifting. So I think that's true of all machine learning today, right? I mean, we're getting better at letting the machine do more and more heavy lifting, but all of it is based on this really narrow concept of an inductive bias and a hypothesis space and narrowing it to a specific version space.

So it was interesting, Bruce, you mentioned that genetic programming is one of the few sort of subfields of machine learning that has an infinite hypothesis space. And I've thought about genetic programming a lot before, and I just sort of want to get other people's thoughts on this, which is that it seems to me like genetic programming is probably the closest

field in sort of contemporary mainstream machine learning, the closest field to actual AGI. I don't think that, you know, it's, I would be surprised if it made its way to AGI. I still don't think the chances are very good. I think there's still a lot of sort of inductivism in that subfield, but the basic method there is sort of hypothetical deductive rather than inductive.

It's about guessing and then narrowing down the results, blind variation and selective retention, rather than sort of trying to build in some inductive method. So I'm just curious if other people here have the same intuition that genetic programming is probably closer to what we need to be doing than other machine learning fields.

Yeah, I have the same intuition or actually I would say it's more than an intuition, but I share what you're thinking is here because what I find promising about genetic programming is that it's actually about the evolution of computer programs rather than the evolution of just parameters, say, like in genetic algorithms. Right.

And that's exactly why the hypothesis space is infinite: because it's Turing complete, it can represent any computable function. Right. And I think that is one of the necessary conditions for something to be AGI: it would in principle need to be able to evolve any program. Right. Agreed. So I actually, I agree with you.
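A minimal sketch of what Ella and Dennis are describing: evolving expression trees (programs, not just parameters) by blind mutation, keeping a variant only if it scores at least as well. Everything here, the operator set, the target function, and the mutation scheme, is a toy assumption, not a real genetic programming system.

```python
import random

random.seed(0)
OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def evaluate(expr, x):
    """An expression is 'x', an integer constant, or (op, left, right)."""
    if expr == 'x':
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def random_expr(depth=3):
    """Blindly generate a random expression tree."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', 0, 1, 2, 3])
    return (random.choice(list(OPS)),
            random_expr(depth - 1), random_expr(depth - 1))

def mutate(expr):
    """Blind variation: replace a random subtree with a fresh random one."""
    if not isinstance(expr, tuple) or random.random() < 0.3:
        return random_expr(depth=2)
    op, left, right = expr
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def error(expr, cases):
    return sum((evaluate(expr, x) - y) ** 2 for x, y in cases)

# Target behavior given only as input/output pairs: f(x) = x*x + 1.
cases = [(x, x * x + 1) for x in range(-3, 4)]
best = random_expr()
initial_error = error(best, cases)
for _ in range(300):                    # selective retention: keep a variant
    child = mutate(best)                # only if it scores at least as well
    if error(child, cases) <= error(best, cases):
        best = child
final_error = error(best, cases)
```

Because the representation is an arbitrary expression tree, nothing in principle limits what this loop could evolve; in practice, as the conversation notes, such searches tend to explore only a tiny corner of that infinite space.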

So let me just say that in reality, all machine learning programs are actually various forms of natural selection. Okay, so they can all be cast into that light: once you get them out of the wording that makes them sound like inductivism, it's possible to take all of them and make them look instead like natural selection. So I suspect that's why all machine learning algorithms that work, work: because they're all doing natural selection of some sort. But most of them are only doing so in some sort of very limited hypothesis space. Genetic programming is special in that it's unlimited.

However, I think we all know that in practice, it's actually quite limited today. There's something missing with the way we do genetic programming that makes it so that what it's able to actually search over is just too small a space. Like in theory, it could find how to create a word processor, but in practice, you would never expect that to happen. I would be careful.

Talking about natural selection in conjunction with machine learning generally. I don't think what you said is true. I think natural selection is the non-random differential reproduction of replicators and it's really an evolutionary phenomenon. So I don't see how unless you have a system that has replicators and unless you have

like, unless you're talking about an evolutionary algorithm or an evolutionary program specifically, I don't see how, for example, a facial recognition algorithm that is built on the gradient descent mechanism, how that has anything to do with natural selection. Yeah, I thought you might say that, Dennis. In fact, certainly it's not neo-Darwinism, but

Even a gradient descent algorithm is trying out lots of variants of things and then seeing which one actually works the best according to some sort of selection criteria.

And so that's what I really mean. It's certainly an incredibly narrow view of natural selection. I mean, I see in the widest possible sense, I see what you mean there if we're talking about very loose definitions. But the problem is with the gradient descent, there's no replication going on. There's no replicator. There is no replication going on. You're correct.

So I find that problematic because, and no bad intention on your part, but it might be misleading to some to say that natural selection is happening in a facial recognition algorithm. Yeah. So just sort of to clarify the terminology here, I think that, you know,

Dennis is correct, you know, it isn't strictly speaking natural selection. But Bruce, you're right that there is a core of variation and selection that's going on there. And the way that I think about this is that the broadest sort of process which can create knowledge is, and I use this phrase a lot, blind variation and selective retention. And this is a phrase that I'm borrowing from Donald Campbell, who was a contemporary of Popper's who wrote a lot about

evolutionary epistemology. And blind variation and selective retention basically just means, you know, you have some set of objects, you make variants of the objects, and then you select, you know, a subset of the objects in some way. And so neo-Darwinian evolution is one instance of that process, but it's not the only conceivable instance of that process. And so I think what you're trying to say in this terminology, Bruce, is that some, you know,

Some or most machine learning algorithms involve some process of blind variation and selective retention, and natural selection is also an instance of that process. But, you know, natural selection is a different kind of instance. Oh, no, I see. I see the distinction you're making. That makes sense. Yes, that is what I really mean, Ella.
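Campbell's phrase can be written down as a generic loop; what fills the vary and score slots determines whether you get something evolution-like, something gradient-descent-like, or something else entirely. The concrete instance below (nudging numbers toward a target) is purely illustrative.

```python
import random

random.seed(1)

def bvsr(population, vary, score, rounds):
    """Blind variation and selective retention in Campbell's generic sense:
    produce variants blindly, then retain the best-scoring candidates
    (originals included, so the best score can never get worse)."""
    for _ in range(rounds):
        variants = population + [vary(p) for p in population]  # blind variation
        variants.sort(key=score)
        population = variants[:len(population)]                # selective retention
    return population

# Illustrative instance: evolve numbers toward 42 by blind random nudges.
start = [random.uniform(-100, 100) for _ in range(8)]
initial_best = min(abs(x - 42) for x in start)
final = bvsr(start, vary=lambda x: x + random.uniform(-5, 5),
             score=lambda x: abs(x - 42), rounds=200)
final_best = abs(final[0] - 42)
```

Note that nothing here replicates in Dennis's sense; there are no replicators, only variants and a retention rule, which is why the generic loop is broader than natural selection.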

So just to kind of sum up: machine learning does involve blind variation and selective retention, but it doesn't really involve natural selection, is sort of the way that I would put it. Unless you understood the term natural selection to be equivalent to blind variation. Yeah, I agree. Yeah, it's not. It's not doing something like genetic programming, which is probably closer, although,

As we can probably talk about, genetic programming probably is missing something important also compared to what Darwinism is doing. Just to play devil's advocate, I'm not sure I see how machine learning generally implements blind variation even. Can you give an example of that?

Yeah, so gradient descent, what it's actually doing is there's a number of next steps it could be taking. All of those are slight variants of wherever you're currently at in the algorithm. And then it has some way of measuring which of those variants to select between. Yeah, okay. Again, I would think that's stretching the...

what a variation is. I would consider a variation just a copying error. For gradient descent, it seems to me, yes, there's a pool of points it could go to next, but there is a criterion of truth, or an optimization criterion, that it's following mechanistically. So I wouldn't consider that a variation. But again, in the widest sense, and I think Ella summed it up very nicely, you could consider it variation and selection. Yeah.

Yeah, so in blind variation selective retention, the method by which you're selecting things matters a lot. And so in the case of gradient descent, the, you know, criterion by which you select, you know, you selectively retain the variations is very, very simple. It's just, you know, pick whichever has the, you know, lowest energy or whatever, you know, the highest utility. And so it...

It is blind variation and selective retention. It's just that the selective retention part is very simple. And so it doesn't produce anything like, you know, what neo-Darwinism is capable of producing because the selection criterion there is much more nuanced. Before we lose Cameo and some of the audience entirely. Yeah, it's gotten pretty technical. I should probably explain just quickly what we're talking about. So, yeah.

Basically, if you were to look at the machine learning technique called artificial neural networks, right? And you probably hear about this all the time as deep learning or whatever the current buzzword is. Well, and I've gone through your individualized course on machine learning. Yeah.

So the idea is that there's a technique that's very, very popular in machine learning called gradient descent, where basically you try to imagine the hypothesis space as a landscape, with peaks and valleys all over the place. And then you're trying to figure out how to get to the bottom of a valley or to the top of a peak.

And then you're hoping that whichever peak or valley, you know, depending on whether you're trying to minimize or maximize, whichever one you get to, what it comes up with is a good enough predictor that you can use it in real life. And that's actually how machine learning works for, you know, artificial neural networks, and how other types of machine learning also work.
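Bruce's peaks-and-valleys picture can be caricatured in a few lines: at each step, evaluate slight variants of the current point and keep whichever scores best. This is a toy one-dimensional search for illustration, not how real libraries implement gradient descent (they compute actual gradients rather than trying neighbors).

```python
def descend(f, x, step=0.1, iterations=100):
    """Hill-climbing caricature of gradient descent: at each step, evaluate
    small moves left and right and keep whichever variant lowers f
    (staying put if neither does)."""
    for _ in range(iterations):
        candidates = [x, x - step, x + step]  # slight variants of current point
        x = min(candidates, key=f)            # the selection criterion
    return x

minimum = descend(lambda x: (x - 3.0) ** 2, x=0.0)  # true minimum at x = 3
```

The selection criterion here is as simple as it gets, just "lower is better," which is Ella's point about why this process produces nothing like what neo-Darwinian evolution can.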

And it's not very much like neo-Darwinian evolution, which is, you know, how knowledge got created in biology, right? And that would explain why using gradient descent to go do things is just a very poor way of going about it compared to whatever it is that biology is actually doing, which was able to create all these different species that are adapted to many different environments and things like that.

This is probably a good place to jump into the next question since we've been dwelling on this one about what is the jump to universality? I just gave an example of that. This is one that we don't understand really well, but

evolution using DNA and you know, whatever processes is that we currently have, it can create all sorts of adapted species and it can create lions and fish and jellyfish and bacteria and it seems to be really highly open-ended in terms of what it can create. Whereas machine learning seems to be extremely narrow in what it can do, right? In what it's able to do.

So one of the hypotheses David Deutsch comes up with is that biological evolution has made some sort of jump to universality, although he doesn't define what that jump is because he's not sure. So let's talk specifically about universality and the jump to universality. What is universality? Why is it relevant to AGI? And what is the jump to universality? Does anyone want to take that question?

I suppose I can go ahead and give my thoughts if nobody else wants to go first. So basically, universality is when, well, say you're working in some domain and there are certain things within that domain

which only certain types of processes can get at. A jump to universality is when one process goes from being able to handle only some small subset of the objects to being able to handle all of the objects in the set. An infinite set, generally, is the context in which the word is used.

And the reason that this matters, and why this is something worth studying and not just some arbitrary, you know, hypothetical, is that you can have a system which either has finite reach, sort of a, you know, limited set of things that it can do, versus an infinite set of things that it can do. The change to being able to do infinitely many things must happen in a single leap. You know, you must at some point just do one thing that causes infinite progress.

And that is sort of the jump to universality. It, you know, moves the system from being non-universal, only having a few things that it can explain or be used for, to being universal in the sense that it can encapsulate everything. And so jumps to universality are the point at which that transition is made, broadly speaking.

Thank you. Anyone else want to add to that? Yeah, a good example of that is the printing press, Gutenberg's invention. Excellent. Yeah. So basically, the way that people printed books before was they either copied them by hand, which was incredibly laborious and slow and costly, or they devised these printing plates.

So for each page in the book, you would create, I forget if it was wooden or if it was metal, but you would create a mirror image of the page and then you would ink it and you would press it onto a new page. And the problem with that approach is it only works for the books for which you have created printing plates. It's very rigid and it's not customizable.

So when Gutenberg invented the printing press... I don't actually know if he did, I mean, sorry, he didn't invent the printing press as such; he invented a particular printing press that was powered by movable type. The focus should be on movable type, not the printing press.

So movable type is customizable because now you've reduced the process of printing to the smallest unit that is universal within that system, which is the letter. Because books are all made of letters and every word is made of letters. So if you just rearrange letters, now you can print any book. So to illustrate what Ella was saying with this example, basically before movable type, you had only...

very narrow applications of printing plates for specific books and only the books that you had already made printing plates for. If you had made printing plates for the Bible, you couldn't then go and print, I don't know, The Beginning of Infinity. You would have to create the printing plates for the new book first.

But with movable type, you can simply rearrange letters. And not only can you now print The Beginning of Infinity, but you could print any book whatsoever. The movable type system is already able to print books that haven't been written yet. So that's...

I don't know if Gutenberg was actually after that kind of universality. David Deutsch talks about how many of the inventions that ended up being universal weren't actually created for the sake of being universal. It was kind of accidental. That might have been the case with the printing press as well, or the movable type printing press as well. It may have just been the case that Gutenberg thought that this was cheaper, which it was, and faster, which it was too. But

Whether he was after that or not, he happened to make something universal, and it was universal in the domain of printing any printable book. You could narrow it down a little bit. You could say, well, only books that contain words and letters. You can't use movable letters to print images, say. But within that domain of printing books that were based on letters, he could now print any book. All right. Thank you. And great example. I think that's an example that everyone can relate to.

Now, not all of you have been part of this podcast. Cameo and I have been doing it for a while. The episode that would come out before this one was actually a discussion about computational universality. And I was explaining computational theory. So the jump to universality to relate to that

was that you had certain types of computers that were limited in what types of algorithms they could create. And then suddenly you have the Turing machine, which is this jump to universality, where suddenly every possible algorithm, at least that is allowed by classical laws of physics, is now possible on that Turing machine. And there are no other machines out there that can run algorithms that the Turing machine can't.
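The point about the Turing machine can be shown in miniature: one fixed simulator program that runs any machine supplied to it as data. The rule format and the example machine below are made up for illustration; a real universal Turing machine would itself be described in the same rule format.

```python
def run_turing_machine(rules, tape, state='start', steps=1000):
    """A single fixed simulator that runs *any* machine given as data, which
    is the essence of universality. rules maps (state, symbol) to
    (new_symbol, move, new_state); the 'halt' state stops the machine."""
    tape = dict(enumerate(tape))       # sparse tape; '_' is the blank symbol
    head = 0
    for _ in range(steps):
        if state == 'halt':
            break
        symbol = tape.get(head, '_')
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += {'L': -1, 'R': 1}[move]
    return ''.join(tape[i] for i in sorted(tape))

# One example machine (data, not code): invert every bit, halt at the blank.
invert = {
    ('start', '0'): ('1', 'R', 'start'),
    ('start', '1'): ('0', 'R', 'start'),
    ('start', '_'): ('_', 'R', 'halt'),
}
output = run_turing_machine(invert, '10110_')  # -> '01001_'
```

Swapping in a different rule table runs a completely different machine on the same simulator, with no change to the simulator itself; that is the jump: one program that can stand in for all of them.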

So that's how we kind of relate this back to what we've been discussing previously on this podcast, which is the jump to universality for computers. But could somebody maybe explain to me now the relationship between the jump to computational universality and algorithmic universality, and really universality of simulation, or virtual reality, or something along those lines? Does anybody have comments on that? I can try. Okay. Yeah.

Actually, when I looked over the notes in your email for this podcast and I saw algorithmic universality, I wasn't quite sure what you were referring to, but I can make a guess and you can tell me if I'm wrong. I'm guessing what you're referring to here is actually, I mean, maybe it's the same thing as computational universality. I mean, an algorithm is just something that can run on a Turing machine. Yeah.

A Turing machine is computationally universal when it's a universal Turing machine, which means it can run any algorithm any other Turing machine could run. So there's no qualitative difference anymore between that universal Turing machine and any other Turing machine. Yeah. Now, to get to what it means to be a universal simulator: basically, you say, well, the laws of physics are computable. And because everything that

happens in the universe follows the laws of physics, that means everything that happens in the universe is also computable. And so therefore, any process that happens in the universe, like I like to say, if it happens out there, anywhere in the universe, you could simulate it on a computer. And by simulate, David Deutsch points this out in one of his interviews, simulate does not mean that it's sort of fake or not genuine. It still means the information processing is the same. So you could, for example, simulate

the solar system on a computer. And it really is a simulation of the solar system. It's approximately the solar system, because it's not the same size, and, you know, there may be some quirks that you didn't take into account, but it might be a pretty good approximation. And so what you've done now is you've simulated the solar system on a computer, which is really an amazing feat if you think about it. I like to think it is.

And so universal simulation just means, well again, you've built a simulator, a computer that could simulate anything all the other simulators could simulate. And it also means that it could simulate anything that happens in the universe because everything that happens in the universe is governed by the laws of physics, which are computable. So this deep connection between computation and

simulation and explanation of whatever happens in the universe is, I think, explained very well by David Deutsch in The Fabric of Reality.
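The universal Turing machine idea described above can also be illustrated with a short program. This sketch is my own addition, with made-up names: a single routine that runs any Turing machine handed to it as a data table, which is exactly the sense in which one machine can run whatever any other machine runs.

```python
# A minimal sketch (my own illustration, not from the episode): one routine
# that executes any Turing machine supplied as data, mimicking how a
# universal machine runs other machines' programs.

def run_turing_machine(table, tape, state="start", head=0, max_steps=1000):
    """table maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))  # sparse tape; blank cells read as "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        new_symbol, move, state = table[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: flip every bit, then halt at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flipper, "1011"))  # -> 0100
```

The point is that run_turing_machine itself never changes; only the table does. That is the jump to universality in miniature: one fixed program, and every other machine becomes data.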

All right. Thank you. Actually, that was exactly what I was hoping someone would say, Dennis. That was better than I could have described. Let's talk just a second about this. I actually do think this is something people tend to get confused on. Sorry, can I just say one thing? I do want to add a grain of salt, because this is not my wheelhouse, really. When I say things like the laws of physics are computable, and everything follows the laws of physics, and so therefore everything else is computable too.

I know this because I've read it in David Deutsch's books, but it's not my wheelhouse. And I would want people to take it with a grain of salt. It's possible that I missed something or that David means something else by it. Oh, excellent. Okay, good, fair point.

So I've noticed that people tend to get confused as to what a simulation is. And so you explained that well, but let me throw out there kind of some of the confusion I've seen in the past from talking with people. I was talking to a friend who said that a simulation is not the same as reality. And then he used the example that a virtual chair is not a real chair. That would be silly to think a virtual chair is a real chair. And I've also heard that

as an example, that if you simulate a tornado inside a computer, the chips inside the computer don't get torn up by a real tornado. It seems to me that this view of simulation is misunderstanding something important here. It's not that what they're saying is wrong, but they're trying to equate things that it doesn't necessarily make sense to equate, right? So,

Douglas Hofstadter, he points out, well, yeah, of course, if you simulate a tornado in a computer, it doesn't tear up the chips. That's the wrong level of emergence. But if you had simulated buildings in the simulation with the tornado, it would tear those up. And so it's easy to get kind of confused at what we mean by simulation. And then Hofstadter goes one further.

And he says a simulation of intelligence is just intelligence, right? They're the same thing. There's no difference at this point. If you have a simulated intelligence that goes and writes a math paper, it's a math paper; it's not a simulated math paper, right? So there are some things where it makes sense to differentiate between simulation and reality, but there are other things where it doesn't make sense to, and there a simulation is reality.

And I think that, Dennis, that was really kind of what you were getting at in terms of simulations aren't fake. They're an actual thing that exists. And it's important that they do. And they've got a lot to do with how we understand the world.

That's right. I think there can be a sort of instrumentalism that can sneak in when it comes to simulation, where people say, well, they're just useful models or something, but they don't actually tell us anything about reality, which I think is wrong. You could fix the specific tornado example by saying, well, yes, the computer doesn't get torn up by the simulation of the tornado that it runs, but on that same computer, you could run a simulation of a tornado

destroying a computer. Right. And then it would work again. And if it's a good simulation, it would tell you how it would destroy the computer and why.

The Theory of Anything podcast could use your help. We have a small but loyal audience, and we'd like to get the word out about the podcast to others so others can enjoy it as well. To the best of our knowledge, we're the only podcast that covers all four strands of David Deutsch's philosophy as well as other interesting subjects. If you're enjoying this podcast, please give us a five-star rating on Apple Podcasts. This can usually be done right inside your podcast player, or you can Google The Theory of Anything podcast Apple or something like that.

Some players have their own rating system, and giving us a five-star rating on any rating system would be helpful. If you enjoy a particular episode, please consider tweeting about us or linking to us on Facebook or other social media to help get the word out.

If you are interested in financially supporting the podcast, we have two ways to do that. The first is via our podcast host site, Anchor. Just go to anchor.fm slash four dash strands, F-O-U-R dash S-T-R-A-N-D-S. There's a support button available that allows you to do recurring donations. If you want to make a one-time donation, go to our blog, which is fourstrands.org. There is a donation button there that uses PayPal.

Thank you.