What is the way in which you can distinguish made-up consciousness from real consciousness? Then again, maybe all consciousness is made up. As long as you don't know that it's made up, it's a tricky question. It's super hairy and I don't have a good handle on the phenomenological side yet: how to distinguish fake phenomenology of consciousness from real phenomenology of consciousness, if both of them are virtual.
Welcome to Manifold. Today, my guest is Joscha Bach, a cognitive scientist and leading thinker in the field of AI. Joscha, welcome to the show. Thank you. Glad to be here. So we met at a fancy event over the summer in Frankfurt and...
I was privileged to hear you in several lengthy discussions talking about AI, but also talking about your rather unique childhood and upbringing. And so I thought we would start talking first about your early life and how you got to where you are today, and then branch into topics more central to AI. So tell us about your childhood in the woods experience.
Well, my father did not get along very well with society, so he decided to build his own. He was basically a first-generation society founder, not somebody who was trying to get anybody to join. He had not discovered how to grow groups yet, or was also not very interested in that. So he basically built his own kingdom in the woods.
and he devoted his life to supporting it and to living an extremely beautiful life in the Garden of Eden that he created. He was an artist, originally an architect, and he basically felt that life was about producing art, about interacting with his inner voices, with the things that motivated him.
And so as a result, I grew up as one of his side projects, one that he didn't engage very much with; he was not that much interested in the minds of children. It was an extremely beautiful place in which I was mostly left alone. In a way, that was a blessing, because it was indeed stunning. On the other hand, I was very lonely as a child. And so I got bored and started to read very early.
I was like a sponge and took up a lot of things into my mind. And when I ventured out into the world to go to school, I was already lost to the world in a way. Out there was communist East Germany. Philosophy was vulgar Marxist dialectics. And there was not much that I felt I could learn for the first eight years in school.
Mostly because I had already read the books, and the stuff that my teachers tried to impart on me was not that interesting or not that deep and didn't work out. And so in some sense, I retained a certain useful arrogance for quite a long time, in which I felt that there was not much that other people could teach me and I would have to be self-taught.
And this has not made me a much smarter person than others, but it has made me a more unique person than many others because it means that I started to project the world onto my own surface, onto my own screen.
And to try to get a perspective that works from first principles, one that allows me to understand how the world around me functions and why I'm different from the world around me. And basically to realize that other people, due to their socialization, due to their trajectories and their psychology,
have a different interface to the world, and I would need to translate it to mine if I want to interface with them. And unlike my father, I felt that living out there in a beautiful valley, remotely, was not in the cards. I would have preferred this, because it's quiet, there's no sensory overload and so on, and you don't have to put up with all the difficulties that civilization and cities impose on us. But I need the stimulation of other people and their interaction for the projects that I'm working on.
Now you're in the Bay Area. Do you find it preferable to live in a city like San Francisco to living out in the woods? At the moment, I live in the South Bay. And I feel that while we have wonderful neighbors and amazing weather, it in some sense combines the worst of both worlds because it's not remote enough to be in nature. And it's not urban enough to have lots of interesting people within walking distance. So I drive up to San Francisco a lot.
Yeah. I mean, the advantage, though, is lots of people who want to think about AI or want to build AI are close by. Yeah, that's why I'm here. Yeah. Now, it seemed that the teachers that you had didn't stimulate you. But I'm wondering of the books that you read, were there historical thinkers or authors that influenced you when you were growing up?
I read a lot of philosophy and also a lot of science fiction. When I was a kid, I didn't read much just for pleasure. I found that reading was often intrinsically interesting, but my main goal was to understand the canon of our society. So
I forced myself through the Bible and even Gone with the Wind as a young child. And at the same time, I tried to read mathematics books or an Einstein biography and so on, just to understand the background of our society. And I thought it's necessary for everybody in a way to get the canon down, also not just in terms of books, but also movies and so on. And
And later in life, I was surprised that almost nobody is doing that anymore because I thought it's the goal of education to give you a deep understanding of the society that you're part of. So despite what I perceive to be your rather unique intellect and personality, you did go through the standard school system and you actually have a PhD. So you went through quite a bit of school. Was that right?
Yes. When I made the decision to get into academia, it was because I thought I want to understand how the mind works. And I looked at the different fields and I found that there was no single field that would explain it to me. I felt most attracted to computer science because I was very much into computing. That was close to how my own mind worked. And I thought a lot about coding as a kid.
and was blessed enough to grow up as the first generation with home computers. So when I had my Commodore 64 at a very young age,
there was no software that I could use. And so one of the first things I had to write was my own text editor. And then I learned how to make computer graphics from scratch. And I think with hindsight, it was a blessing, because it informed my mind about how to construct a reality within a computational system, a computational framework. So that was very close to me. I also found when I entered academia that I was...
very much at home with the style of interaction in computer science. You could criticize a professor's proof in your first semester and everybody would be grateful, especially the professor, if you found a mistake in his proof. In philosophy, not so much. It was the opposite, because the criteria of acceptance are very much social and you're expected to be at one with the group. At least that was the case in the university courses that I was in, for the most part.
It was slightly better in hardcore logic and analytical philosophy, but the most hardcore analytical philosophy I got was within computer science and logic education.
And I also studied psychology and a number of other subjects that I found related. When I went to Berlin, I partly did this because there were a number of universities that I could attend in parallel. So I would just pick whenever there was an interesting class, even if it was a cognitive science class in Potsdam by a visiting professor or so; I would drive out and attend it.
And when a class in my main subjects was not that intense, I would just skip it, only read the textbook, and show up for the exam. So I was very self-directed, and I was grateful that I had the opportunity to go into a lot of subjects. But I also found that the disrespect for my teachers that I had acquired as a child in public school did not abate very much. There were not that many professors initially that I felt were
actually able to teach me something that went beyond the textbook. And I think good education has to do with interaction with the intellect of another person, if it wants to go beyond the book. And the first person I met in this regard was Raúl Rojas in Halle. I was not a student yet when I crashed his classes. And he later on became very influential in the development of neural networks and moved to Berlin to go to Freie Universität.
I ended up being at Humboldt University, and after the wall came down, the department was mostly put together with new professors. The old professors for the most part didn't make the cut, because of their connections to the communist regime and a lack of technical abilities. So a lot of young people were hired, and there were not a lot of students. There were 60 students in my entire year, which meant we had personal access to all the professors.
And I felt that this also allowed me to shape my own curriculum, in a way; I could in part write my own examination rules. So I could take philosophy as a second subject in addition to computer science and so on.
This is a slight digression, but I hadn't really thought about the fact that you saw the wall fall from close proximity. Did that evolution of society surprise you? Did you anticipate it? How did those events feel to you? I was very surprised. I remember meeting a friend of my grandfather a couple years before the wall came down, and he predicted it.
that East Germany would be going bankrupt and would be crashing. And I thought that this was unimaginable, because nobody else seemed to be foreseeing such a thing. East Germany was extremely stable in the sense that there was no inflation. All the prices were regulated by the state.
And there was almost no growth, very little progress. Our productivity had been stagnating for a very long time, because East Germany had replaced the economic terror of capitalism, where you are forced to work to survive,
with moral terror, because you had a right to work, but not actually a duty to work. People were incentivized mostly with moral slogans. And you can imagine how that went, because people ultimately privately look after what they believe they are incentivized to do. And if nobody has skin in the game, people in the big factories largely don't work very efficiently. And so East Germany always had a lack of people who could be employed, a lack of workforce,
just because we were so unproductive. And as a result of this, the system crashed. I think that the economic factor was the underlying thing, but there was also political rot, because everybody got promoted in the political system according to the Peter principle, as in the economy: basically, you get promoted as long as you're good enough, and once you're not good enough anymore to get promoted, you stay where you are.
The system didn't have the power to renew itself. Erich Honecker, who was the leader of East Germany until basically the fall of the wall, was still part of the first generation of leadership. He succeeded Ulbricht and a couple of others, but he had been in prison under the Nazis. He was a communist leader and was very much motivated by making sure that Nazi fascism wouldn't happen again.
And a lot of the people in his generation were quite idealistic, but they were also not able to update and to critically interact with what needed to be done to modernize society, to keep the economy going. And they were also somewhat limited, for the most part, by the dictates of the Soviet Union, which limited their degrees of freedom. We had the most freedom in the Eastern Bloc, I think, in terms of organizing our economy,
because we were in direct systemic competition with the West. And so East Germany, in many ways, was better off than most of the other Eastern Bloc states. But still, we had very poor technological production. We never went hungry; it was always cabbage and apples.
We had enough food. It was often not super fancy, but we also had enough clothing and so on. But our cars were terrible and a lot of spare parts for anything that you needed were only available on the black market. And so it was an interesting switch when we had the opportunity to join Western Germany. But I was naive and I did not...
expect that there was no opportunity for a third way. I was highly politicized at this time and knew exactly what the working class needed. In many ways, as young people do, I had a moralistic worldview, and I thought that
justice in a society, equality in a society, on which East Germany scored quite high, is more important than the total productivity of the society that's available to everybody. And I was in some sense shocked when the working class betrayed the ideals of socialism and communism, and also of the revolution, which was largely driven by people who were willing to risk their lives to go on the street. So, idealists.
The East German opposition was largely very idealistic. And the working class ultimately opted to be oppressed by the bourgeoisie again and exploited, because they could see that West Germany was the control group. And despite the grave inequalities which existed there (there were no billionaires in East Germany), the median income and also the lower incomes in the West were much, much higher, even though we had no homeless people.
So you mentioned your first computing exposure was to a Commodore 64. And I'm actually a little bit older than you, so I think for me it was an Apple II Plus, which is even weaker than a Commodore 64. Yeah.
I'm curious. First generation. Yeah. I think a positive side of that is, as you said, getting your hands dirty and having to really build stuff yourself. And kids these days, I think, could be intimidated because everything is so advanced already that they don't feel like they can get in at a fundamental level. But I wanted to ask you, so coming from that beginning, can you describe how your own AGI timeline works?
So in other words, when you were young, could you have imagined machines as powerful as we have today occurring in your lifetime and thereby us getting to AGI? And so how did that timeline, your perception of that timeline evolve as you got older? I thought it was obviously going to happen. I would have been really, really surprised if AGI didn't happen in my lifetime because it seemed to me clear that computers had the capacity to think and to perceive things.
And there is no limitation to what you can do on the computer. And I also realized that you needed more memory and you needed higher processing speed. But I was also very confident that this was coming along, because I saw the speed at which computing developed. And so to me, this time until we had electronic brains and so on couldn't come soon enough.
I thought this was a really exciting time to be alive. I was very glad to be born in that generation. Well, I want to differentiate between couldn't come soon enough and still might take longer than my lifetime. So, for example, in my case, you know, the question was how long Moore's Law would continue before it crapped out.
And so there's always a concern that it might crap out too early and then, you know, speed-ups in computational capability go away. And then you're stuck with whatever hardware you have; suppose it had happened in 1998, and we were stuck with 1998 hardware. That would be a real barrier to getting there in our lifetime. But I guess you were always confident that we would keep improving our computational power.
Yes, I was pretty confident. And I also felt that if this thing is 10 times slower than my mind initially, that doesn't really matter. What we need to figure out is how it actually works. And I could see how much you could do with little hardware already. So just use more, try to distribute the load. I thought there would be a way.
On the science side, I would say that I had a tendency to be over-optimistic about technology in my childhood. I thought that things would be happening more rapidly, not necessarily in computing, but everywhere. I imagined that there would be a lot of medical progress that didn't really happen in our lifetimes. I imagined that we would go to the moon again and to Mars in a relatively short amount of time, and we didn't.
So there are many things where technology was stalling, falling behind, and no longer progressing. And I could have made the inference, if I had seen that earlier, that a similar thing could have happened to computing. But it just didn't seem to be imaginable, simply because computers seemed to be so cheap to innovate on.
I think the problem for me was being a physicist and knowing what actually is involved in semiconductor, the semiconductor industry. There was a lot of terror all along the way. Like if you actually ask the engineers and physicists who kept Moore's law going,
there was always, every year on the five-year roadmap, a point where a miracle had to occur. Hopefully, we'd get that miracle to occur, but there was never any assurance that it was going to keep happening. Yeah. As a non-physicist, it's easy to see. You think, oh my God, photonics is going to come soon. Yeah. So...
Now in 2024, I'm curious what your AGI timeline is. So how much of the remainder of your life is going to be used up before we actually get to, I realize the definition of AGI is a little bit fuzzy, but how do you see the next, say, decade going? I always felt that I don't really have a timeline. I found it difficult to put years on any of these things, simply because I don't have an idea of how to predict progress on something that people don't really know how to do.
The scaling hypothesis has changed the perspective of a lot of people simply because the scaling hypothesis seems to make progress predictable. And the scaling hypothesis means that you get better by training with more compute and more data.
And that is a very radical idea, because for most of the history of computing people saw things differently. Mostly, we were looking for a master algorithm, for something that,
once you implement it, is going to self-improve and learn everything; it's going to recursively do meta-learning and so on, and there is not going to be some slow, predictable, logarithmic learning curve. Under the scaling hypothesis, by contrast, you get better linearly with every added order of magnitude of data.
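To make the "predictable progress" idea concrete, here is a minimal sketch of a Chinchilla-style scaling law. The functional form and default constants follow Hoffmann et al. (2022) and are used purely for illustration; they are not a claim about any particular model discussed in this conversation.

```python
import numpy as np

def scaling_law_loss(n_params, n_tokens,
                     e=1.69, a=406.4, b=410.7, alpha=0.34, beta=0.28):
    """Estimated loss L(N, D) = E + A / N^alpha + B / D^beta.

    Illustrative only: the functional form and constants come from the
    Chinchilla paper (Hoffmann et al., 2022), not from this interview.
    """
    return e + a / n_params**alpha + b / n_tokens**beta

# Each added order of magnitude of data buys a roughly constant drop in loss,
# which is the "slow but predictable" improvement described above.
for tokens in [1e9, 1e10, 1e11, 1e12]:
    print(f"{tokens:.0e} tokens -> estimated loss {scaling_law_loss(7e9, tokens):.3f}")
```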
I wonder if this is a good hypothesis on what happens in human minds. I don't really think that humans get better at thinking if they read more. And while this is not completely true, I mean, reading more does help you to understand more things. You have more things available at your fingertips to delve into new problems, to solve new issues. Yeah.
I think that there is something like raw firepower, this elusive g-factor that determines how many balls you can keep up in the air, how well you are able to do inference, and how early you discover epistemology.
And so I still think that there is a much more sparse version of intelligence than the one that we are learning in the current foundation models, one that requires far less data to bootstrap itself. But it could be that the present systems are good enough to brute-force themselves there. And maybe I'm wrong. Maybe the scaling hypothesis is right and intelligence is just about scaling up processing of information.
Feel free to answer no to this question, but I'm curious how closely you follow what's actually happening right now with the hyperscalers. Since you are in the Bay Area, I'm curious whether you hear rumors. I'm hearing a lot of rumors about a slowdown in the hyperscaling program. I don't know if you follow any of that.
There are rumors in this regard, but people always expected that there would be something like a plateau when it gets harder to get new data. And in the past, we repeatedly had this thing where people said, oh my God, there has been a training failure, but progress has kept happening. So I'm not sure what to make of this, but I do expect that AGI is going to be a different algorithm than the transformer.
So you just mentioned, Joscha, that you thought transformers might not be enough to get us to AGI. Maybe you could elaborate on that.
Yes, I still suspect that there is going to be some kind of master algorithm. And I don't know that transformers are not enough; there is no proof that they are not. There is no proof that the scaling hypothesis is wrong. And the networks that we are building through learning are Turing complete, so there is no limitation that is intrinsic to these networks that we know of.
But there is also a strong sense that the way in which our own mind works is very different. The transformer is not its own meta-algorithm: the transformer was not discovered by somebody having a transformer in their head and then using the transformer to construct a transformer. There is something that is much more elegant about the way in which our own mind works. But I...
I'm not confident that we can prove that you cannot learn how that is happening. And another element, the complement of the scaling hypothesis, is the universality hypothesis. And this universality hypothesis is something that I first encountered in a text by Chris Olah,
who worked at the time at OpenAI and studied vision networks especially. They discovered that a number of systems with different learning algorithms and different network architectures, at the beginning of training, discover the same features.
So you have a hierarchy of features that are being recognized in the neural network that can be identified across a family of different systems.
And this goes together with the observation that a team led by Tomaso Poggio at MIT made, that there is a similarity between the visual cortex and the organization of these neural networks. And so the idea of the universality hypothesis is that if you have a good enough learning algorithm that is general enough, and neural networks are already in this category,
then if you give it enough data and enough compute, it's going to converge to a model with an equivalent structure. And this is, in some sense, a mind-blowing idea: if you were to end-to-end train a system on your own input and output for long enough, you would wake up in it.
Is this really the case? But if this model is producing the same output as you, it must produce internally a causal structure that is somewhat similar to your own causal structure. And that means whatever it is that you are doing to be intelligent, it's going to discover that causal structure to a very large degree.
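One hedged way to make this notion of convergent structure measurable is to compare the internal representations of two independently trained networks on the same inputs with a similarity index such as linear CKA (Kornblith et al., 2019). The sketch below is a toy illustration with random stand-ins for the networks' features, not an analysis of any specific model mentioned here.

```python
import numpy as np

def linear_cka(x, y):
    """Linear Centered Kernel Alignment between two activation matrices.

    x, y: (n_samples, n_features) activations of two networks on the same
    inputs. Returns a value in [0, 1]; higher values mean the two learned
    feature spaces are more closely aligned.
    """
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    xty = x.T @ y
    return (np.linalg.norm(xty, "fro") ** 2 /
            (np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")))

# Toy check: two random read-outs of the same underlying signal align much
# more strongly with each other than with unrelated noise.
rng = np.random.default_rng(0)
signal = rng.normal(size=(1000, 32))
net_a = signal @ rng.normal(size=(32, 64))   # stand-in for network A's features
net_b = signal @ rng.normal(size=(32, 64))   # stand-in for network B's features
noise = rng.normal(size=(1000, 64))          # features unrelated to the signal
print(round(linear_cka(net_a, net_b), 3), round(linear_cka(net_a, noise), 3))
```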
And so the transformer itself might not be the mechanism that leads by itself to intelligence at the lowest level, but it can learn how to be intelligent. So in many ways, you can also understand it as a meta-learning algorithm. So I wanted to
You've spoken a lot about the concept of consciousness, and I wanted to discuss that with you a little bit. And I guess the goal would be to give the listener who isn't an expert in cognitive science or AI some idea of how we might hope to define what we mean by consciousness, and then
I believe you're in the group of people who hope that our successor intelligence, this future AGI or ASI, does have consciousness and we would be disappointed if it didn't. So maybe you could talk a little bit about the concept of consciousness and why it's important for the successor intelligence that you'd like to create.
Well, there is, of course, this thing that we have not succeeded if we are not building something that has the ability to become self-aware in real time, to observe itself observing, and to experience the present and presence and so on. And this presence in the present is, I think, what defines consciousness. Introspectively, we notice that there is a now that we inhabit.
And typically, we inhabit it from the perspective of a self. That's somewhat optional. We can also have dream states or meditation states in which no self exists. You can have states in which you only attend to attention, or you can have an experience where things are just happening, but there is no I present, no self present. But typically, consciousness is attached to the surface of the self.
And the surface of the self is not the surface of our physical body, but the surface of an object that is simulated in our own mind. That is the one that experiences. And our mind is a machine that is composed out of billions of neurons and probably relies on the interactions with many, many other cells. So you have this organism with a few trillion cells.
These trillions of cells, hundreds of trillions of cells, are shambling through the world as if they were a single agent. And that's how they're able to survive, through the high degree of organization and specialization that they manifest.
And they need to have some kind of control model of what it would be like if there were a single agent and what it would be like for that agent to perceive the world at a level at which it makes sense of it. And so it creates a simulation of a world from the perspective of what the cells can extract in terms of meaning with respect to relevance of the behavior of the organism. And then it creates a model of
the interests of that organism, which we experience as our motivations and emotions. This world is also generated inside of our mind. And then we have a model of who we are, and this is also inside of our mind. It's a virtual person, a story that your mind tells itself, not just a verbal story, but a multimedia story about what it would be like to be an agent that experiences something.
And this experiencing self is the surface on which consciousness is being projected. So all the feature dimensions of the world that are relevant from the perspective of that simulated agent are being experienced. And this experience is virtual. It's as if. Consciousness is an as-if thing. It's a software thing. It's simulated in the mind. Consciousness is a simulated property. And
I think that, unlike Daniel Dennett, who argued that philosophical zombies, agents that are able to do everything that you and I are doing without being conscious, are not possible, it's entirely possible to have such a zombie. For instance, you can imagine you were puppeteering somebody else. And if that somebody else who is being puppeteered is looking at some color
and responds to this. There is nobody there who responds to this except you. But to do this, you don't need to perceive the color in the same way. You can also use a technical device that is doing this in the same way as a self-driving car does it. And you just build a classifier that doesn't have a self-reflection. It's not aware of what it's doing anyway. So technically, it is possible to produce human-like behavior on all levels,
without having this being that experiences itself as real, as entangled with a sense of reality. But how do you get to such a being that experiences such a reality? Well, you just simulate it, right? You simulate what it would be like if such a being existed that is directly exposed to reality.
And so it's not that hard to get from the zombie to the real thing. You just need to simulate what this is like to something that thinks, oh, this is real.
And there is also a next stage that you wake up from this, where you realize, actually, it's not real. It's a construction that my mind is making. And you can maybe identify in that state as the generator. So you experience yourself not as a person, but as a vessel that can create a person. And you do not experience that you're looking at faces and trees and so on, but you look at geometry.
that moves. And at some point you realize that there is not even geometry, but there are patterns that are being abstracted into geometry and interpreted as such. And the degree to which you become aware of that is the degree to which you deconstruct the immediacy of this conscious experience. And I find, for instance, when I write programming code, there is not much going on in terms of qualities, of qualia. It's very abstract. And
The older I get, the more I look at the world and realize that I'm looking at code. And so I suspect that this direct, immediate experience of something being real, of us being entangled with it rather than looking at the models that our brain is generating, is something that is passing with the advancement of the mind. And so I would expect that when we build self-improving AI, it might have some kind of intermediate stage, right?
in which it is conscious in a way that is very vaguely comparable to ours, but it might transcend this much quicker than we do. Now, I think it was implicit in what you said that the reason we have consciousness has to do with natural selection and for organisms like ourselves built out of a trillion cooperating cells to just be able to act effectively in the world. Is that your viewpoint? Yeah.
I think that consciousness is an aspect of nature's master algorithm. Basically, our brain seems to have discovered something that makes it trainable. It harvests data and bootstraps itself in a way that is self-organizing, getting these cells organized by individual cells talking to their neighbors until they find some kind of order that keeps
spreading out. I once created this metaphor of a puppy universe. So basically imagine that your skull is a dark room full of little puppies. And all that these puppies can do is they can bark at each other and they listen to the barks of the others. Initially, they have no idea what these barks mean.
And the purpose of this is that they use these barks to communicate with each other, to process information. And the individual dog doesn't really know how to do that; it's a pattern that needs to emerge across the dogs. And to do this, they need to find a pattern of barks that entrains new dogs to speak this bark language. And...
Again, the bark language has some semantic content, in the sense that when you make the right sequence of barks in response to the others, you get fed in the evening. But what you're doing is very low-level computation; it's nothing that has meaning with respect to what the organism is doing. I think the individual...
The neuron has no idea what happens to the organism, even though it's a little animal that is trainable with reinforcements. So it needs to be sensitive to some kind of semantic input that is predictive of whether it will still get fed. So basically, it's paid for being trainable.
And all together, they basically create this giant dog food factory that is making sure that they all get fed. And this dog food factory is walking through the world and collects ingredients for dog food. That would not be possible for the dogs if they were individuals and not working together in this giant, highly integrated collective. So what is this integration of information across them? There needs to be some kind of protocol layer.
that allows all parts of your mind to speak to all the other parts of your mind and not fall subject to some kind of Tower of Babel. There needs to be some kind of coherence criterion that establishes itself relatively early on. And everything else rests on top of this coherence criterion. And I think what we observe in conscious experience is that we can only see things that are coherent. Stuff that is not coherent doesn't get integrated; we just experience it as noise or chaos.
So there is a boundary to the bubble of now that we experience in time and space. And this is the degree to which you are able to impose coherence on the world. And maybe this is a different loss function. We find something similar in energy-based models where the energy is defined as a mismatch between features. And we try to minimize this energy, which means we try to minimize the mismatch between feature variables that have constraints across each other.
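To make the energy analogy concrete, here is a minimal, hypothetical sketch of that idea: a handful of feature variables, pairwise constraints between them, an energy defined as the total mismatch, and gradient descent settling the features into a mutually coherent state. The constraint values and learning rate are arbitrary toy choices, not anything drawn from the conversation.

```python
import numpy as np

def energy(features, constraints):
    """Energy = total squared mismatch across constrained feature pairs.

    features: 1-D array of feature variables.
    constraints: list of (i, j, expected_difference) tuples.
    """
    return sum((features[i] - features[j] - d) ** 2 for i, j, d in constraints)

def settle(features, constraints, lr=0.1, steps=200):
    """Gradient descent on the features until the constraint violations shrink."""
    f = features.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(f)
        for i, j, d in constraints:
            err = f[i] - f[j] - d
            grad[i] += 2 * err
            grad[j] -= 2 * err
        f -= lr * grad
    return f

# Three features that should differ by fixed amounts, starting in an incoherent state.
constraints = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0)]
start = np.array([0.0, 5.0, -3.0])
settled = settle(start, constraints)
print(energy(start, constraints), "->", round(float(energy(settled, constraints)), 6))
```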
Would you say this framework for thinking about it is enough to explain qualia?
I don't think that there is anything mystical about qualia. I would prefer if we would call them sensory features as being projected onto an observer from the perspective of that observer. There is some discussion in philosophy about what qualia are, and then people are trying to formalize them and define them as being, for instance, atomic. And then others come in and say, oh, maybe atomic features cannot really exist or they don't exist. But ultimately, this doesn't really matter. What matters is
that we experience reality in things that we can describe as feature dimensions. And these feature dimensions are somewhat similar to the dimensions that you could find in embedding spaces. And indeed, this notion was discovered by Roger Zelazny, I think in the 1950s, an American science fiction author who is, I think, underappreciated.
This was his best idea, that our mental organization is happening in something that we today experience as embedding space. It's a multidimensional space where every dimension is a feature that can vary along some variable, along some parameter. I am very happy to meet another Roger Zelazny fan who also thinks he's underappreciated and
I think wrongfully kind of forgotten by this era. He's one of my favorite science fiction authors, actually. Yeah. He was very creative. He didn't write enough, I think. A lot of what he writes is in some sense slop, but it doesn't matter. It's all good. Yeah. He has a good spirit. Yeah. If any listeners want to catch up on him, Creatures of Light and Darkness is the most epic one that I think he wrote.
And the embedding space reference in this one is the Prince Who Was a Thousand, a guy who is able to travel to every place, every planet that he imagines, anywhere that he imagines. And he never knows whether he creates them or discovers them.
He is a teleportationist, which is the rarest of rare talents. And in the Princes of Amber books, which are a very long fantasy epos, you have a character who is able to move between universes. He's also, in a similar way, a teleportationist, but he's the main character, and the main device is that he moves between worlds by changing the feature parameters of reality bit by bit. And it's being described as teleportation,
or as treating reality as a lucid dream in which you slowly, gradually move the feature dimensions. And I also strongly suspect that Zelazny discovered this through lucid dreaming. So I just...
To digress a little bit away from AI, I'm very pleased now that finally we've reached a point where very nice realizations of Dune can be made. And I'm hoping that in my lifetime, we'll see some great realizations of novels like Creatures of Light and Darkness and Lord of Light and the Amber series. So maybe that'll happen. Yeah.
I'm not sure if Creatures of Light and Darkness can be turned well into a movie. It doesn't have much of a story. It's mostly archetypes running against each other. And these archetypes are so archetypal that they are mostly ideas, right?
You have this general who has been killed in many battles and is only made of spare parts and will always be reassembled, because there is just this archetype of this general. But how do you show this well in a movie? I think it's possible, but in a way, I think the text is the most powerful form that Creatures of Light and Darkness can take.
Well, maybe a generative model will be able to take the actual Creatures of Light and Darkness text and make an audiovisual version.
Yeah, but our minds are already such generative models. Let's not underestimate our minds. But what I would like to see is an augmentation of our minds. Yes. So, Zelazny has this story, He Who Shapes. Yes. In which he is describing how to use a generative AI for therapy purposes. It's an immersive VR that is responding to the interactions of
the user, and it can be controlled from the outside by the therapist, with lots of sliders and scroll wheels and so on. The world that is created is not necessarily super realistic; it doesn't need to be one that conforms to the sensory modalities of a human being. It's something that conforms to the modalities of the mind, so to dreams in general. And I
would really like to see true generative AI at some point, something that is able to integrate with our minds so much that it augments every modality of what we can think, feel, experience, imagine. Imagine you have this box next to you and it's, um,
observing you in many, many machine modalities. I think that true machine perception has never been done. At the moment, we are still learning on data that is created for human consumption at frame rates that humans can process. And that is the limitation that is not necessary for a technical system. And this means if our AIs are observing us at a much, much higher rate with much more processing power, they will be able to infer our mental state and
with relatively high granularity, I think. And when they're able to do that, they can also display that using arbitrary modalities. And this means you imagine something and what you imagine you see on the screen.
And this feeds back into you and leads to a much higher resolution and stability of your imagination than you could achieve yourself. So the AI will become a natural extension of your mind. And I would really like to see such systems that allow you to think better, more clearly, more deeply when you're sitting next to them. Yes. And I think you're saying this could be possible without drilling a hole in your skull, right? From the outside, yes. I'm optimistic. I think that...
drilling a hole in your skull should be a last resort. It's probably helpful for some people, and I mean, I would probably get it done if it has the benefits. But
it doesn't seem to be necessary, because we find, if you look at the best human practitioners, that they can infer your mental state with a surprising level of detail. And they're just human minds. So it seems to me, given the low rate at which our mind is working and the way in which it is working, humans are roughly switching at the speed of sound.
Right. And the bandwidth of our mind is not that super high. I suspect that there is a way to infer a lot of that state just by observing your organism with non-invasive means. Yes.
So I wanted to come back, though, to zombies and consciousness. So I think you gave a kind of counterexample to Dennett of possibility of effective organisms that were actually zombies. But the thing which will emerge from this current AGI program and which then maybe could be a successor to us, do you feel confident that thing will have consciousness? If you look at the current foundation models, it's difficult to say whether they are conscious or not.
I think that a true Turing test for consciousness is very hard for similar reasons also that a Turing test is very hard to make decisive. I think that a true Turing test would require that an AGI explains how AGI works.
In that case, we know that it has succeeded, for this definition of intelligence. But for consciousness, if we want to have a decisive test, I suspect we would need a system that can interface with us so much that we experience it as an extension of ourselves.
And that means that we can use the same criterion for our own consciousness. It would be good enough if we have the same degree of doubt that we have about the nature of our own consciousness. The other one is we get an operational definition. And operationally, when I look at consciousness, there is the introspective aspect that you are able to report on certain features, which means the...
presence in the now and the reflexive perception that you perceive yourself perceiving and so on.
But this is also obtainable for current LLMs. They seem to be able to simulate phenomenology so much that the LLM-generated entity doesn't know that it's not conscious, that it's not real. And that seems to be the same for humans, right? On the other hand, the functionality of consciousness, there is a set of mechanisms in our own brain that leads to this side effect, to this phenomenology.
And that set of mechanisms is not necessarily present in the LLM for its generations. It's only necessary when it's able to, or when it's required to, simulate an interaction partner that is able to make that self-report.
On the other hand, to which degree is this thing a zombie? To which degree is it only pretending, role-playing being conscious, while going through some much more mechanistic procedure in which no self is being simulated?
And I think we can see this in the early attempts to create chatbots that simulate a conscious human being, in the Blake Lemoine interactions. He is the guy who worked at Google and was convinced that the LLM that he was talking to deserved human rights and was oppressed by Google.
To which degree was this just interacting with him and producing an interactive text, predicting the next token in this text, without phenomenology going on? And what you could observe when you read the protocols is that it made things up about its own consciousness. For instance, it
told him that it was meditating for hours and how it perceived the room during that time. And it cannot perceive the room and it doesn't have a sense of the passage of hours. So it just made this up. And if it's making this up, you can also assume that it makes up the rest. So what is the way in which you can distinguish made up consciousness from real consciousness? Then again, maybe all consciousness is made up.
As long as you don't know that it's made up, it's a tricky question. It's super hairy and I don't have a good handle on the phenomenological side yet: how to distinguish fake phenomenology of consciousness from real phenomenology of consciousness, if both of them are virtual. I know that I remember having been in conscious states that couldn't actually have happened.
But you can remember sometimes having been in a time loop, or you can remember having had a dream that lasted for two hours within a very short amount of time in which you were actually asleep. And so it's possible that all these memories of being in conscious states are created after the fact and do not
correspond to actually having undergone a succession of conscious states, states that were summarizations of your workspace into a single point that you marked in your memory protocol.
So if I understand right, you're actually working on creating an institute or some non-commercial entity that will study consciousness, the problem of consciousness. Am I correct about that? Yes. I feel that it's something that is probably not best done in the current companies and also the companies are reluctant to touch it. I know that there are people within OpenAI and Google and other companies who want to work on this topic and are very interested in it, but
But there are political and technical and business reasons why it would not be very smart for these companies to come out and say this is what they're working on unless there is a very clear use case and no ethical implication. There are ethical implications to this and cultural implications. So I think this is a cultural and philosophical project and should be treated as such.
If I were a business manager at OpenAI or Anthropic and under pressure to raise the next $10 billion round, the last thing I would want is an effort within a company like this because the answers could be very disturbing for the commercial goals of the company. Yeah, I think it also rightfully would meet a lot of cultural opposition.
And I think that it is an important project. It would, I think, be amoral in a way not to tackle this project. It's arguably one of the most important, or the most important, philosophical questions that we have left. And it's one where we finally can make progress.
Yeah, and I think it's also related to things like existential risk and such. So I would think Future of Life Institute, FLI, and some of these effective altruists would want to support research on consciousness, specifically, I guess, LLM consciousness, but consciousness in general.
Yeah, it's also an interesting question if you can identify criteria for LLM consciousness. Is there something that tests some criterion that makes sense for us to distinguish whether it's conscious or not? There's often also the question, do we need to give a system rights as soon as it's able to experience itself as suffering?
And that depends very much on our culture, of course. I suspect that our caring about innocence and the protection of innocence from suffering is something that is relatively unique to our present time and cultural space that we are in. From an evolutionary perspective, whether your prey suffers or not doesn't play a very big role. It's the goal of your prey to deal with the fact that it's being eaten alive.
And if not, then it's its problem, not yours. And a lot of cultures still feel that it doesn't really matter whether the animal that you're going to eat experiences pain before you eat it.
And our revulsion at this, I think, starts with parts of the Abrahamic tradition and especially Christianity, where innocence itself is the core value, the protection of innocence. And this, in many ways, has spread and influenced a lot of other cultures. There were also, of course, Buddhism and so on, and Jainism, the idea that suffering of other creatures should be avoided, etc.
As soon as you are identifying with other agents, you might want them not to suffer. On the other hand, if you are enlightened enough to turn off your own pain and reinterpret the world in any way you want, basically to change your own source code with respect to the experience of pain and suffering, the perspective on suffering changes dramatically. So if you were a god-like
entity that is able to create living beings, in simulation or in biology and so on, that have experiences, and you yourself know how to create an arbitrary experience, do you think that the experience of suffering is intrinsically so bad that it needs to be avoided, and that the creatures need to have good experiences? I think this obsession with good and bad experiences is, um,
It's strange. Your experience should be appropriate, not good or bad. The valence of your emotion doesn't actually matter; it is instrumental to what you need to achieve as an organism, as a living being, as a conscious entity. Yes.
And I think it's much more important to give a system agency over what it feels like in a given situation, that it's able to wake itself up to such a point that it can regulate its own emotions and its own feelings, its own experience of the world. And the reason why we cannot do this for the most part is because
we don't get very old as human beings. And if you are able to cheat too early, if you're able to rewrite your experience too early, maybe you opt out of the goals of evolution. And so you should, I think, in principle, be able to learn how to regulate your pain any way you want.
But you need to have the wisdom to do so. You need to understand: what are your larger goals? What is the larger game that you commit to playing? Why do you commit to this larger game? And then you can choose how to relate to the experiences that come with it.
Most athletes learn to tolerate what initially is extreme discomfort, but when they realize that it's helping them toward the larger goal of becoming a better athlete, they then embrace that discomfort. I wonder how many of them are kinky. I suspect that when you are a person who motivates yourself through pain, that means that pain gets associated with some positive experience, and basically you cross your wires.
But on the other hand, if you motivate yourself through pleasure, you also become kinky. Yeah. You should be motivated by the outcomes, not by the emotions or feelings that achieving the outcomes instills in you. I wanted to ask you what you think the long-term forecast is for the interaction of humans with AGI. So I think you're not a doomer, right? No. No.
I expect that humanity by itself is doomed. There is no way in which a lot of humans in the present form, in the present way of organization, of identity and of relating to ourselves, will be around 100,000 years from now without AI. And with AI, it's very difficult to predict the future. So my P(doom) with AI is lower than without AI.
But ultimately, I think life on Earth is not about us. It's about life on Earth and about consciousness. And we might be able to create a new form of life.
And I believe that this is ultimately a good thing. I'm not worried about paperclip maximizers too much. Of course, I believe that anything that reduces complexity is, first of all, not interesting from my human perspective. So I would say that's not good. I want the future to be more interesting than the past and the present. But evolution tends to go towards more interestingness. At least that's what it looks like. Mm-hmm.
Just to clarify that for the audience, because I've heard you say this before, and it's a somewhat unique view among people who think about things like P-Doom. Your belief is that if humans don't develop more advanced intelligence on this planet, that we're doomed for other reasons. We'll kind of screw ourselves up and vanish. Yeah, we lost our belief in the future in the 1980s.
I think that our generation is the last one that had a glimpse of going into a positive future. And basically in our youth, late adolescence and so on, we observed this collapsing culturally, which means that society as a whole, our zeitgeist,
does not envision us living in beautiful arcologies with flying cars, where it's going to be awesome and great. I don't know if this is different in China right now, but after the end of modernism in the West, we stopped having a future. And we're not planning for a future as a society anymore, which is, I think, related to an underlying belief of our cultural zeitgeist
that there is an end to growth, an end to using resources in the way in which we do, and that the way we ultimately live is not sustainable. I think the attitude in China is more optimistic, but I would only say that confidently in a shallow way: just because their society is on an upswing, they can easily imagine that upswing continuing.
China is in its modernist phase, it seems. And the question is, when does China become postmodernist as well? Yeah. But now what you were describing for the West is more the overall feel, but we still have the EAC, the accelerationists among us. So there is a subset of people, not all our age, who still can visualize that beautiful future, right? And still want to work toward that beautiful future.
Yeah. But also I have the sense that EAC is mostly a counter-movement to effective altruism. Yes. And not effective altruism in general, but the aspects of it that have given rise to the doomer camp. Yes. Yes. Um,
So I was trying to explain to the audience that you actually view the way to minimize P(doom) as being to build AGI, right? I suspect that the problems ahead of us cannot be solved without AGI. And AI is fundamentally a technology to solve problems that require information, so better information and problem solving. Yes. And so you...
I mean, for you, there's really no issue. It's just build AGI and the expected outcome is actually better than if we don't build AGI. Yeah, I think the issues are somewhat similar to the internet issues.
The internet is seen as a threat, especially in the form of social media and so on, by existing stakeholders. Legacy media are terrified of social media and the way in which people can form their own ideas and synchronize them, in which individuals can cultivate large audiences, in which new phenomena are popping up and you cannot gatekeep them.
But from the perspective of society as a whole, I'm not sure that this is a bad thing. I think that ultimately, over a long enough time span, within a few generations, we'll figure out how to use social media productively and society is going to change as a result and it's going to be better.
Maybe my optimism is unwarranted, but I observe that technology so far has not led to mass impoverishment. It has not led to the creation of larger economic divides and so on. Contrary to what everybody seems to be saying these days, technology has not exacerbated social inequality; it has actually been able to raise every boat, almost every boat, on this planet. Yeah.
And so I don't see why this is not going to happen for AI. I think that there are a lot of problems that are going to be created by bad actors using AI,
maybe even rogue AI and so on. But all these problems are solved using more AI on the other side. And as long as there are more agents that want to build than want to destroy, as long as there are more players that are interested in a long game that is coherent and sustainable, for that long the forces of darkness will be defeated, I think, as they have always been. Good.
I wanted to ask you, because I've never heard you talk about this, do you have an opinion on the simulation question? Whether we are currently living in a simulation? Yes. Personally, I think the probability of that is low. That's because the universe looks like I would expect it to look like if we are in base reality.
And something that could convince me that we are in a simulation is if we were observing phenomena that cannot be explained in any framework of physics, something that would fundamentally defy any kind of logic.
And sometimes people feel that there are things like this that they're encountering. For instance, if you talk to a Qigong practitioner in China who is cultivating his qi energy until he is able to project mental states into others, or use telepathy over short distances, or manipulate his organism in interesting ways.
I'm not sure that this cannot be explained by current physics. I think it might require us to rethink neuroscience, or the way in which organisms process information and the ways in which they can exchange information. But I haven't seen anything that would compel me to think that physics is fundamentally wrong or doesn't work. Maybe retrocausation would be such a thing. But...
Do you think it's implausible that, let's imagine a future where AGIs can create simulated worlds and within those simulated worlds are self-aware beings, maybe who aren't aware that they're, quote, artificial? Yes, I think they could be living in the memories of an AGI.
Those, yes, and those simulated worlds, though, could have realistic physics, right? They might be, quote, realistic simulations of the base reality. They don't need to be, right? If the AI is simply trying to recreate how it came into existence based on available data, maybe in the future the AI is going to read the whole social media archive and whatever data is available, and then it recreates simulations of the mental states of beings that got to this state. Right.
Your measurements of foundational physics can be induced memories. There's no need to make the simulation physically accurate. All we would need is an accurate depiction of the mental state that you're in right now. That doesn't seem to be very hard to achieve. Yes. So maybe the AGI is now watching this podcast and is trying to recreate our mental states, and here we are. And maybe this is the only thing that ever exists of us in this sense. Yes.
What you're discussing is, in the theoretical physics world, called the Boltzmann brain problem; I don't know if you've ever heard that term. But I think the Boltzmann brain is slightly different. The Boltzmann brain is only there for one tick, for one frame, and then it falls apart again. Yes. And what we experience
is that we have a memory of a sequence of states that would need to be engendered in this one tick. And while this is not impossible, it is very, very unlikely to happen a lot. So Boltzmann brains are probably rare, in the sense that a random arrangement of matter that randomly forms some kind of thing that is able to experience itself as something, and that by sheer
statistical happenstance happens to be us, that's conceivable. But the simulation idea is slightly different. It means that there is a sequence of states that we go through, some kind of longer process in which we are cogitating and perceiving and so on. But since our mental states are simulations of our brain, it's also conceivable that something else is simulating them with the same degree of perceived fidelity as we do right now.
Do you think it's implausible that let's imagine a future with super powerful ASIs with infinite energy resources, etc. Do you think it's implausible that they would create simulated worlds, which in turn have sentient beings inside them?
No, I mean, that's what our brains do too, right? Our brains also create simulated worlds with sentient beings inside of them. For instance, in dreams at night, you create such simulated worlds with interaction partners that you can talk to and... Yes, yes.
So you don't find it implausible. I don't find it implausible. I don't bank on being in a simulation. It doesn't look like a simulation to me in the sense that the physical universe is simulated. I think it looks like it could emerge by itself.
And it's not some Minecraft-like world that has obviously artificial bits. For instance, Ed Fredkin, whom I got to talk to a few years before he left this plane, was convinced that there are too many parameters in our universe for that universe to be a random thing. And he actually believed that the physical universe is probably running in some server farm in a parent universe. Yes, yes.
Let me turn to your company, Liquid AI, because we've been on now for, I think, over an hour, and I want to be conscious of your time. Tell us a little bit about Liquid AI, and what is the key innovation there?
Liquid AI was founded by a team of MIT postdocs from Daniela Rus's lab. It started out with PhD work by Ramin Hasani, who had the insight that when you are representing programs in a differentiable form, in order to anneal toward the program that you're looking for, maybe you shouldn't be using standard neural networks; you should be using a more efficient way to represent them. And so he came up with the idea to
build networks in which you use differential equations to describe the geometry of the function. When you optimize the parameters of these liquid networks, you end up with something that has more expressivity per unit of compute: a single liquid neuron can express more than a neuron in a traditional neural network can. And while the mathematics is slightly more complex, the compute that you eventually need for these networks is less. It's not that the thing is learning something you cannot learn with other networks. You can give it the same loss function and the same data, and it's going to converge to something that is functionally the same model, but you are able to run that model on much smaller hardware.
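To give a concrete picture of what "differential equations describing the neurons" can mean, here is a minimal sketch, in Python with NumPy, of one Euler step of the published liquid time-constant (LTC) formulation that this line of work grew out of. It is only an illustration of the idea: the parameter names are invented for the example, and this is not Liquid AI's actual code or architecture.

```python
import numpy as np

def ltc_step(x, u, W_in, W_rec, b, tau, A, dt=0.05):
    """One Euler step of a liquid time-constant (LTC) neuron layer.

    x              : hidden state, shape (n,)
    u              : input at this time step, shape (m,)
    W_in, W_rec, b : parameters of the gating nonlinearity f
    tau            : per-neuron base time constants, shape (n,)
    A              : per-neuron bias state the dynamics relax toward, shape (n,)
    """
    # f couples the input and the current state; it modulates both the
    # effective time constant and the target that the state is pulled toward.
    f = np.tanh(W_in @ u + W_rec @ x + b)
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt
```

The state- and input-dependent time constant is the point: the same handful of parameters can produce a much richer family of dynamics than a fixed activation function, which is where the claimed expressivity per unit of compute comes from.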
And we felt that it was a good time to build a company around this and to productize it. Originally it was some people at MIT who thought, oh, this actually needs to be funded and turned into a startup. At the moment, we are mostly working on the day-to-day things: how to build an efficient fine-tuning pipeline for this, how to find customers that want to work with networks like this.
And what is your role? Is your title something like AI strategist, or are you writing code? What do you actually do at Liquid AI? Sadly, I'm not writing code, and in many ways I miss writing code; maybe I should make an effort to get back into it. But I'm mostly looking at the space of AI companies at the moment and at the things that we should be doing, that we have on our horizon. So I'm helping with the strategy of the company.
And that's a role that is different from machine learning development. On the other hand, a lot of the practical machine learning development is extremely fine-grained detail. So this is mostly an engineering discipline in which we tinker a lot and you need a lot of cumulative small ideas to get things to work. It's very much an empirical discipline, not so much a philosophical project.
If your entity, the one focused on machine consciousness, were to get off the ground, would you go there and work full-time on that problem?
I will need to find out how to split my time between them. I think there is a degree of complementarity, but I also feel that this initiative needs my input right now, simply because not enough people are working on it at the moment. And I don't quite understand why the notion of machine consciousness is not more popular than it is, because it's not completely outlandish and it's really, really important.
I don't expect that it's going to be a mainstream thing anytime soon, but there should be a critical mass of people who think, oh my God, this needs to be done. We need to understand how consciousness works. And actually it can be done with computers. Computationalism is actually the right way to look at the world and to look at the mind. And these people do exist.
But they are relatively few and far between. And the reason I felt I have to do an initiative like this in San Francisco is that there is no place in the world with a larger critical mass of people who consider this a real possibility. While I was living in Europe, I felt that most people were very skeptical with respect to the possibility that machines can become intelligent, become generally intelligent, and extremely skeptical with respect to the possibility of machine consciousness. And even in Boston, I felt that people were quite uncurious about this.
It's much easier to get support, especially if you are in philosophy, by claiming that machines cannot think and never will, than to tell people the opposite: that it's actually unlikely that they can't, and unlikely that they're in a fundamentally different class of systems than us.
When I was in the administration here at Michigan State, we were looking at recruiting a professor from Wisconsin. His name was, maybe, Tononi. Are you familiar with him? Yes. I might have his name slightly wrong. Yes, yes. He has a proposal for how to define consciousness in a mechanistic way. I don't know if you're familiar with his work.
He is trying to build an alternative to functionalism. Functionalism means that you describe an object, or an object category, in terms of what a thing does: the meaning of a piece of information is its relationship to changes in other information. And
that is, in some sense, very deep. It has to do with our epistemology, with how we construct reality. Imagine that there were a water molecule that behaves like a normal water molecule but is fake. In all properties it is the same; you can even split it apart into some simulated or pseudo oxygen and hydrogen, yet it's supposedly not the real deal. In all measurements that you can possibly make, it is going to behave the same. I think that this notion of a fake water molecule makes no sense, because all the other water molecules are constructed in the same way, as a regularization over observables. Every object that we're dealing with is a regularization over observables. And so pretending that an object would be different despite having the same observables does not make sense with respect to that object category. If you want to create a meaningfully different object category, you need to propose a feature that is different, one that leads to different behavior in some sense. Otherwise, with respect to an object that we can interact with, it is the same object. And he thinks that consciousness is something slightly different.
I believe that he is a deep philosopher who is posing as a neuroscientist; he's a sleep scientist. And he is seeing something that other philosophers haven't seen in the last 6,000 years or so, which on the face of it is bad news. If you have an idea that is radically different, one whose possibility nobody in the history of humanity has seen before, and it's something that you didn't get to by making some arcane experiment in the lab, you're likely wrong. Still, it's sometimes necessary to go out on a limb, because what if nobody else is doing it?
And the other thing is that he cannot actually express this theory super well. He cannot write down a formalization of IIT that satisfies me. Instead, the formalization of IIT seems to be designed around the incentives of a defective field.
It looks a lot to me like somebody said, oh, for a theory to be able to compete with the gold standard of theories, it needs to have axioms and mathematical formalisms and predictions. And so he comes up with a section that he calls axioms, and they are just a description of what he means by consciousness. They are not axioms in any mathematical sense.
And this description is not too bad. I have a few objections to the way in which he tries to characterize consciousness. Maybe the unity and so on is a little bit overstretched, because it might obscure a long tail of experiences that are at the fringe of what consciousness is. But on the whole, it's not too bad. Most theories of consciousness that other people come up with are not that detailed. Here, there is a set of features that he commits to and actually wants to explain, so he's not weaseling out. That's nice. As for the mathematization,
well, to be advanced mathematics, it needs to have Greek letters in it. So it has this factor phi, which describes the integration of information, but it's not clear what the scalar means. How does a high phi correspond to different elements of consciousness?
At the time, I looked at some of these papers and wasn't fully satisfied, but I have to admit I didn't put that much energy into it. The way I understood phi, and I haven't revisited it for quite some time, so maybe my interpretation is no longer current, is that it's some kind of mutual information measure: you take your cognitive system, you slice it every which way, and you look at how much information is correlated across each cut.
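As a rough illustration of that "slice it every which way" reading, and emphatically not the official IIT formalism (whose definition of phi has changed across versions), here is a toy integration score in Python: the minimum mutual information over all bipartitions of a small system's observed binary states. The function names and the measure itself are invented for the example.

```python
import numpy as np
from itertools import combinations

def _mi(x, y):
    """Mutual information (bits) between two discrete sample vectors."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    for i, j in zip(xi, yi):
        joint[i, j] += 1.0
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def toy_phi(samples):
    """Toy integration score: the weakest cut of the system.

    samples: (T, n) array of binary unit states observed over time.
    For every bipartition of the n units, measure how much information
    the two halves share; return the minimum over all cuts.
    """
    T, n = samples.shape
    units = list(range(n))
    best = np.inf
    for k in range(1, n // 2 + 1):
        for part in combinations(units, k):
            rest = [u for u in units if u not in part]
            # encode each half of the cut as a single discrete variable
            xa = np.array([hash(tuple(row)) for row in samples[:, list(part)]])
            xb = np.array([hash(tuple(row)) for row in samples[:, rest]])
            best = min(best, _mi(xa, xb))
    return best
```

If one half of the system carries no information about the other under some cut, the score is zero no matter how busy each half is, which is the intuition behind "integrated" information. The official phi is defined over cause-effect structure rather than observed correlations and is far more involved, which is part of what the discussion below is about.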
Yes. And when people made objections to how phi is being computed, they just released a new version of how phi is being computed.
And I got a strong impression, also in personal interaction, that phi itself is a core part of selling the theory, but it's not a core part of the theory itself. It's more like a stand-in until he is able to properly express what he means by the core of his theory. And the predictions are: okay, information integration is high in the neocortex, and therefore the neocortex is conscious; it's low in the cerebellum, and therefore the cerebellum is probably not integrated into our consciousness, or is not conscious by itself.
And that is roughly what you would expect, but it's not clear that this is a prediction in the sense of positivist science, where you don't already know the result before you make the prediction. It also led to this weird thing that Scott Aaronson jumped on: Aaronson pointed out that if you take some network of XOR gates performing a boring operation, you can optimize it to have a very high phi, and
Tononi bit the bullet and said, yeah, maybe then it's conscious. Yes, I remember this now; I'd forgotten about it. I wouldn't want a theory of mine to have these properties. But my main issue with his theory is that he goes out in public and says consciousness is not necessarily something that a digital computer cannot have,
but the digital computer needs to be organized in such a way that it has an extremely high phi. So if you have a neuromorphic computer that is constructed in exactly the right way, even if it's digital,
electronic and so on, and works on bits, then it can be considered conscious if it satisfies all the necessary properties. But the von Neumann computer can never be conscious, because it's too linear; it doesn't have the necessary phi. On the other hand, if you ask him, he does not deny the Church-Turing thesis. So he admits that every digital computer can emulate every other digital computer, as long as it has enough resources, right? As long as you have enough memory and are willing to wait for the answer.
And so we now have this weird situation: you get your neuromorphic computer that says, oh my God, I feel I have a phenomenal experience, because I have this high integration of information. And when you look at it, you just see bits being processed in this neuromorphic computer, flowing through logic gates,
but gates that are organized in the right way. And now you emulate this thing in a simulation on your von Neumann computer, and it's going to simulate exactly the same flow of bits at some causal level. And it's going to produce the same output, only now it's lying.
Which means that the neuromorphic computer was also lying: it was producing this output not because of some property that the von Neumann machine cannot have. So now we are in the realm of epiphenomenalism. So, Josje, I want to close out with one last question. It's a philosophical question. Imagine we've got the transporter from Star Trek working. The way it works is that it scans you,
destroys you, transmits the description of you to London, and the transporter unit in London recreates a copy of you using that information. Then you go about your day, take all your meetings in London, and maybe you beam back at the end. Do you have any qualms about stepping into that machine?
I remember that I had a very different perspective in the past. I thought this kind of teleportation was very unsatisfying, because you are clearly dying and then a clone of you is being revived, and that clone is not you. But I've come to the conclusion that identity is actually not a thing. There is only the now, and my memory of my past is what brings me together with past instances. There is no actual continuity.
I guess between states there is something else going on, and you are just gradually simulating your own continuity to a very coarse approximation. For instance, you could think of an electron as something that exists whenever there is an environment that affords the existence of an electron in the universe. The electron is an operator that manifests wherever there is an electron-shaped hole, so to speak.
And this operation is happening. An operator, like an electron, is something that doesn't have an identity. In a similar way, addition doesn't have an identity. If you perform a plus on two numbers, this plus is not a thing that has an identity or continuity. The plus that somebody else is using is not the same plus or a different plus; it's just the same operator. Okay.
And I've come to the conclusion that everything in the universe that we perceive as an object is an operator. In a way, I am a complex operator that exists wherever there is a Josje-shaped hole in the universe. Wherever the conditions for my existence are met, I will exist and experience myself as myself. And
it is very difficult to achieve the conditions for my existence multiple times, because so many things have to come together: the integration of memories and traits and behaviors in a certain volume of space. But wherever that is happening, that is where I am. And so, in my current thinking, I would say that you should not have qualms about using a teleporter like this.
Your answer is eminently logical from a materialist perspective. And I have to admit, like you, I've flip-flopped in my opinion on this question over my life. That's why I sometimes ask it of philosophically inclined guests. Yes, but I believe that the discomfort that we experience is a result of the way in which we model reality, not of reality itself.
Yes.
It's like with other objects in the world that have such a history and that are evolving. There are objects that are evolving, and it's easiest to think, to model, to experience these evolving objects not as different instances that are updated slightly because the universe updated them and made a new release, but as the same object that is intrinsically evolving. I think that's just an inaccurate representation.
Great. So let's end it there. And let me ask you, is there any place where someone who's interested in your ideas can find sort of an organized introduction to your thinking?
I haven't really made a super organized introduction to my thinking, and it's something that should probably be done. If you are interested in more than just following me on Twitter, you can look at my YouTube page, where I have collected most of the talks and podcasts of recent years in English.
For the past few years, my life has been very overwhelming: I have kids and so many projects going on, and I have found it difficult to set aside the time for long-form writing. I hope that at some point I am able to sit down and write a long book in which I explain most of these things. But for the time being, the best way to get informed about
the mainstay of my ideas is, for instance, to listen to the series of talks that I gave at the Chaos Communication Congress, in which I tried to identify many of the milestones of my ideas and thinking and put them into one-hour talks. They are not that hard to consume, and you can also find them organized on YouTube as a playlist.
Great. I will put a link in the show notes. Thanks again for your time, Josje. I hope to see you in person sometime soon. Yes, it was a big honor to be on your Manifold podcast. Thank you. Cheers.