So it's not so much about, you know, having a social well-being economy. It's more about focusing what technology could theoretically do and then emphasizing this. And I think that the word sustainability will increasingly be removed from our vocabulary and it will be all about efficiency. And if we tackle that, then we will also have a greener planet.
Good morning, good afternoon, or good evening, depending on where you're listening. Welcome to AI and the Future of Work, episode 342. I'm your host, Dan Turchin, CEO of PeopleReign, the AI platform for IT and HR employee service. Our community is growing. We've eclipsed a million listeners, thanks to you, our loyal listeners. If you like what we do, please tell a friend. Give us a like and a rating wherever you listen: Spotify, Apple Podcasts, etc.
And to help us grow the audience, we launched a podcast newsletter recently. If you're not already subscribed, do it. Join us. We share tips and tricks and some AI fun facts that don't always make it into the show. There is a link in today's show notes to subscribe. If you share a comment...
I just may share it with our audience in an upcoming episode like this one from Preeti in Burlingame, my neighboring city here in Silicon Valley. Preeti is a developer for an AI startup in stealth mode. She listens while exercising. Her favorite episode is that great conversation from way back in season three with Giselle Mota, popular TEDx speaker and future of work authority about how she broke barriers to enter the tech field.
We learn from AI thought leaders weekly. And of course, the added bonus, you get one AI fun fact. Here it is for today.
Annalisa Novak writes on CBS News about the recent interview between Geoffrey Hinton, neural network pioneer and Nobel Prize winner, and Brook Silva-Braga from CBS. In it, Hinton says the pace of AI development has increased faster than he expected two years back. He says it's likely machines will be superintelligent, or capable of doing any human task better than a human, in the next 20 years; his previous prediction was 30 to 50 years.
Hinton says humans will benefit from better healthcare, education, access to cures for rare diseases, and possibly ways to eliminate global warming. Yet like most tech leaders, he expects all mundane jobs, including secretarial work, call center work, and paralegal work, to be eliminated. He says there's a non-zero probability that AI will overcome guardrails and seek to harm humans. But he says we don't have nearly enough information today to claim that's a likely scenario. Of course, we'll link to the full interview in today's show notes. My commentary...
Geoffrey Hinton famously quit Google years back over concerns that it focused on AI profits over safety. He continues to criticize all of the major AI labs for their cavalier attitudes and lack of investment in guarding against low-probability but potentially catastrophic AI scenarios.
When Geoffrey Hinton speaks, we should listen. He was the most vocal advocate for AI before the recent period of acceleration. We can create a world where AI is both ubiquitous and safe for humans. Let's require everyone developing AI, whether it's Google or a startup, to build ethics and guardrails into their development and testing processes. Now shifting to this week's conversation, which I have been waiting for.
Anders Indset is a Norwegian philosopher, author, tech investor, and sought-after speaker who is often referred to as the business philosopher. In 2012, he founded the Global Institute of Leadership and Technology. He has served on the supervisory board of the German Tech Entrepreneurship Center. He was one of the early investors in the Swiss deep tech company Terra Quantum, and he's the founder and chairman of the board of Njordis Group, a venture capital and advisory company.
Njordis got started in 2022. He's since launched the Quantum Economy Alliance to initiate projects at the intersection of humanity and exponential technologies. He was also, and this is a first on this podcast, an Olympic handball player for Norway, thus making him both more accomplished and, gosh, more interesting than you, or certainly me. How in the world, Anders, has it taken us this long to have you on the podcast today?
Without further ado, it really is my pleasure to welcome you to AI and the Future of Work. Let's get started by having you share a bit more about that illustrious background and how you got into the space. Yeah, thank you for having me, Dan. That was quite an introduction.
Yeah, to add to that, I mean, I was a former hardcore capitalist. I built companies and I lived a lot on the perception of what other people think of me and was very driven by that kind of lifestyle. I toned it down and started to write books. I published seven of them and got back more recently into investing. I had investments over the years, but I've spent...
a decade or so speaking and thinking and writing about the technological topics that you also referred to in your introduction with Geoffrey Hinton. And yeah, I mean, I was born with the initials AI, so I think it was kind of natural to circle back.
The last couple of books that I have written, Ex Machina: The God Experiment, which looks at the simulation hypothesis, and most recently The Singularity Paradox: Bridging the Gap Between Humanity and AI, with my co-author, Dr. Florian Neukart, a quantum physicist out of San Francisco, Austrian born. We write about artificial human intelligence. We look at some of these implications that you were referring to on the continued progress of the exponential pace that we are seeing. So, I mean, I've always been curious about technology. Norwegian born, spent the last 25 years in Germany, father of two princesses.
And I guess just a very curious mind in general. So we opened up with the intro about Geoffrey Hinton and some of his perspectives on AI safety. When you think, as a philosopher and a futurist, about where the world and AI are both headed, do you share his concern, or where are you on that spectrum?
Yes, I wrote a book back in 2019 titled The Quantum Economy. And I didn't play with, you know, doing some kind of quantum voodooism, referring to entanglement and superposition. But I looked at, you know, the outcome of an economic model. I look at the economy as the operating system of our society, and therefore it needs to be kind of sort of stable so that we as a species can navigate and organize within that.
And I was looking at how that would unfold when we started to hand over authorities to algorithms and how AI would emerge and at the intersection of quantum AI, biotechnologies, and so on. So when I wrote that, it was an internationally recognized book, but it was kind of sort of futuristic and the theories were radical. Looking back at it, it was too conservative.
So I'm actually in the midst of rewriting that book right now. And on those 20 years, I'm referring to your point on Hinton coming down from 30 to 20, there's obviously a challenge: how do you define AI? How do you define AGI? Those are very fuzzy terms. I mean, they played with the term back in the '50s, and AI sounded cooler than kybernetes, or cybernetics. First of all, it's not necessarily artificial, because it was created by human beings, and intelligence is a very complex concept. So I think the likelihood of continuous progress with increased speed is much higher than a decrease or a stagnation. We have looked at exponential growth over the past 80 years, and no pandemic or war has slowed that down. So if you look at large language models that a lot of technology people are using,
I would say we're kind of sort of getting bored of them, because we have recognized some limitations and some challenges, and we're looking for fundamentally different approaches to build world models and the engineering of how to mimic the brain, or whatever. So I think 20 years is extremely far out
when it comes to the potential upsides, but also challenges, if we look at these technologies. So in the short term, I would agree with Hinton that we are speeding up, but I would
almost come to a statement to say that I'm much more aligned with Ray Kurzweil's predictions, looking at AGI by 2027 and some kind of technological singularity by 2045, that would be 20 years down the road. So I think, you know, if
technological progress is possible, which it seems, it's a matter of speed. How do you increase speed? You add smart people and you fuel it with a lot of capital. And this is what we're seeing today: enormous amounts of money. The brightest minds are moving from academia to corporate environments, and corporate environments drive that change. Alongside, interestingly enough, if you look at history, wars happen.
It's a big driver of technological progress. And now the geopolitical tension makes a lot of people invest in drone technologies, AI-driven combat systems, and so on and so forth. The investments now in defense and in technologies will most likely speed up the quote-unquote...
drive towards some kind of superintelligence. So yeah, I mean, a long answer to a short question, but it's a complex topic. And I think you just have to understand the differences between the limitations of large language models, the engineering challenges with computation and energy, and on the other side, the immense impact of these technologies.
You brought up Kurzweil and the concept of the singularity. If it's okay, we'll put a pin in that and come back to it. Something you said previously, I love this element of your philosophy, it's woven throughout a lot of your work, about the economy as a global operating system. It's a neat way to merge the concepts of technology and society. And one of the ideas that you espouse is that we could potentially create an economy that, instead of being focused on profit motives, is focused on societal well-being. I don't mean to be cynical, but it seems kind of naive. Is that possible given where we are as a race? Well, yeah. So I think the thing is here that the ideology of a society that would be driven by a
social ecological market economy is a result, an output or an outcome, of the input that I'm referring to. So let me explain what I mean by that.
So we have had a very, very strong focus on an ideology towards creating a better world or saving the planet or some kind of future where we tackle climate change.
First of all, it's very difficult to grasp all of these concepts, what they actually mean. You know, the climate, you look at everything, you cannot fix everything, right? And the other part of that is that, although I fancy engagement and I admire youth that take to the streets and have an opinion,
Greta Thunberg, and what we had in Germany, Fridays for Future, are quiet at the moment, and they did not really create the impact. So what is it that creates impact? That is actual progress in technology. And David Deutsch writes a lot about that in The Beginning of Infinity, and it's a philosophy and a way of thinking that has resonated very much with me. There was a very inspiring thinker back 10 years ago, who passed away way too early, Professor Hans Rosling of the Gapminder Foundation. And he spoke about not the naive optimism that the answer to everything is technology, and not the dystopian negativism, but about a possibilism. So we need some kind of re-enlightenment, if you like, a fact-based society where we understand
the implications of progress. And here, my argument is that if the operating system of society functions, so there is profit and growth, then there is something to divide. So if there's something to divide, we can make it more social, right? And the other argument is that if we understand that economy and ecology are synergistic,
then we will understand that reducing the marginal cost of energy through technological progress will lead to hyper-efficiency, and a hyper-efficient usage of resources would, in theory, lead to what we would understand today
as sustainable, right? So this is basically the argument: with a humane capitalism, driven by reason and a much better understanding of what technology could do, we would have hyper-efficiency. And we know that consuming and owning stuff wears us down, so an efficient use of time and any resource is in the interest of human beings. So this is basically the argument that I follow. It's not so much about having a social well-being economy. It's more about focusing on what technology could theoretically do and then emphasizing this. And I think that the word sustainability will increasingly be removed from our vocabulary.
And it will be all about efficiency. And if we tackle that, then we will also have a greener planet. And this is basically my argument for what I refer to as the quantum economy, as we mentioned before. That's exactly why I've been excited to have this conversation. I love everything about that philosophy. The thing I got to challenge you on is,
We are the current version of the human species that we are. And I say current intentionally, because that may change. But we're not rational actors. And we have just an innate tendency toward, call it, imperialism versus globalism. Everything is a zero-sum game. And unfortunately, I mean, you mentioned maybe there are some bizarre, you know, you said perverse, economic benefits of war. But, you know, there's a tendency to disagree as opposed to seek agreement for the benefit of all.
So weave this back into the threat of the singularity and humans' convergence with AI. Is there some version of human in the future that will embrace everything you just said, that will potentially create this version of the future that you envision? That's a very good question. And if I can play with that, I think that
if we had a system that would take the basis of what we as human beings understand as knowledge, optimized, packed information, then at least theoretically speaking, we could come to a conclusion on truth.
But if the will to truth gets lost, and this is the path that we are on right now, then it's a difficult challenge. So from that perspective, wiring us up to some kind of technology would
certainly be helpful. So you could envision any statement made by a human being being validated in real time and taken out, or error-corrected, in the moment of speech. You could go further and error-correct on the neural level, have some brain implants, and people would only rely on that base reality. So you would have some kind of world model of a common fact-based reality.
That has a lot of philosophical implications and challenges. But I don't think that there is a given state of a potential mensch, a human being in the future, that has this baked into it off the bat. But in our latest book, Florian and I explain
why we should think about an artificial human intelligence. Because what we think is that the current approach is to externalize and build a machine that we don't understand and do not know the
output and the implications of it. We want to have bliss, divinity, and immortality out of the machine, deus ex machina. We small, humble individuals want to create a superintelligence that is externalized from the human being. We think that the only promising path is to start with the human being, with the mensch, and then hack biology and chemistry, enhance the human from a biological entity, from life itself, and then enhance that with an artificial human intelligence. And we write at length about how that could be. It kind of sort of romanticizes the understanding of the human being, of the mensch: that there is something that makes us special, if you like, something that differentiates us from a machine. And, you know, if that is true...
I think the argument should be: we want to keep it. If it is not true, and we are based on information, we are a computer, then we need to at least have our conscious experiences of our reality aligned with a future that is worth striving for, if that makes sense. So basically, I don't think it is given that there is some kind of future human, some evolved version or a new mensch, human upgrade 2.0, that has any of this baked into it. I think it's hard work, a lot of philosophical thinking about what kind of future we want to build. And I think that's what it boils down to. I'm glad, though, that you're romanticizing that vision of the human that we could become. I think it's very eloquent.
You said something at the beginning of that answer that I just had to come back to. This kind of version of a new global operating system, a more enlightened economy, is predicated on, as you indicated, some kind of external objective truth. And here's a really complicated part of it. It makes sense when you say it, obviously, but who's the arbiter of truth? We're already, arguably, in a post-truth society, where it's really hard to objectively say what's true anymore. And I think that will continue to be the case in the future. So how do we determine truth, even if we were to say that's a requirement for this future that we aspire to?
What's the role of AI, and does it become the arbiter of truth? Right. No, I mean, I could speak at length about this, but I would say there is a potentiality of things that we can come to a common understanding of. Take a photo, an image that has been manipulated or written on.
So if we perceive reality as real, and we could not go back and change it based on our own perception, and we are not in some kind of simulation chain, so there is a reality that we as human beings perceive and all of us have some kind of consciousness, then at least in theory, we could conclude on some base ground reality,
things that we could hold on to. Either this image had a manipulation, a Photoshop or something that was changed, or it didn't, right? And this is something where, if you have all the data and you look at it, you could say: this is how it was, and this is how it was afterwards, or this is how you look at the image right now, and this is how it pans out. Do you see the difference? Can we talk about a common ground to stand on? There are quite a few things where we, at least without a lot of...
psychological damage, we could come to some common ground to stand on. But I think the base argument here is that we have detoured or detached ourselves from the will to truth. So we have an optimizing society with a binary way of looking at things: zeros and ones, your truth and my truth. And this got detached based on, I think, a very simple change in how we started to communicate.
So we lost a lot of trust in science and we lost a lot of trust in practice. And we got stuck in our own self-evident truth. And that's the binary way. It's the outcome of social media. If you want to simplify it, really dumb it down, you could say it's a thumbs-up, thumbs-down society. So we act binary. And everything that is rewarded in this society is reaction and not reflection.
So whatever I do, if I say something that is fully out of line and it's terrible and people feel offended by it, then the reaction, the quick reaction, the scandal, the headlines are much more important than the reason around that topic. And I'll give you a very concrete example of this. I spoke to a very good friend of mine, a founder of one of the leading advertising agencies.
He is now in retirement, and a genius at communication. There was a scandal in ski jumping during the world championship. This was in Norway, a values-based country; they don't cheat. And it turned out that they had cheated on their equipment. They had some suits that could fly a little bit longer; they were manipulating them.
The industry had known about this for years, and almost all countries are doing it, but they got caught. And that was in all the media, in a sport discipline that is dying, where people are losing interest in the discipline. So all of a sudden, ski jumping was all over the media.
Now, a couple of months later, if you go back, a lot of people have read about ski jumping. When the snow comes back in the winter, somebody will be interested in watching ski jumping, and they won't know why, because the scandal is wiped out. It was just a reaction, and no reflection on the foundational level. And this reaction is how we operate. So you have a new scandal every day. You have topics that have nothing to do with reason. And there are so many examples of this.
I think one of the strongest ones, that most can relate to, is the attempted assassination of President Trump that happened during the election.
There is very little reason applied to this situation, and there has been no conclusion about how it came to be or who the person was; there is no information about it, we just move on. So there's a reaction, left and right, zero, one, up and down, and then we move on. And this optimizing society eventually leads to division, because we optimize towards some kind of
either followers, likes, shares, or economic principles. And you see the economy is driven by reaction, not by reflection. And this is how the algorithm optimizes. And therefore, I think a lot of the challenge that we have is to get back to the will to truth, to get back to the art of being wrong, namely the philosophical aspect of life. Fascinating. So
building on that example of an algorithm generating something that's factually not true, let's say the late Pope Francis in a puffy jacket, right? Factually, Pope Francis did not wear a puffy jacket, but that combination of pixels that made it look like he did, at one point, that's real: an algorithm generated that. And so if you know that an algorithm generated a picture of Pope Francis wearing a puffy jacket, that's true, even though the Pope never in fact wore the puffy jacket. And likewise, to your point about reaction versus reflection, can you consider everything on that spectrum true at its atomic level, even though it may not be the best thing for society to have truth be a spectrum? Yeah. But I think that the challenge is, you know...
So we had a fatal information society where everything was chaos. You wanted to sort that out, you wanted to hand it over to algorithms, and we built the LLMs that we all have. So we were taught how to be knowledge agents, experts; that was our educational system. Our brains are wired not to question and not to reason, but to be an expert, right?
which is basically to close out education, to get a grade and to be something, to have a category and be defined as something. We are very acquainted with solutions, right? And we don't really solve anything. If we identify a problem, we have the potential to make the problem better.
So we can have progress, but we don't have a finite solution. And we were not trained to think like philosophers or to reason. We were trained to be exactly these answering machines, knowledge holders and experts. And therefore, I think, you know, the whole essence of what we have to look at is how we educate, starting with youth and kids first,
but also in executive education in the companies, that it's not about learning about facts, it's about learning how to learn. It's not about mastering the art of being right, it is about practicing the art of being wrong. So if there is an argument
You know, it's not to win the argument, but to generate progress. And if we understand that we are here, equipped with two opposable thumbs, to build better tools to make progress for humanity; if progress is our common denominator and our goal, then I think it could be also better for us to create a society of reason and not a knowledge society. So building on that concept of the future economy,
one of the things I think about a lot is that our work defines our worth in many ways. It's such a fundamental element of the human condition. And I think you'd likely agree that the future of work is different from the present of work. The notion of traditional jobs is probably going to change, maybe fundamentally. And I've got to ask you: what will replace what we
typically think of as work, defining our worth, when this new global economy emerges or, you know, AI intervenes and makes us more productive? How do we replace that part of the human condition? Yeah, I'll just circle back again to your introduction. And I think, you know, all these activities will be automated. So, I mean, the role of work was, you know, to provide structure, some kind of identity,
and we call it purpose, right? And then we start to divide work and life; we call work and life two different entities. And you see that today in countries that are more reactive and detached, right? People don't identify with their jobs, whereas in other countries, and I think the US is very strong here, a lot of people identify with their jobs and the tasks that they're doing. And I think that this is important because this is connected
with the relationship to how we live our life. So if our job or activity is a reaction, we would just react and function. We get worn out and tired and depressed. If it's an act or an activation, we do it intrinsically motivated and
then we give or put purpose into life and we are not just driven by the reaction. And I think this is the fundamental part that we need to understand: the activation of human beings becomes crucial, because if we just run around and react and function, then we're in that dystopian scenario. Not like the controlled, Orwellian 1984 dystopian nightmare; it's more like Brave New World, much older, from Aldous Huxley, or Neil Postman, who wrote a book back in the 1980s about how we are amusing ourselves to death
and the television came into the game. Now we are consuming and amusing, quote-unquote, ourselves to death without having any fun. So we have become reactive. It's an undead society and not an active society. So it's a kind of new sort of existentialism where we don't fight the finitude of life and we go about and we have an identity, but it's more about being undead.
It's like a state of philosophical zombies where we're not tuned in; we're just reacting to impulses and speeding that up. That answer says a lot about why the Scandinavian countries are consistently rated as having the highest quality of life in the world. It's because that's such a
different, as you know, such a different mentality from the way certainly Americans think about their role in society. So Anders, I told you we'd put a pin in that topic of the singularity, which is such an important one, and you've done some deep thinking about it, so I've got to get your perspective on this. Whether or not you agree or disagree with Kurzweil and kind of 2045 for the singularity, I think a more important question is
how will we know it when we're there? For those not immersed in the world of the singularity, the convergence of carbon and silicon-based life forms, what will be the indication that, quote, we've achieved singularity? This is a beautiful question, Dan. And I like that we went there; we could talk for hours about this. So, yeah.
An artificial general intelligence or superintelligence would basically match or exceed our cognitive capacities, so we would have some kind of huge superpowers where we know everything and can do everything. The technological singularity is basically when you have AIs creating AIs ad infinitum, where it evolves beyond our control and our comprehension. So it's basically that human cognition is no longer what is being optimized; it is detached to a machine. And this is, I've written about this, something that I call the final narcissistic injury of mankind.
So basically we have three narcissistic injuries that were proposed by Sigmund Freud. The first one was Copernicus: we thought we had this planet Earth at the center of a universe revolving around the superiority of mankind and our planet, right? And then we realized, well, we're in the corner of some infinitely expanding universe. And that was the first narcissistic blow. Out of that came a lot of progress and development and breakthroughs in science. Then Darwin came along and said, well, we're not God's creatures; we are humans in some evolutionary chain, evolving from some kind of, you know, animals. And that was the second blow, which led to a lot of progress again. And Freud came along with his psychological injury and said that, as you said before, we are not rational thinkers; we are driven by some kind of unconscious drivers of our behavior. And what I write about as the final narcissistic injury is basically that we take the idea of divine creationism and think that we as human beings can create some kind of humane creationism, where we can create anything.
And that includes, as I said before, that we could theoretically mimic our brains and rebuild the structures and so on and so forth. And in this scenario, where we think that we could create that externalized intelligence, we could end up replacing ourselves.
And not in the sense of, you know, robots going bananas and taking over, and Bruce Willis or Arnold Schwarzenegger has to rescue us, but more in the sense, coming back to that romanticizing of the mensch, the human being, that if there is something outside of the potential understanding of our foundational physics, or even outside of some kind of materialist view of the world, that is called consciousness, that is something that is for the human being, for the mensch, alone.
It is not given that we wouldn't overwrite it or lose it. So you could envision a scenario where we build a complete replica of Dan and Anders, and we would have the same conversation, but it would be perfect AIs. Every atom would be replicated; every neuron in our brains would be understood based on the neural functions. So we would have 86 billion neurons firing, but we would have gone down a path, kind of sort of like the Matrix scenario, where we merge the brain with a machine and it functions the exact same way, but there is no perception.
So the question you asked: when would we realize it? If the conscious experience, the experience of your own experience, the qualia, which is a pure nowism, it happens in the essence of what it feels like to be something, if that were replaced, would we notice anything?
So you would have Narcissus in the Greek mythology staring into the water, and the representation of this wonderful aesthetic entity would mirror in the water, but there would be no one there to perceive it. The lights would be on, but no one home. And this is the philosophical zombie scenario, where we could have an identical world to the one that we live in, but there would be no
conscious experience of it. And it isn't even dystopia, because there is no perception. It would be sad because, you know, the liveliness of life itself, the Lebendigkeit, what it means to be alive, to feel and perceive, I think that is what makes us human. So in a technological singularity,
this could be one outcome. I mean, obviously, we could also reflect on some kind of higher state of consciousness or some kind of spiritual realm where we sit in awe and have everything coming in, the right amount of dopamine and serotonin, or just beam ourselves away to some kind of other dimension. Those are all potential outcomes of such scenarios. And even if you look at it from a technology aspect: why get stuck in this body if we can transfer
or upload our minds into some kind of other forms. And these are all things that could be reflected on around a technological singularity.
That is the topic for a whole other conversation. Just about every one of these topics deserves its own dedicated conversation. This is perhaps the hardest of the 340-something episodes to cut short, but we're way over time, and I've got to ask you one last question before you're off the hot seat. I want to understand your personal journey.
So I mentioned you're an Olympic handball player. Clearly, whatever you do, you're exceptional at it. You're very ambitious, very talented. And now, having read a lot of your books, I think these thoughts of yours form one of the most interesting threads in current philosophy or futurism. What's the through line of your career? And who is Anders at your core? Well, I think it boils down to...
to being very curious. I'm very interested in understanding and learning stuff. And I have the privilege
of that. So when I built companies and did the sport and had the headlines, the whole shebang and everything, I didn't feel successful. And this became a very important topic to me. I didn't feel successful because I was driven by externalized goals: the comparison to followers and likes and headlines and everything you did. And I think that's very unhealthy.
So today I consider myself highly successful, but not because of fortune, fame, and gold medals; it's because I experience progress. The most fulfilling form of success to me is to experience that you have learned something.
And that's the biggest driver that I have. And I'm so privileged because I get to get up seven days a week and learn. And it's okay to have a healthy way of living. But I joke about this and say, you know, give me 10 extra years between...
30 and 50, or give me extra time now, because it seems like I'm in a hurry to learn a lot of stuff. And that has been my driver since my youth. So with everything I've dug into, on the corporate side and in sport, I was very keen to learn in depth,
and also to go back to first principles and understand. And this has also been the driver when people come back to me and say, I'm of a different opinion, Anders. And I say, I haven't spoken an opinion. I'm looking for the problem, I'm looking for the understanding, and I'm looking for progress, not to hold some kind of
hard opinion. I could take any stance on the reflective side. I'm an agnostic atheist. I could dance along any line of logical reasoning and rational structure. And that is how my brain has been wired, I guess, since I was a child. And I think today it's what drives me. I'm just very, very interested in learning. Not only does that speak to your personal journey, but for everyone listening who's maybe worried
about a future where your colleague is a bot, just replay that tape and listen to what Anders said. It's about being curious, deriving your own sense of fulfillment, and having a thirst for knowledge. And that ties back to the comments about the human condition and everything we talked about previously. It's a good way to end the conversation.
There's never been a better time to be on Team Human, because the world needs optimists and big thinkers and people who are curious and kind and love learning.
I would say possibilists, because that includes the striving for understanding. And, you know, there might be something after the finitude of life, some continuum or whatever. I'm open to all kinds of theories; just bring them to me in a plausible way. And also answer me this: why shouldn't we then make the most out of now? Even if there is a better place, this is still pretty cool. We can do a lot of stuff. And I think that's
what we get, I wouldn't say wrong, because I'm not the one to judge, but at least for me, life is a wonderful journey to nowhere. And I think that you have to fill your life. So coming back to the activation part, right? An active, intrinsically motivated task to do something. You have to fill your life in order to have a fulfilled life.
And this is the opposite of reactionism and the striving for a purpose and the search for meaning and everything we try to tie back to an answer, or to the finitude of a categorized, quote unquote, knowledge, or a category to stay in. So yeah, I think the point is to do things and to have that come from some kind of inner drive. I think that's very important in order to cope with these crazy and exciting times. Yeah, well said.
Well, we managed to get through almost none of the topics we had prepared, including the intersection of AI and quantum, which we were talking about before we started taping. So, you know what, you're just going to have to come back. I think this is part one of several.
I'll do that, Dan. I'm happy to come back. It was a good conversation. Yeah. All right. This was really a pleasure. Thanks for hanging out. And gosh, that's more than all the time we have for this week on AI and the Future of Work. As always, I'm your host, Dan Turchin from PeopleRain. And of course, we'll be back next week with another fascinating guest.