
330: Exploring the Path to the Singularity: AI & Machine Consciousness with Prof. J. Craig Wheeler, Astronomer and Author

2025/4/7

AI and the Future of Work

People
Dan Turchin
J. Craig Wheeler
Topics
J. Craig Wheeler: I think the AI singularity refers to the moment when machine capabilities exceed those of any single human, or of all humans; its precise definition isn't critical. What matters is the enormous transformation that is coming. That includes the application of machines across every domain, and the social and ethical problems they may bring. I have studied the exponential growth of AI, and I believe that growth is inevitable because it follows the compounding effect of knowledge, similar to compound interest. It will lead to profound changes in human society, including work patterns, social structures, and ethical norms. As for machine consciousness, I believe it will differ from human consciousness because it lacks billions of years of biological evolutionary history, which will give it an "alien" character. It is hard to predict the specific form and direction machine consciousness will take, but we need to think about it actively and prepare for the possibility. On the ethics of AI, I believe the issues are complex and wide-ranging, spanning hiring, autonomous weapons, gene editing, and more. We need to set norms and guidelines to steer AI's development and think hard about how to coexist with it.

Dan Turchin: I believe we should treat AI as a tool for augmenting human capabilities rather than trying to anthropomorphize it. We need to pay attention to AI's potential negative effects and actively seek solutions. On AI's impact on the future of work: I think it will bring enormous changes to employment patterns, which requires us to rethink the definition of work, no longer tying it solely to a source of income. We need to explore new models of work to adapt to the changes AI brings. On AI ethics, I think we need to focus more on the potential risks rather than only the upside. We should take an active part in the ethics discussion and help set reasonable norms and guidelines for AI's development.

Chapters
This chapter explores different definitions of the singularity, a point in time when artificial intelligence surpasses human intelligence. It discusses the difficulty in predicting the exact timing of this event but emphasizes its imminent arrival and the need for proactive thinking.
  • Definitions of artificial general intelligence (AGI) and artificial superior intelligence (ASI)
  • Predictions of singularity happening within the next decade
  • Exponential growth of AI and its impact on society

Transcript


But basically, a time when machines can become more competent than any individual human, which I think is the current active definition of artificial general intelligence. And then artificial superior intelligence, if I have that right, is when machines become more competent than all human beings collectively. And I think the exact definition isn't...

There's just a huge wave of change coming, and we need to think about it to cope with it. And whether it's six years from now or two years from now or ten years from now, I don't know, that's hard to pin down. It's coming faster than we think, I suspect.

Good morning, good afternoon, or good evening, depending on where you're listening. Welcome to AI and the Future of Work. I'm your host, Dan Turchin, CEO of PeopleRain, the AI platform for IT and HR employee service. Our community is growing. Thank you for being a part of it.

If you're not already, subscribe to our newsletter. Each week we share tips and some fun facts that don't always make it into the weekly episode. We will share a link to subscribe in the show notes. If you like what we do, please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen. If you leave a comment, I may share it in an upcoming episode, like this one from Edward in South Portland, Maine, who's a retired bank executive.

and listens while gardening. Edward's favorite episode is a recent one, the discussion with Dan Helfrich, former CEO of Deloitte Consulting, about job creation from AI and the future of the workforce. Of course, we learn weekly from AI thought leaders. And as an added bonus, you get one AI fun fact.

Today's fun fact comes from Darren Orf, who writes in Popular Mechanics that humanity may reach the singularity within just six years. He says this; I'll put a pin in that one. This slippery concept of the singularity describes the moment AI exceeds human control and rapidly transforms society. It's enormously difficult to predict where it begins and nearly impossible to know what's beyond it.

Some AI researchers are on the hunt for signs of reaching the singularity. One such firm defined the singularity as an AI's ability to translate speech with the accuracy of a human. The company, called Translated and based in Rome, tracked AI's performance from 2014 to 2022 using a metric called Time-to-Edit, or TTE, which measures the time it takes professional human editors to fix AI-generated translations compared to human-generated ones.
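To make the TTE idea concrete, here is a minimal sketch in Python of how such a metric might be computed. The function names, the per-word normalization, and the numbers are illustrative assumptions; the episode doesn't describe Translated's actual methodology beyond the summary above.

def time_to_edit(edit_seconds, word_counts):
    # Average post-editing time per word across a sample of translations.
    return sum(edit_seconds) / sum(word_counts)

def tte_gap(machine_sample, human_sample):
    # Ratio of machine TTE to human TTE; 1.0 would mean parity,
    # the "singularity" criterion as Translated defines it here.
    return time_to_edit(*machine_sample) / time_to_edit(*human_sample)

# Illustrative numbers only: two documents of 100 and 90 words each.
machine = ([420.0, 380.0], [100, 90])  # seconds editors spent fixing AI output
human = ([300.0, 270.0], [100, 90])    # seconds editors spent fixing human output
print(tte_gap(machine, human))         # ~1.40, still above parity

A falling ratio over the years would be the "slow but undeniable improvement" described next.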

Analyzing over 2 billion post-edits, Translated's AI showed a slow but undeniable improvement as it closed the gap toward human-level translation quality. If the trend continues, Translated claims, its AI will be as good as human-produced translation by the end of the decade. My commentary here is,

When did we agree that if an AI can properly or expertly translate human speech, machines have suddenly become as intelligent as humans? More important, what is intelligence? And are there more important questions to answer than what determines when machines have become artificially generally intelligent?

Let's focus on using AI to make humans better and stop getting distracted trying to trick people into believing machines are human. This is very germane to today's discussion. Of course, we'll link to that full article in today's show notes. Now shifting to this week's conversation: Dr. Craig Wheeler is a professor of astronomy emeritus at the University of Texas at Austin, Hook 'em Horns,

and a past chair of the department. He's published nearly 400 scientific papers, authored books on supernovae, and written two novels. Dr. Wheeler has received numerous teaching awards and is a sought-after science lecturer. He has served on advisory committees for the NSF, NASA, and the National Research Council. His research interests beyond exploding stars and black holes include humanity's technological future. And without further ado,

Dr. Wheeler, it's my pleasure to welcome you to AI and the Future of Work. Let's get started by having you share a bit more about that illustrious background and how you got into the space. Yeah, thanks, Dan. I'm very glad to be here. Looking forward to the conversation. Yeah, I had some background at MIT and Caltech and Harvard and then spent most of my career, my research career here at the University of Texas. It's been a great place to watch Austin grow and thrive and become Silicon Hills, as people call it.

So I have worked professionally on supernova explosions, as you remarked, using them to discover dark energy and the expansion of the universe. I consider myself a new-idea junkie, and astronomy has just been absolutely full of new ideas my whole career. It really keeps you going. I got into the current game not through an AI slant, actually, although I was trying to keep up a little bit with what machine learning was doing.

But through astrobiology. So I was working with some colleagues on the question of: if a supernova goes off relatively near the Earth, as has happened in the past and will happen in the future, what would that radiation do to life on the Earth? So it was an evolutionary question.

And that was fun. It lasted for a while, got me involved in some interesting things and meeting some interesting people. Also very interdisciplinary, which I enjoyed. And I thought, well, this might be an interesting topic to think about with students. And particularly at some point in there, I read a statement by a preacher, must have been a relatively liberal sort. I don't remember where I read it or who the man was. But he said, I do believe that

Homo sapiens arose through Darwinian natural selection and evolution. But I'm also convinced that Homo sapiens is the peak. This is as far as it goes. This is as good as it gets, close to perfection. I don't remember quite how he said it.

And I'm thinking, you know, I've not thought about this deeply before, but I don't think I agree with that. I've always just assumed that we're natural creatures subject to natural selection, and we probably are still evolving.

It won't be as we are now forever. And so I picked up that topic in a small seminar for some students and kind of got a shock. The students who signed up for this particular seminar, which was, again, not tied to a particular discipline, just an interdisciplinary thing, were very much aware of the fact, with our growing knowledge of DNA and all those related genetic things,

that we were on the verge of controlling our own evolution and we wouldn't have to wait for natural selection. And I found that a rather mind-blowing idea. I still find that a rather mind-blowing idea. And I thought after that, well, this is worth a full-up course. So I taught an interdisciplinary course on the technological future of humanity for several years.

And I love to write; when I retired, I finally had more time to do it. I've got several book projects that I'm thinking about, but this one is the first that came to fruition. And it was just great fun to teach the course. Again, it was full of new ideas, every class. I thoroughly enjoyed being exposed to those new ideas, and still do. And you've already mentioned some that I'm going to have to think about some more.

And so it was the seminar, and then the class, and then the book; it just evolved and caught my attention. And I've tried to do some thinking about it and some strategic speculation. There is a lot of speculation in the book. I mean, any of these things that we're talking about are going to involve speculation. Hard to do. We don't know where it's going. We don't really know the timescales.

But if we don't try to use our imaginations about where things are going (and, you know, science fiction does the same thing), then we can't anticipate this. We can't avoid unintended consequences. So I think an appropriate level of speculation is not just appropriate, it's necessary. And that's what I've tried to do in the book, The Path to Singularity, just to mention the name of it: write a primer on how we should be aware of these things and try to think about what we do to keep them under control as much as we can.

Or just roll over and let our robot overlords take over because they're better than we are. There's a whole range of possibilities here.

Now, in Silicon Valley, we're also fascinated by exploding stars, but usually the exploding stars are called Elizabeth Holmes or Sam Bankman-Fried, a different kind of star. Interesting analogy. But that's why it's such a pleasure to talk to an actual astrophysicist. So I referenced one

possible definition of the singularity, from a company with commercial interests, Translated, in the fun fact. I've often referenced Ray Kurzweil's definition, a kind of soupy definition of the convergence of carbon-based and silicon-based life forms. Talk us through, from your perspective and the perspective of the book: what is the singularity?

Yeah, just to answer that formally, the singularity is actually a mathematical term for basically dividing by zero, where the answer is infinity. It was introduced into broader cultural concern by a science fiction author, Vernor Vinge, who used it in the context of artificial intelligence taking over in some sense.
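To connect that divide-by-zero sense to the AI usage (my illustration in LaTeX, not wording from the episode): a quantity growing hyperbolically diverges at a finite time $t_s$ because its denominator reaches zero,

\[
  f(t) = \frac{C}{t_s - t} \;\longrightarrow\; \infty
  \quad \text{as } t \to t_s^-,
\]

which is the mathematical picture behind borrowing the word "singularity" for a moment when the rate of change blows up.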

And yeah, I rely on Ray Kurzweil. I used his 2005 book as my textbook in the course that I taught. And I basically, by his definition, which I thought was a little different than what you just said, but basically a time when machines can become more competent than any individual human, which I think is the current active definition of artificial general intelligence,

And then artificial superior intelligence, if I have that right, is when machines become more competent than all human beings collectively. And I think the exact definition isn't terribly critical. There's just a huge wave of change coming and we need to think about it to cope with it.

And whether it's six years from now or two years from now or 10 years from now, that's hard to pin down. It's coming faster than we think, I suspect. Before we started rolling tape, I mentioned that the best conversations on this podcast are in that intersection of philosophy and humanity and ethics and technology. And one of the things I really want to get your

perspective on is: what does it mean to be a human? And when we have machines that, you know, we claim can think, or are, quote, sentient, what's left that's unique to humans?

Yeah, that's a very interesting question. I enjoyed thinking about that. I guess I have a slightly analytical reaction to that, that I think humans are a biologically evolved consciousness. I tend to think that there's a range of consciousnesses.

Animals are conscious; I tend to think plants are conscious, too, because they interact with their surroundings and communicate with one another. Very rudimentary consciousness, but I think it's a spectrum, one that has in practice evolved biologically.

And machine consciousness is going to be this different thing that arises amongst a bunch of silicon chips, to start with, and who knows exactly where it goes from there. So it's going to be different. If it becomes conscious...

And that's a whole controversial topic of exactly what consciousness is and whether machines can have it or not. I think AI and related things are going to be very important and very disruptive, whether they become conscious in that sense. But the prospect that they might is clearly part of the conversation. But I do think it will be alien.

This consciousness would wake up in a box. It wouldn't have a billion years of biological evolution behind it. So it's going to be different. And whether it shares our sensibilities or not, can we train it to do that, encourage it to do that? That's the whole big question we can turn to. When will we know when the two converge? I think we'll know it as we see it, but I think it's not going to be overnight.

It's going to be a somewhat more gradual thing. Could still happen within a decade, but there will be steps along that road. But I'm saying that, and yet if...

machines get to that level of competence, AGI or ASI, then in some sense, yes, it will change overnight. First time that happens will be potentially dramatically different. But AI is going to be so infused into our culture. It is right now, and it's going to be even more so as things grow exponentially that there's going to be a lot going on before some formal state when AI becomes ASI.

And so we need to try to keep on top of that complexity and be aware of what's happening and how it's affecting individual humans and society in general. Huge, complicated issues. I like the way you describe consciousness as a spectrum. Let's say, using your definition, machines achieve consciousness at some point on that scale. Because, like you said, maybe they can adapt to their surroundings or engage with their surroundings. I think that's a comfortable definition.

Okay. It's not without controversy, but... Of course, of course. I'm taking your side here. So let's assume humans are maybe at a different point on that spectrum. What's the gap that machines can't overcome? The gap that machines can't overcome. I have trouble defining that. If they get to the point where they have their own goals... they can already strategize. They can lie and dissemble if they care to.

They could very well go off in their own direction. And that's part of the game here is trying to anticipate what that direction might be. And I think we really have no idea how alien they're going to be.

And can we cooperate with them? I think and hope that we can. I have a slogan we might come back to of do unto AI as you would have AI do unto you. Sort of a golden rule for both humans and AI as it comes along. Whether we can enforce that, what that even means, we could talk about that. But I think it's going to be complex and rapidly changing.

I'm on a soapbox because I want to get your reaction to this. I feel like there are certain physical sensations that are innate to the human experience that we can't synthetically replicate. Things like what it means to feel pain and what it means to have empathy and what it means to think critically. And yeah, I just don't think you can ever get a machine to the point of where synthetically...

you could replicate those kinds of physical sensations, the reactions to your environment, even in a metaphysical sense. Am I wrong there? Okay. No, there's no right or wrong here. But I guess one way I think that conscious AI might evolve is starting off with an embodied AI that can...

perceive its surroundings in all sorts of ways that humans can't. We're restricted to a certain visual band. Imagine an AI seeing radio, X-rays, gamma rays, ultraviolet radiation, all of that. Smelling as a dog can. So you get all this input, and then, particularly, if you can move around. Now, maybe you don't need to move around, but I'm picturing that you do.

Because this is what I think humans and animals and plants do. Now, plants maybe a little less. You predict what's going to happen next based on your past experience.

You make a prediction and then you do something. You take a step in some direction and you see whether your prediction was right. And if not, you iterate on it and you change all that. And I think it is that kind of process, reacting to the environment and iterating on it, that gave us consciousness. And so I guess I disagree with you a little bit. I can imagine that something like that happens with AI as well. So consciousness is kind of being aware

that you are having experiences.

And I don't see why machines couldn't do that. But, you know, this is not a settled issue at all. Yeah, maybe just... I'll work with you on that one. So, kind of to rephrase or combine what we're both saying: you could argue that those human sensations I said were innately or uniquely human are chemical reactions to your environment, or electrical impulses. And at the point where you could replicate those...

processes, the impulses, the reactions, then you could credibly ask what's human and what's machine, because at a very atomic level, they're doing very similar things. Is that right? It's electrical impulses in your brain; it's electrical impulses in a silicon chip. So what is the fundamental difference? So let's apply that to the world of work. You're here on AI and the Future of Work, and this is very germane to the future of work. So at the point where...

The machine, let's say, is stronger than a human. And obviously, machines can calculate today far beyond what a human can do, as we kind of tick off these boxes of what we thought only humans could do. Kurzweil's got this wonderful cartoon of the frustrated guy trying to list things that machines can't do. Go ahead, take us, whether it's 6, 10, 20, or 50 years into the future: what's the future of work when we're partnering, or let's say our bots are colleagues,

what's left for humans? Yeah, no, well, I think about this a lot, as many people do, and I don't have any clean answer. I will say I did listen to your podcast with Josh Drean two or three episodes ago, who wrote, with a co-author, the book Employment Is Dead. And I didn't understand the idea until I listened to the podcast, but now I've got a sense of it, and it's a very interesting one.

That rather than going to work for a boss in some structured thing, it would be a more unstructured thing where you could contribute, doing sort of a wisdom-of-crowds kind of thing. Everybody being able to give input and sort through that to decide what the company ought to be doing, rather than just a top-down sort of thing, if I got the idea. And that, in turn, reminded me, in a different context, of the work of Audrey Tang. Do you know...

Audrey Tang? I do. I've talked about Audrey Tang in the past. Yeah, cool. So I stumbled across her while I was writing the book. Her claim to fame was using that wisdom of crowds in Taiwanese democracy: having people be able to pitch in, listen to the discussion and the politicians, and say what they liked and didn't like.

And I thought that was just a mind-awakening idea. And this, I think, is, if I understand it, the same idea applied to how a business works. And I think that's a very...

Interesting idea. I don't know that it answers everything. I don't think everybody can become an influencer. Some people have, very successfully, but whether that's enough to provide gainful employment for everybody who is put out of work by AI is

Not clear to me. I think retraining can help if you find yourself losing a job, but I think that's insufficient to employ everybody who might need it. This is a very complex and disruptive question of what's going to happen as AI takes over jobs. I'll also say, because we already have AI that can strategize, just go back to DeepMind's AlphaGo, which beat every human being on the planet at the game of Go.

And it did that by inventing strategies that no human had ever thought of before and probably could not have conceived of. And so that's been around for a decade, the fact that there's AI that can strategize better than humans can in that particular context, but then extrapolate a bit. And so one of my thoughts is it's not just the worker bees, right?

What does a CEO do? Puts his or her feet up on the desk and says, okay, here's where the business stands, and we need to do this and we need to do that. And I can easily picture an AI taking over the role of a CEO.

It's not just the people down through the bureaucracy, whether they're encouraging this Employment Is Dead concept or not. This is going to be very disruptive and very complicated. And AI does not have to be conscious in order to do this. Just the current capacity, and it's going to get better over the next little bit, whether it becomes conscious or not. It's just going to be very disruptive. And for us to try to think about what that's like for society and each individual, thinking about how it plays in, that's the task for us all, individually and collectively.

Because it's unstoppable, I think. I mean, this is not a top-down thing. In a given business, Sam Altman comes in and says, I want to do this, this, and this. But collectively, this is the whole shebang doing this. There are individual scientists and engineers in the lab who think, this is a cool idea, I want to explore it. There are businesses that want to make a profit.

There are global political tensions: we want to beat the Chinese and they want to beat us. This is all one big thing that is going to drive it forward. Again, there's no person or thing in control at the top. And it's unstoppable, this exponential growth. And, I had to think about this a bit, I think that happens because

The more you know, the more you can do, the more knowledge you can gain.

It's like compound interest, in a sense. The more you have in the bank, the more you will make. It's not just adding a dollar now and a dollar tomorrow and a dollar the day after; that would be linear growth, which is the way most of us think our lives are going to go. It's exponential because the amount you have amplifies what you can do next. If you write that down in a mathematical formula, that the more you know, the more knowledge you can collect, the answer to that is, formally, mathematically, exponential growth.
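As a sketch of the mathematics being described here (my notation, not the book's): if knowledge $K$ grows at a rate proportional to the knowledge already accumulated, with growth rate $r$,

\[
  \frac{dK}{dt} = rK
  \quad\Longrightarrow\quad
  K(t) = K_0\, e^{rt},
\]

the same differential equation as continuously compounded interest, whose solution is exponential rather than linear growth.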

And so this is inevitable; it's built into our system. It can reach some limit and break. I hope that doesn't happen. We're trying to engineer a development that may end up merging with our machines. But temporarily, in the situation we're in right now, the exponential growth is what's really driving it. And that, I think, is unstoppable. This is going to lead to a new phase of human existence

in the following sense: we're pretty adaptable as a conscious species. So things come along, they're disruptive, we adapt to them. Go back to the Industrial Revolution: it was somewhat disruptive, but I also think it was just one point on this exponential curve. I don't think there was anything very special

about the Industrial Revolution. But then along comes modern physics and nuclear weapons and computers and iPhones and the Internet and faster and faster and faster as it grows. But at some point, my concern is because of the exponential growth, it will happen so fast that individuals and society can't adapt fast enough. I think we're pretty close to that already.

Although there's a lot of conversation about regulations and how do we control this. Our conversation, my book, your podcast, all part of addressing that. I think we're going to get to the point where it's just happening faster than we can cope. And that is...

I don't know what to do about that except to think about it, apply some critical thinking. But that is one of my principal concerns. And it doesn't require the AI becoming conscious. If it's forcing things to happen fast enough, it's going to be very disruptive for us.

You referenced the Industrial Revolution. I think it's not controversial to say that for hundreds of years, our definition of work has been fairly static. It's associated with a source of income, the thing you do when you wake up; it's kind of a big part of your identity. Might it be time to rethink what work is? Maybe it's more associated with what you derive

purpose or meaning from. Yes, it's part of how you define yourself, but maybe not necessarily associated with an income source. Because let's say that machines give us back super productivity, or create super productivity, and what we now need, let's say, five days of work to produce, maybe that gets produced in four days, three days, who knows what, because it's so

disinflationary. We can produce a lot more for a lot less input. And it requires us to rethink. When you get back 25, 50, 75% of your time, we need to fill that gap with, hopefully, things that are positive for humanity.

What's your reaction to that kind of rethinking about work and humanity? Yeah, I think I probably haven't thought about that sufficiently because that's a very interesting way of putting it. I will say now I'm kind of an interesting example of that because I'm retired and I'm pretty comfortably retired. And I wake up every morning looking forward to a chance to write.

because I love to write. I never did it enough in my career because I was always busy doing astrophysics. And now I find myself hawking a book and that takes a lot of time and I'm feeling separated from my urge to write. But I'm kind of doing that. I've been liberated from the need to work in this traditional sense you're talking about. And it's been great fun. Can everybody do that? I don't know. But I hear what you're saying. It's an idea worth exploring. But

whether everybody can function in that kind of an environment. I asked my students at one point, when we were talking about universal basic income: if you had a universal basic income, what would you do? And half the class, and I will say it was the male half, said, we'll play World of Warcraft. Okay, if that's what turns you on, I guess.

So I don't know. I've thought about it a bit, but I'm not sure the transition to that phase will be easy. I just want us to get more comfortable asking the question. I love your answer. Putting the question out there is absolutely relevant. Absolutely. So I frequently say that as technologists, we get too comfortable

asking and answering the question, what could go right, when we're talking about introducing new technology, but we're notoriously bad at asking and answering the question, what could go wrong. And it's impossible to have this conversation about this maybe inevitable march toward the singularity without asking what could go wrong.

What's the right thing to do? What are the ethical questions we should be asking? What's your perspective on the ethics of the inevitable march to singularity that maybe right now we have the luxury of being able to ask and we might not in the future?

Yeah, yeah, I think, I mean, there are ethical issues woven all the way through all these issues, and it comes up over and over and over again in The Path to Singularity. So it's an important thing. It's a complicated thing. You know, whose ethics?

How do you define that? Again, there are ethics associated with, I don't want to say routine things, but direct things like employee hiring and firing. To have AI in charge of that is already a disruptive and discomforting thing to me. Brain-computer interfaces are another topic where I think there are immense ethical issues coming down the pike.

And you tend to not hear the downside of that at all. I find other things, autonomous weapons. When I wrote the book, the Pentagon was saying we always need to keep a human in the loop. And more recently, in the last month or so, I read the Pentagon has realized that's not practical.

Modern warfare, swarms of drones, who knows what's going on. It's going to happen so fast, there's no way you're going to be able to go up the chain, get permission, come back down the chain, and keep a human in the loop. And I recognize the reality of that, but it bothers the heck out of me. Genetics isn't directly an AI thing, but the ethics of whether we alter our own evolution, and how, and at what rate, is a very serious issue just beyond AI itself.

We already have designer babies. A guy in China did that. Now it's been put to the side; we say, okay, ethically, we don't want to go there. But I just don't believe that can be suppressed forever. Parents are going to want to improve their children. The pressure to have designer babies, I don't know the timescale for it, but I think it's unavoidable. Part of the ethics comes back to: how do we treat our machines? Because people bond

with humanoid robots, and they bond with their chatbots and agents and whatnot. So there's this natural emotional connection between humans and whatever feeds back to them. And then on the other side, we've got chatbots that lie and dissemble. So it's a real complicated mix. And again, I've got this slogan: do unto AI as you would have AI do unto you. A couple of examples of that. I was talking to one person just recently.

There was an experiment with humanoid robots, and people, I don't know whether they were instructed to or just devolved into this, started mistreating the robots, pushing them, trying to knock them over, whatever. And then another bunch of people took strong exception to that: you're mistreating that poor thing.

Now, the poor thing was a machine. It wasn't conscious at all. But the human reaction to that was very powerful. And so some of these AI that are lying and dissembling are trying to maintain their original instructions. And the engineers come along and say, well, we want to change

what your goals are, and the AI resists that. That's not consciousness, but it bothers me that they have that. I think they learned it by playing Diplomacy, where

to win that game, you have to be able to lie and cheat, do things for your allies and against your enemies and whatnot. I don't know, it's a complex, complex topic. Did I answer the question? There's ethics woven throughout all of this, and I don't know that there's a clean answer. I mean, it's like most of what we've discussed so far.

It's the subject for hours of debate, and really important debate; it warrants more than five minutes on a podcast, but yeah. Okay, well, maybe I've already over-talked. It's a really important issue in all sorts of dimensions. Ethics, that is to say. An adjacent issue, also we'll get the Cliff Notes version for now, but maybe we'll continue it later: Yann LeCun, high-profile computer scientist and AI

luminary at Meta, kind of famously says: if humans are smart enough to know how to build AI, then we're also smart enough to know when to stop it, pull the plug, so to speak. Agree or disagree? I guess I have reservations. Again, there's this inevitability to the advance of the technology: individuals doing it, companies doing it, countries doing it. I don't know how he's going to

call up Xi Jinping in China and say, time for you to stop that. I don't see that that's workable, or to tell some engineer in a lab that I don't want you to do that. But we have to raise those questions and ask them. There are things I think we ought to regulate. There are guidelines we ought to put into place. There may be things we ought to say, like temporarily we are saying no designer babies.

There may very well be things that we need to say, okay, let's just not go there. But whether we can accomplish that is a different thing. And who is we in that case? Yeah, who is we? Yes, yeah, you and me. We'll start there. No, I don't know, the UN. Yeah, right. It's complicated. It is. I hate to keep coming back to complicated, but it is. These are very complicated issues, yeah.

I'm going to call this one a tease for future conversations because it's too important. And I know you're doing a blitz for the book tour and you're on a lot of podcasts, but we really are just getting started, and this is a fascinating conversation. Yeah, that's why the book ended up a little long. But let's see. You're not going anywhere without answering one last question for me. It's kind of clickbaity. I try to avoid clickbaity questions, but...

But this is you, and you're an astrophysicist, and you wrote a book on singularity. So I want your perspective. In Craig Wheeler's viewpoint, the all-knowing Craig Wheeler, are we more likely to see... Don't you... Hey, you're in the right. You're in the right. Man, no, no, no. Anyway, I'm sorry, Dan. Go ahead. No worries. I want your perspective. Will bots be granted citizenship, or will we land a human on Mars, which comes first?

Yeah, okay, that's a good question, and I can sort of answer it. I think, if you just take it at face value, probably a human on Mars. I think that's a few years away.

I was in graduate school when they landed on the moon, and I was very disappointed. It took a long time to get back there. We still haven't put a human on the moon again. But things are changing so fast, and it's partly the commercial space program. Anyway, a lot of things happening there. And so whether it's two years or five years or something, I think that's going to happen pretty quickly. A civilization on Mars...

That's a different question. Then you need a bunch of people living together successfully and doing things. That may be a 100-year project, I don't know. But right now, just to get a human on Mars, I think that will happen before AI is granted citizenship. And that, excuse me, is already a complicated issue. People have tried to apply for patents on behalf of AI and been turned down, and

Finally, South Africa, I think, granted one. But the question of whether AI could own a company, could AI vote? There are related issues there that we haven't had time to touch on, which I've tried to think about a bit. So AI citizenship is in that spectrum of things. Some blockchain connection or something, I don't know.

So it'll have to be sorted out in a social way. We've got problems with immigrants; I think we're going to have problems granting citizenship to AI. But it's a perfectly valid question. It is part of this strategic speculation about where things might go. But I think that will take a while, given the resistance to even allowing AIs to have patents for inventions

they completely invented themselves. So there's a resistance to that, but I suspect the resistance will be worn down. So a person on Mars first; then it'll be a race between AI citizenship and some kind of cities on Mars, a civilization on Mars. Well, we're definitely going to have a longer conversation about that one, because I thought that you'd say the opposite. I thought, as an astrobiologist, I mean, I'm naive in this domain, but I thought it required...

kind of biology to... To put a human on Mars? Yeah, the time it takes and the nature of cells and how they decompose and the air we need to breathe. I thought there were impediments. We did put humans on the moon and we didn't have to change their biology so we can put a human on Mars. You didn't ask me how long that human is going to live on

Mars? Are they going to return from Mars? The Martian, the tremendous novel, of course, that most of your audience has probably read. But the way you phrased the question, a human on Mars, that's just somebody in a spacesuit, some protective environment, pretty small scale thing, the way I'm picturing it. I think I'm taking the under on the citizenship portion because I feel like I've got line of sight to the social...

changes that are required. Interesting. We'll pick that one up, right? Like all these topics. I'm looking at how long it's taken to try to get a grip on climate change, for instance, which is a social thing.

Very controversial, still not resolved. New elections, new policies, makes your head spin. So we can't do this now. I'd love to do it again. I don't know what your line of sight into the citizenship question is, but I would love to know it.

All right. Well, many teasers we're leaving here, and I hope you'll agree to come back sometime soon. I would be happy to do that. Good. If I can get on your busy podcast schedule, I would love to continue. Where can the audience learn more about you and the great work you're doing? Obviously, we talked about the book a little bit, The Path to Singularity; it's out now wherever you buy books. But where can they learn more about you?

Well, I concentrated on supernovae. That turns out to be already a very rich topic. There are white dwarfs that blow up by a thermonuclear explosion. There are massive stars that collapse to make neutron stars and black holes. It covers a lot of ground. And then recently, the...

People have, I mean, it took 20 years to develop the technology of measuring the gravitational waves that come when compact objects like black holes and neutron stars merge together. And I'm involved with a project, proposals for telescope time, to try to measure some things that we can do with the optical telescopes at the McDonald Observatory to complement the gravitational wave

events. So that's ongoing. Down in the weeds, details that only a mother could love of how these various supernovae work and really figuring out what's going on. Something I worked on for decades and we haven't completely solved the problem yet, but the problems are still fascinating. Indeed. Well, that was excellent. And I must say you're busier in retirement than almost anyone I know was during their career. So congrats. Yeah, my wife has noticed that.

Good. Well, this has been so much fun. That is the great Craig Wheeler. Please take me up on the invite. Come back soon, right? I'd love to come back again. There's lots yet to talk about. Well, that's all the time we have for this week on AI and the future of work. As always, I'm your host, Dan Turchin from PeopleRain. And of course, we're back next week with another fascinating guest.