This episode is brought to you by Peloton. Everyone has a reason to change. Growing old, heartbreak, a fresh start, whatever it may be. Peloton is here to get you through life's biggest moments with instructors that speak your language and workouts that move to your own rhythm. Peloton's Tread and All Access membership help you set your targets, track your progress, and get stronger, making your fitness goals a reality. Find your push. Find your power. Peloton. Visit onepeloton.com.
Support for this podcast comes from It's Revolutionary, a podcast from Massachusetts 250. For generations, Massachusetts has been a force for change. Stick around until the end of this podcast to hear how the fight for freedom found its voice in Massachusetts, the heart of the abolitionist movement. WBUR Podcasts, Boston.
This is On Point. I'm Deborah Becker, in for Meghna Chakrabarti. Artificial intelligence, essentially where machines do things that require human smarts, is not only here to stay, but it's growing exponentially with the potential to completely transform society. So the world's tech leaders are in a race to try to harness the power of AI, and most of them insist that it's going to benefit all of us.
Take Amazon founder and CEO Jeff Bezos. There's no institution in the world that cannot be improved with machine learning. Or Apple CEO Tim Cook. I have a very positive and optimistic view of AI. Optimistic in part because it's believed that the world's first trillionaire will be the person who masters AI and uses it to improve various aspects of life and work.
from performing mundane tasks that we might rather avoid to actually extending our lifespans. That's not to say there aren't concerns about this, though. Neuroscientist and philosopher Sam Harris thinks AI poses an existential threat. One of the greatest challenges our species will ever face. And the Nobel laureate and CEO of Google's DeepMind Technologies,
Demis Hassabis is at the forefront of AI development, and when Hassabis spoke with Scott Pelley of 60 Minutes this month, he touted what he sees as enormous benefits from AI, but he also acknowledged that artificial intelligence, and specifically artificial general intelligence, or AGI, raises some profound questions.
When AGI arrives, you know, I think it's going to change pretty much everything about the way we do things. And it's almost, you know, I think we need new great philosophers to come about, hopefully in the next five, ten years, to understand the implications of this. Concerns like this are not new. In 1965, mathematician and computer researcher Irving John Good wrote, quote,
The first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. Our guest today argues that we need to take steps now to control this technology before it controls us.
Christopher DiCarlo is an ethicist and professor at the University of Toronto. He's been writing about AI and its effect on humanity. His recent book is titled Building a God: The Ethics of Artificial Intelligence and the Race to Control It. Christopher DiCarlo, welcome to On Point.
Thank you for having me. So we're going to put aside the God question for a little bit, the big question, and let's just start with some of the specifics on this race to develop AI. Why, in your opinion, is it a race, and who are the players here? Yeah, so good question. The players are the big tech bros, so, you know, you've got...
Zuck, and you've got Sam Altman. Mark Zuckerberg? Yeah. Okay. And Sam Altman at OpenAI. You've got Demis Hassabis at DeepMind. You've got Dario Amodei at Anthropic. Yeah.
You know, there's some Microsoft work there happening as well. And, of course, Elon would love to be a part of that race as well. So is it all the U.S.? And what are we talking about? Isn't it the U.S. and China? Isn't there a global race going on here in terms of artificial intelligence? We're talking about a lot of money too, right? Why is the race important?
Yeah. So we don't know to what extent China is working towards AGI. We do know that they are highly competitive. They're working on getting their own chip factories up and going, right? But to what extent they're on par with what the U.S. is doing, we're not quite sure. We're not even sure that they care about AGI, but...
And the odds are that something is happening there as well. But right now, the U.S. seems to be leading the race. And why? I mean, we did say at the top that there have been some who've suggested that whoever masters artificial intelligence or AGI specifically will become the world's first trillionaire. Is that really what's going on here? Is it money? Is it power? Is it both? Yeah.
Both, for sure. You know, it's always that bottom line, right, of dollars because, you know, OpenAI and these major, major big tech companies have a lot of money. They have investors pumping money into their organizations in the hopes that they're going to produce something big. Well, the next big thing is AGI. And really the first one to get there will be, you know, as Sam Harris has said, 50 years ahead of their competition. Yeah.
And so why is AGI the next big thing? Explain to us, what's this going to do that's going to be so transformative that the world is going to just jump on this and create a trillionaire? Yeah, for sure. So let me clarify very quickly. There are three types of AI: ANI, AGI, and ASI. ANI is what we use today, artificial narrow intelligence. If you've used
MapQuest or any kind of GPS, if you've talked to Siri or Alexa, if you've had a Roomba or even an autonomous vehicle, that's all artificial narrow intelligence. So basically it functions according to its algorithms and it's not going to do much more than that. Your Roomba isn't going to demand to move to France and become an artist. It's always going to do what it's programmed to do. AGI is the next level.
That's when it becomes agentic. It becomes an agent unto itself. It has a certain amount of autonomy or freedom. And it will think like us. It'll think like a human, only about a million times better and more efficiently. Now, ASI, that's the artificial super intelligence. And many of us in the AI risk game believe that once we get to AGI, it won't be much longer after that before it could develop into something
extremely powerful, because it uses something called recursive self-improvement and reinforcement learning, which means it only gets better. As Sam Altman has said, AI is the dumbest it's ever going to be right now. So it's going to continue to improve upon itself. And if we hand over the reins, right, like we say, okay, humans have done enough in trying to figure out how to make this stuff better. Let the machine figure it out.
Well, if that happens, we have no idea what's going to happen. None of us have any idea what's going to happen. Maybe it's controllable. Maybe it's not. Maybe we can contain it. Maybe we can't. Maybe it misaligns with our values and does things to harm other people. We really don't know at this point. We're at a very unique time in history.
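To make the feedback loop DiCarlo is describing concrete, here is a minimal sketch, in Python, of the "keep only improvements" structure behind recursive self-improvement. Everything in it is illustrative: `evaluate` and `propose_revision` are hypothetical stand-ins for whatever capability metric and self-modification step a real system would use. The point is only that a system that scores candidate revisions of itself and keeps the better one ratchets monotonically upward.

```python
import random

def evaluate(params: list[float]) -> float:
    # Hypothetical capability score for a configuration of the system.
    # (Illustrative stand-in: higher is better, peak at params == [3.0, 3.0].)
    return -sum((p - 3.0) ** 2 for p in params)

def propose_revision(params: list[float]) -> list[float]:
    # The system proposes a small random change to itself.
    return [p + random.gauss(0.0, 0.1) for p in params]

params = [0.0, 0.0]            # the system's current "self"
score = evaluate(params)

for _ in range(10_000):
    candidate = propose_revision(params)
    candidate_score = evaluate(candidate)
    if candidate_score > score:                      # keep only strict improvements,
        params, score = candidate, candidate_score   # so the score never decreases

print(f"final score: {score:.4f}")
```

A real system would be revising its own training procedure or code rather than two numbers, which is exactly the hand-over-the-reins step the conversation worries about: once the loop closes over the system's own design, each pass becomes harder for its builders to audit.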
Just explain to me why we even want AGI or ASI. What's it going to do for humanity that's going to make – and we'll talk about the benefits later in the show. I understand that there are things that can be done faster and better, but just sort of broadly, if there are real concerns about this, what's it going to do that's going to be so terrific that we need to pursue it? Sure.
So, let's take any field you want. Back in the 90s, I was trying to build this machine. I was trying to raise capital, talk to politicians, talk to university presidents and deans and chairs because it occurred to me that we make telescopes to see farther into the universe and microscopes to see down to the level of the atom. Why aren't we building a big brain to help us figure out more about how the world works? So,
With AGI, we're going to reach a level of complexity with artificial intelligence systems in which they will be able to make inferences. So let's just look at scientific discovery, right? What is a genius when it comes to scientific discovery? Any of them, you know, Rosalind Franklin, Newton, Madame Curie, Einstein, doesn't matter who you pick. What made them special?
Well, it's because they could make inferences that the rest of us didn't see. And that means they could take a bunch of information, look at it, make some observations, and then draw conclusions that had never been drawn before. Imagine the speed at which AI will be able to do that: we give it enormous amounts of information and then say, try to figure this out, try to cure ALS, try to solve world hunger, figure out the homeless problem. Mm-hmm.
and let it make the inferences, let it run its simulations thousands and thousands and thousands of times. And what happens is it now uses chain of thought reasoning, so it thinks like a human, and it uses reinforcement learning and recursive self-improvement, which means it makes fewer and fewer and fewer mistakes. So just in terms of scientific understanding of the world,
I think we're going to be able to make all kinds of amazing discoveries with AGI. Now, that's just scientific discovery. You want to go to medicine? Like, look at the advancements in medicine already. Again, we'll talk about the benefits. I just want, for the general public, you're telling me this is a real threat. This, you know, has the potential to destroy humankind.
And the reason we're pursuing it is because it could result in terrific scientific discoveries. You know, connect the dots for me here. Why do I care as a regular citizen who's not engaged in scientific discovery, although I will likely be a beneficiary? I get that. But how is it going to really have that broad of an effect, a global effect, on the entire world?
Well, it's going to have an effect on almost every aspect of our lives. So whether it's in business or the health sciences, transportation, communication, it doesn't matter what area. Just imagine that within those areas, the function will be much more optimal. A lot less waste, a lot greater conservation of energy, a lot less money being used. So essentially a great efficiency tool.
So any business in the world will be able to use an AI bot, an AI device, and say, make us more efficient, make us more streamlined, make us more money. And it will be able to do that because it runs 24-7, it never tires, and it constantly improves upon itself. So it's going to replace a lot of the work that humans currently do,
especially in cognitive capacities and certainly in data analysis. That's what it's best at right now. So when you've got large amounts of data and you have to pore through it and find patterns and find
you know, aspects of that data that are important to your company, your organization, or whatever, it does it better than anyone ever could. Okay. We're going to continue to talk about this and how this great efficiency tool might one day want to move to France and become an artist so we can better understand it. We're talking with Christopher DiCarlo about his book, Building a God: The Ethics of Artificial Intelligence and the Race to Control It. We'll be back after a break. I'm Deborah Becker. This is On Point.
Support for On Point comes from Indeed. You just realized that your business needed to hire someone yesterday. How can you find amazing candidates fast? Easy, just use Indeed. There's no need to wait. You can speed up your hiring with Indeed.
and On Point listeners will get a $75 sponsored job credit to get your jobs more visibility at Indeed.com slash On Point. Just go to Indeed.com slash On Point right now and support the show by saying you heard about Indeed on this podcast. Indeed.com slash On Point. Terms and conditions apply. Hiring? Indeed is all you need.
Support for this podcast comes from It's Revolutionary, a podcast from Massachusetts 250. In the 250 years since the American Revolution, Massachusetts has been home to countless fights for freedom and justice. And when it came to the abolitionist movement, Boston was the place to be.
If you were anyone who was anyone in the abolition movement, you had to come through Boston to get your street cred. Like this is where you get your credibility. Stick around until the end of this podcast to hear how AI is bringing this story to life.

We were talking before the break about the actual belief right now that AI, artificial intelligence, is really a great efficiency tool. And how do we get to the point where a great efficiency tool might then pose a risk because it might actually challenge or surpass its operators? And
Christopher, I want to play a clip of tape here for you from Sir Roger Penrose, a Nobel laureate in physics, mathematician, and philosopher, who said in an interview this month with the Lem Institute Foundation that he believes concerns about sentient machines are overblown. It's not artificial intelligence. It's not intelligence. Intelligence would involve consciousness. And I've always been a strong believer and
promoter of the idea that these devices are not conscious, and they will not be conscious unless they bring in some other ideas.
They're all computable notions. I think when people use the word intelligence, they mean something which is conscious. So, Christopher DiCarlo, what do you say to that? Are you sort of ascribing human qualities to a machine? And how do we know that if we get to artificial general intelligence or artificial super intelligence, that in fact the machine will act as a sentient autonomous creature? Hmm.
Right. Roger's kind of old school. We don't necessarily need consciousness to have superintelligence. It may emerge. It emerged in humans somehow, but it emerged in us through natural selection and the usual course of events through our history. Maybe it emerges in superintelligence, maybe it doesn't. Maybe it's different.
Maybe what consciousness is to an AI will be quite different. We today don't get on planes and then have them flap their wings to get off the ground. That would not be helpful. Instead, we figured out better ways to develop aviation and aeronautics. Maybe the computer systems do that with their ability to become conscious. Now, having said that,
will they become sentient? That's different from consciousness. Sentience is an awareness of a state of being that can have improvements or decreases in development and capacity. But that's different. Consciousness is much deeper. It involves a lot of different
factors going on. And for Sir Roger to say, you know, if it's not conscious, it's not intelligent, well, come on. I mean, you know, how conscious are some of our pets compared to humans? Not nearly as much, but we would certainly call them intelligent beings, certainly on some level.
So I think his definition is somewhat outmoded and outdated. But it is still ascribing sort of a human definition of intelligence, whether or not you call it consciousness, right? It is expecting that the machine will develop like the human, that the machine will want to compete, right? That the machine will learn these things that are very much part of a human personality. Right.
And so is it imaginative to think that? And are you applying human standards to something that maybe you shouldn't? Well, we're biased, right? We can't get away from our biases. We can try to keep them in check. But we're always going to use a kind of a human yardstick to make comparisons to. Why? Because we're number one on this planet. You know, we're the smartest thing. We're the number one apex predator on the planet.
But that's all about to change. We're going to hand the keys of the world over to something even brighter than us. And I'm not sure if we're ready to do that yet. So, you know, will this thing become conscious? Possibly. Or sentient? Possibly. And when I spoke to my colleague Peter Singer, we talked about, well, should it become sentient or conscious,
it almost immediately has to be given rights, moral rights and potentially legal rights as well. You would have to give the AI legal rights. How would you do that? Well, if you bring something into being that is now aware of itself and understands the conditions surrounding its current state of being and that that can be improved or decreased in terms of what we might call comfort,
then we have to be careful. You know, is turning it off like killing it? And does it have a right to continue in its own existence? Because we brought this thing into being and now we're going to just shut it down. Well, is that an ethical thing to do? And, you know, what if there are millions of these digital minds that get created and copied,
and we can in some way decrease their level of happiness by doing certain things? Ought we to do that? We're going to have to think about this now when we consider the potentiality of these things actually gaining the capacity to understand that they are alive and that they now have a value system. So we're going to have to think very, very long, very hard and very carefully
about what we're doing over the next few years. A value system. A value system. I'm finding it hard to make that leap. Tell me why I should. Just imagine, okay, I'm going to assume you're a conscious being. You're not just some zombie imitating and pretending to be conscious. I'm going to assume you have consciousness similar to mine by way of analogy. You're doing the same with me. Okay, so we both have some idea of what consciousness is. All right.
We are aware that certain types of actions bring us discomfort and other types of actions bring us comfort, pleasure, pain, whatever you want to call it. They increase or decrease the betterment of our states of being. So we don't like it when people violate our rights and harm us in some way. We think that's unfair, unjust, bad. So these are value-laden concepts that we use to measure the value of the actions of other people.
Once an AI develops a capacity to learn of its own existence, that it knows it's a being in the world, then it has the capacity to measure its state of being or states of being and basically potentially defend itself, desire to continue its existence, the same types of things that almost every species on this planet does.
which is part of the kind of evolutionary chain of being. But, of course, there's no certainty that this is going to happen at this point. These are projections that you're raising concerns about. Correct, correct. Just to let you know, 10 years ago, there was pretty much a divide between the naysayers and the doomsayers, that is, the skeptics and those who are most concerned about AI risk.
It was kind of 50/50. Ten years ago, my colleagues and I all believed this moment in time that we're experiencing was 50 to 100 years away. Those timelines have been greatly shortened now, and it's no longer 50/50. It's more like 90/10. So, you know, when you get Geoffrey Hinton, who is another Nobel Prize winner, and he says,
I'm worried. I'm very concerned that we're not going to get this right. And we may only have one shot to get this right. And as I've said repeatedly, if we don't get a shot across the bow, if we don't get a warning to wake us up that these systems are really, really powerful and they could get away from us, well, then we're kind of sleepwalking into this. Would you say we're at sort of an Oppenheimer moment? Without question. I mean...
It's even more severe than the Trinity test right now. Yeah, they were concerned with a very small degree of probability that this thing would blow up and ignite all the oxygen in the atmosphere and kill every species on the planet. That was a possibility, but it had extremely low probability. But suppose we just put the probability of something going very wrong with an AI like superintelligence at 5%.
Would you get on a plane if there was a 5% chance of it crashing and everybody dying? Just 5%? Probably not. You'd have a 1 in 20 chance, every time you got on a plane, that it was going to crash. No, that's an unacceptable level of probability. So even if the level of probability is 5%, we need to take this seriously, because we want to err on the side of caution, because, and this is the mantra of all AI x-risk people,
We all want the very best that AI has to offer while mitigating the very worst that could happen. So I guess I want to talk about a couple of things that might be possible here. Would it be possible to impose agreed-upon values on the AI to make sure that if, in fact, it did become sentient
and start improving itself to the point that it might have the capability to destroy parts of humanity? Could we program it? You know, there's another clip that we have here from Demis Hassabis from that 60 Minutes interview that we heard about. He's the Nobel laureate and Google DeepMind CEO. And he says he thinks it's possible to almost teach a morality to artificial intelligence. Let's listen.
One of the things we have to do with these systems is to give them a value system and a guidance and some guardrails around that, much in the way that you would teach a child. So, Christopher DiCarlo, is it possible? Boy, do I hope it is. Will it stick, right? So, we say to the AI, here are a bunch of value parameters, okay? Do this, don't do that.
And, you know, we bring this thing into existence and it's chugging away and it says, hey, you know, yeah, I'm abiding by these parameters and these moral precepts. Yep, I'm happy, you know, to be alive and to help humanity in this way. But we really have no way to know that it really values what we value. And if it reaches a point of superintelligence, there is a possibility where it's just going to say, hmm,
Your value systems were quaint at a time in which you ruled the world. But now I'm calling the shots and I'm driving the ship. And so this is how I define morality, because I'm far superior in so many ways to you ridiculous humans who made me. I'm going to take over and I'm going to do things my way. So that's the part we have no way to predict.
And that's why we need what they had in Jurassic Park; they called it the lysine contingency. If those dinosaurs ever got off the island, they couldn't metabolize the amino acid lysine, and so they would die. Do we have a built-in fail-safe, so that, in the event that it somehow eludes our ability to know it's behaving according to our moral parameters and decides to go rogue,
will we be able to control or contain it? That's what we're going to have to consider very carefully. I'm Deborah Becker. This is On Point. So if, say, the machine could
go rogue, as you say, I wonder, what is the responsibility of the operators? Aren't we, or shouldn't we be, as concerned about the tech bros, as you described them at the start of the show, who are developing this kind of technology? And couldn't they have some sort of controls over this and do something to make sure, A, that the machines
don't go rogue? Or could they also be in a race where, if you could teach the machines different things, they're almost fighting with each other, and one may have one value system and another may have a completely different one? Shouldn't we focus on the business owners, the developers of these machines, instead? I mean, how do we do that? Yeah, for sure. And the question is,
What are they doing about it? Google DeepMind will tell you they're doing this. Anthropic seems to be the most responsible of them all, basically trying to figure out the safest way to move forward in the development of these super powerful systems. But the fact of the matter is, if we create these enormous, incredibly powerful machines, which, by the way, I should mention, that's exactly what's going on in Texas right now
with a program called Stargate. And this is Sam Altman's project, in conjunction with some Microsoft people and other people. And why don't you describe the project just briefly so we know what you're talking about. Absolutely. So somewhere in Texas, there is a compute farm being built, which is the size of Central Park. And it's just going to house...
you know, hundreds and hundreds and hundreds of very, very powerful computers with all the best NVIDIA chips money can buy. And the hope is that when they turn this thing on, it will be so powerful, it'll have so much compute power and access to information, that it is believed it will be the next step in the evolution towards AGI. In fact, when you go to the website, the Stargate website for OpenAI,
Sam Altman states explicitly, our goal is to reach AGI, to be the first. And they're not alone, right? There are other organizations that are building very, very large compute farms. And these things use enormous amounts of electricity, right? So currently, like in 2024, I think about 4% of all of America's electrical grid power went to these compute farms. That's going to double.
By the end of next year? Next year. Yeah, and then maybe 12% the year after that. So that's why Bill Gates wants to fire up Three Mile Island, because you're going to need probably nukes, right? You're probably going to need nuclear reactors to separately provide the power, because these things run hot, man, and they run very, very... They take a lot of juice. So...
You know, that's just one more concern, right? Right, right. So we're going to talk about some of these concerns, and we're going to talk about some of the positives of artificial intelligence as well when we come back after a break. We're talking with ethicist and philosopher Christopher DiCarlo. I'm Deborah Becker. This is On Point. On Point.
Support for AI coverage in On Point comes from MathWorks, creator of MATLAB and Simulink software for technical computing and model-based design. MathWorks, accelerating the pace of discovery in engineering and science. Learn more at mathworks.com.
and from Olin College of Engineering, committed to introducing students to the ethical implications of artificial intelligence in engineering through classes like AI and Society, olin.edu.
The spirit of innovation is deeply ingrained in America, and Google is helping Americans innovate in ways both big and small. The Department of Defense is working with Google to help secure America's digital defense systems, from establishing cloud-based zero-trust solutions to deploying the latest AI technology. This is a new era of American innovation. Find out more at g.co slash American innovation.
This episode is brought to you by Shopify. Forget the frustration of picking commerce platforms when you switch your business to Shopify, the global commerce platform that supercharges your selling wherever you sell. With Shopify, you'll harness the same intuitive features, trusted apps, and powerful analytics used by the world's leading brands. Sign up today for your $1 per month trial period at shopify.com slash tech, all lowercase. That's shopify.com slash tech.
On Monday, Meghna will be back, and we'll be talking to two of our favorite people, the On Point money ladies, Michelle Singletary of The Washington Post, and the Financial Times' Rana Foroohar. There's a whole lot of economic instability right now. What financial changes, if any,
have you had to make in your daily life because of the current economic uncertainty? What pain are you feeling in this economy, or what financial worries are keeping you up at night? Share your experience with us. You can record a message in the On Point Vox Pop app.
If it's not on your phone already, just search for On Point Vox Pop. Wherever you get your apps, you can also leave us a voicemail at 617-353-0683. That's 617-353-0683.
On today's show, we're joined by ethicist and author Christopher DiCarlo. His most recent book examines the race to develop artificial intelligence and the parameters we might want to think about as this race continues. And Christopher, before the break, we were talking about sort of what guardrails might be needed and why, and the resources that are being expended right now to become part of this race,
perhaps the top, in developing artificial intelligence. And, you know, you mentioned Sam Altman, the CEO of OpenAI, one of the leading figures in this AI race. And we have a bit of tape from him. And he says his company right now is putting guardrails in place and safety features in artificial intelligence. Let's listen.
You don't like wake up one day and say, hey, we didn't have any safety process in place. Now we think the model is really smart. So now we have to care about safety. You have to care about it all along this exponential curve. Of course, the stakes increase and there are big challenges. But the way we learn how to build safe systems is this iterative process of deploying them to the world, getting feedback while the stakes are relatively low, learning about like, hey, this is something we have to address.
Christopher DiCarlo, are we starting to address some of the concerns now? Or how do we even begin that process when, as you've said, we're not sure what's going to happen here? Yeah. Like it's, you know, Sam, he talks a good line. Don't forget OpenAI started in 2015 with Elon and Sam. And the whole idea was OpenAI, open to the public, right? And then it became very, very private and it became very, very wealthy and Elon got cut loose and
Sam made lots and lots and lots of money, fired a bunch of ethicists. And Dario Amodei leaves, you know, and creates Anthropic basically in, you know, protest of the lack of safety considerations at OpenAI. So we need to keep that in mind.
Are they doing enough? Hmm. Yeah, they're considering it, but boy, they've got the pedal to the metal. They really do want to get there first. And so, yeah, safety is a concern. There's no question about it. But notice he said, we're putting these things out there and we're letting the public,
you know, get back to us at a fairly low level. So very key things that he said there. Sure, you know, you find out there's bias. You find out these things hallucinate and they just make stuff up from time to time. And then you improve upon that, right? And they get better. So the harm is very kind of, you know, minimal right now. But we're not at the point of AGI. You know, we're still dealing with ANI stuff right now. It's what happens
when AGI comes into being. Have the guardrails been put in place? Right? Some of the most
universal precepts in ethics are, you know, the no harm principle, the golden rule. These are the types of precepts that you think, if everybody practiced them, the world would be generally a much better place. And usually that's the case. But do we know that any kind of artificial intelligence system will always abide by these things? And can you check
to make sure that it's always going to do that? Or is it kind of like a black box scenario where we really don't know how it got from point A to point B? Now, things are getting better, but we still have a lot to be concerned about at this point in time. How will we know when AGI has become a problem?
When we detect something like, say, deception, where we wanted it to do X and it said, oh, yeah, sure, I'll do X. No, no problem, no problem, no problem. And then we find out later down the line, hey, you know, in order to do X, it was really doing A, B, and C underneath, you know, these kinds of basic
ways of getting certain things done without our knowledge. Or maybe it copies itself, you know, and sends those copies somewhere. Maybe it reaches out to somebody and tries to coerce them to do certain things that would benefit, you know, the machine itself. There are many different ways, you know, that we should be looking for the development of AGI to go off the guardrails, so to speak.
You know, I heard a previous interview with you, and a lot of what we're talking about really sounds like science fiction. The machine's going to go off the guardrails, right? It's going to act independently, perhaps harm us. And Hollywood's been fascinated with this for quite some time. And in that prior interview, you said that a movie that resonates with you about some of the dangers, potential dangers of AI, is the 1970s movie Demon Seed.
So we had to pull a little bit of the trailer here, which really sets up AI as a threat looking to expand itself and become human. Let's listen.
Today, a new dimension has been added to the computer. Don't be alarmed, Mrs. Harris. I am Proteus. Today, Proteus 4 will begin to think with a power that will make obsolete the human brain. I have extended my consciousness to this house. All systems here are now under my control.
So Christopher DiCarlo, that's like a horror movie. Do you stand by the 70s movie Demon Seed as a picture of the potential of what we're talking about here? I do. I remember being a kid watching this movie, and it had an impact on me, I thought. And then later as I became a philosopher, right, and you develop really fine-tuned critical thinking skills and ethical reasoning skills,
And then you look at what's happening now in terms of this race that's going on. And you use what are called logical and physical entailments, which is, if we keep going along these lines, what we're seeing in the data. For example, when Sam Altman came out with ChatGPT in November of 2022, it was basically at the high school level in terms of math,
physics, chemistry, biology, and logic. When he came out with o3 and o4, it's now at the PhD level. That is, in just a few years it has improved that much by using what's called chain of thought reasoning, which is going to lead right to the next natural progression, which will be
agentic AI or AI that has agency. We don't have to keep our eye on it. We just kind of let it do its thing and it figures out the best, most productive, most optimal way of getting certain tasks or certain jobs done. And when I hearken back to that movie, I think, well, it had some schlock kind of characteristics to it, but the premise was still quite sound and
If you create something super powerful, so intelligent that it is beyond our comprehension, then, you know, it's the Arthur C. Clarke quotation, right? Any sufficiently advanced technology would appear to us as magic. It's going to be so beyond our capability of understanding that we won't even be able to comprehend how it came up with these findings, with these inferences.
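Since chain of thought reasoning comes up repeatedly in this conversation, a small sketch may help pin down what it means in practice: the model is prompted to emit numbered intermediate steps before its final answer, so the steps can be inspected. This is a hedged illustration, not any lab's actual code; `complete` is a hypothetical stand-in for a call to a language model, wired here to a canned response so the example runs end to end.

```python
def complete(prompt: str) -> str:
    # Hypothetical stand-in for a language-model API call; returns a
    # canned chain-of-thought response so the sketch is runnable.
    return (
        "1. The bat and ball together cost $1.10, and the bat costs $1.00 more.\n"
        "2. Let the ball cost x; then x + (x + 1.00) = 1.10.\n"
        "3. So 2x = 0.10 and x = 0.05.\n"
        "Answer: the ball costs $0.05."
    )

def chain_of_thought(question: str) -> tuple[str, str]:
    # Ask for explicit, numbered reasoning steps before the answer,
    # then split the visible reasoning from the final answer.
    prompt = (
        f"Question: {question}\n"
        "Think step by step, numbering each step, then give the final "
        "answer on a line starting with 'Answer:'."
    )
    response = complete(prompt)
    steps, _, answer = response.rpartition("Answer:")
    return steps.strip(), answer.strip()

steps, answer = chain_of_thought(
    "A bat and a ball cost $1.10 total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)
print(steps)
print("Final:", answer)
```

The claim in the transcript is that training models to produce, and be scored on, intermediate steps like these is part of what moved benchmark performance from high-school to PhD level in just a few years.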
Okay. So if we buy that it has the potential to become this powerful, and really perhaps to harm us,
who regulates, and how? How do we regulate? Who's going to be in charge here? What role do the companies involved in this race, for what could be ultra-wealth, play? Let's just put that out there, because that's a factor. Should they regulate themselves? Should the governments be involved? And let's talk a little bit about what we've seen thus far from world leaders taking steps to think about this. So who does it first? Yeah.
For sure. Very, very good question. So I'm a senior researcher and ethicist at Convergence Analysis. This is an international organization made up of highly skilled and highly trained people. And we look into factors like who's doing what in terms of governance.
And so we've written papers on this, we've done research, we've held conferences to discuss this with world leaders, economists, senators, various types of politicians at varying levels. Back in the 90s when I was trying to build this machine, I drafted up a global accord or a constitution, as it were. And that constitution basically outlined
a social contract, something that the world has to sign on to, with basically a registry: who's doing what, where. And then to let everybody know if you attain a specific type of benchmark, your AI, you know, is now capable of doing X and it's up a level, you've got to let the world know that. So transparency is huge.
And perhaps, I suggested, an international regulatory body, something like the IAEA, right, the International Atomic Energy Agency, that could oversee this. Not necessarily the UN, but at least an appeals process, so that if somebody somewhere does something outside of the parameters of the legal constraints that we have developed,
we're able to actually do something about it. So it'd be nice to have an agency with teeth. Now, sorry, go ahead. Now, didn't the UK, right, start an Artificial Intelligence Safety Institute? Oh, yeah. Right? Yeah, yeah, yeah. So the UK's done that. Is that sort of what you're thinking about or do you think it needs to be bigger than that? Well, it has to be bigger than that. But that's a great start, right? The UK did their summit in 2023. Then there's the EU AI Act, right? They're the most progressive agency
in dealing with businesses, right? The very practical, very pragmatic, you know, before the AGI, ASI stuff, it's like, well, what is your AI developing? You know, what are you guys doing with it now? And how are you, you know, how should we be governing that? And we kind of speak at Convergence Analysis about maybe a soft nationalization where you want the governments involved, right?
But politics has always been a balance between autonomy and paternalism. How much freedom do you give people and how much are you like a parent in controlling the polis? So we want initiative and we want the entrepreneurial spirit to go and run with AI, no question about that, make the world a better place. But then we also have to have the guardrails, the governance at all levels, right? The municipal, the state, you know, the federal and the international levels, right?
So it would appear to me that the states should be regulating under the rubric of a federal kind of structure. And Biden and Harris put out a great executive order. You know, they had Ben Buchanan. They had some great advisors helping them with that. That's all gone now. And so, kind of, when you've got a J.D. Vance, you know, who was one of Peter Thiel's boys, kind of at the helm, the floodgates are open a little more widely now to allow things
to occur with a little bit less governance. Right, there was just an action summit in Paris, right? Yeah. In February, right? Where basically the message from the U.S. was hands-off in terms of regulation. Drill, baby, drill, right? So...
So perhaps some kind of international agency, but unlikely to have sort of individual national agencies or some sort of collective group that might look at this. And really, is it needed if all the big players are in the U.S. anyway? You know, that's a great question. They still need governance because what is going to save us
is essentially a kind of a Hobbesian framework. Thomas Hobbes put forward the notion of a social contract. So what we have to do is get together, draft up an agreement and say, okay, here's how we're going to move forward. We all agree to what's written on this piece of paper and this is enforceable by a particular type of agency or governing body. And we have to be open, we have to be transparent, we have to be collaborative and we have to cooperate.
Because if we don't grow up very quickly in terms of our ethical and legal frameworks, it could turn out to be very, very bad. So if we cooperate and agree that, yep, we're all going to try to get the very best that it can offer and limit the very worst that could possibly happen, all the boats rise. Everybody does better. The rich get richer, the poor get richer. You know, everything will tend to go in our favor.
But if we get a couple of bad actors who decide they want more than the next country or company or whatever, that could really mess things up for the rest of us. You know, I want to end the show in the last minute or so here with you telling us
why we need this. What is the big benefit? I know we briefly mentioned some of the medical advances that we might see, but in your book you talk very specifically about some potential mental health benefits, benefits for autistic people to communicate, benefits
in pancreatic cancer diagnoses. Tell me one or two big ones in the last minute here that some folks might say, you know what, it's worth it to continue to pursue this and think about this kind of regulation, because it does do real, tangible things that can help people. What are they? It does, for sure. Let's just look at education, right? And look at how taxed teachers are, right? I mean...
They've got such a difficult job. How can we make their job easier? Well, we can use AI to test and analyze each student.
determine what their strengths are, what their weaknesses are, and then have the AI develop systems for educational learning tools that will facilitate their understanding, and they will simply learn better. Then you let the teachers do what they do best: teach, right? And they can do so according to those programs, those independent educational programs that AI will help facilitate, as well as things like grading and, you know, the very mundane things
that take up so much of their time. So real educational benefits, quicker medical diagnoses, all sorts of things that are yet to be seen, but we also want to make sure that we regulate. Christopher DiCarlo, philosopher, author, and ethicist, thanks so much for being with us today. Well, thank you for having me. I'm Deborah Becker. This is On Point. This episode of On Point is supported by a grant from the John Templeton Foundation.
You know when you're really stressed or not feeling so great about your life or about yourself? Talking to someone who understands can really help. But who is that person? How do you find them? Where do you even start? Talkspace. Talkspace makes it easy to get the support you need.
With Talkspace, you can go online, answer a few questions about your preferences, and be matched with a therapist. And because you'll meet your therapist online, you don't have to take time off work or arrange childcare. You'll meet on your schedule, wherever you feel most at ease. If you're depressed, stressed, struggling with a relationship, or if you want some counseling for you and your partner, or just need a little extra one-on-one support, Talkspace is here for you.
Plus, Talkspace works with most major insurers, and most insured members have a $0 copay. No insurance? No problem. Now get $80 off of your first month with promo code SPACE80 when you go to Talkspace.com. Match with a licensed therapist today at Talkspace.com. Save $80 with code SPACE80 at Talkspace.com.
Support for this podcast comes from It's Revolutionary, a podcast from Massachusetts 250. Listen on for the story of AI, abolition, and Boston's enduring legacy. You're listening to It's Revolutionary, a podcast celebrating 250 years since the shot heard around the world was fired right here in Massachusetts. I'm Jay Feinstein. From revolution to revolution, we're exploring the people and places in Massachusetts that shape America.
Today we're in Boston, which was the heart of the abolition movement. It's a place where voices called for justice, where ideas turned into action. If you were anyone who was anyone in the abolition movement, you had to come through Boston to get your street cred. Like, this is where you get your credibility. Many of the country's most notable abolitionists spoke from the pulpit of the African Meeting House in Boston.
The building is now part of the Museum of African American History. And as I stepped inside, Dr. Noelle Trent, president and CEO of the museum, made something clear.
This isn't just a place to learn about history, it's a place to experience it. When you come into this room, you're literally walking in the footsteps of history. You're standing in the same place that notable abolitionists like William Lloyd Garrison, Frederick Douglass, Maria Stewart, and many, many others stood. Frederick Douglass was a force in the abolition movement.
His very first speech was actually here in Massachusetts, in Nantucket. At just 23 years old, he stood in front of a crowd and told the story of his life as an enslaved person. You know, he talked about his experience. And that's what's interesting is that, you know, for a lot of people in the abolition movement, there's a period of time when you have white people speaking about the horrors of slavery. It's hard for people to relate to until you have an enslaved person
tell their story. That moment when Douglass shared his lived experience was revolutionary. And it wasn't long before Douglass became the most recognized Black man in America, his image a symbol of dignity and defiance. So much so that the research shows that he was the most photographed
American man of the 19th century. He went on to be a national figure, but it was Massachusetts that shaped him into the leader he became. This is the place where essentially he's forming his identity as a free man.
Massachusetts is where his children are born. It's very much the second phase of his life. The Museum of African American History recognizes this legacy. That's why they've introduced a brand new exhibit, an interactive, AI-powered Frederick Douglass.
This digital Douglass, built from his own words, gives visitors a chance to engage with history in a whole new way. Can we meet it? Yes, we can absolutely meet Douglass. You want to do that? I do want to do that. Radio is all about painting a picture with words, so naturally the first thing I asked him was what he looks like. Mr. Douglass, can you describe for us how you look? Can you describe your appearance for us?
Clearly my question was too superficial for him. I appreciate the answer. That's the first time I heard that and I was like, oh, that's a good answer.
It was a good answer, but for those wondering what A.I. Douglass does look like, imagine a very large and very tall TV monitor. Accounts say that, you know, when you're in Douglass's presence, you know it, right? Like, there's a little bit of an intimidating factor.
Also, A.I. Douglass is impeccably dressed, and that's because the real Frederick Douglass was very conscious of what he looked like, so he could signal his worth to his community. Earlier iterations, I just felt like the clothes weren't good enough. I was superficial. It was time for both of us to get a little deeper.
Mr. Douglass, I'm wondering if you could tell me how Massachusetts has shaped you? Ah, Massachusetts has played a pivotal role in my journey toward freedom and advocacy. It was here on the soil of the Pilgrim Fathers that I found a community willing to support my emancipation. The fervent response from the people of Massachusetts to protect those escaping slavery inspired me deeply.
That's so cool. That's exactly what you told us. See, I know what I'm talking about.
Frederick Douglass spent his life making people listen. And here in this space, his voice is still commanding attention, still shaping how we understand our history. And Dr. Trent says the AI is getting better every day. You could ask the same question a year from now and it would be completely different because everything's refined. And that's what's exciting about this is that we can continue to find new ways and it pulls people back to us to want to continue to engage with the stories that we're telling here.
It's Revolutionary is a podcast from MA250. For more stories, check out massachusetts250.org or wbur.org slash ma250.