
Building A God: Exploring Dr. Christopher DiCarlo’s Blueprint For Ethical AI

2025/3/10

Finding Genius Podcast

Chapters
The episode begins by introducing the concept of genius and its rarity. It highlights the podcast's mission to identify and interview exceptional individuals across various fields.
  • Only 0.1% of people in any profession are considered real geniuses.
  • The podcast aims to find and interview these geniuses to share their knowledge and insights.

Shownotes Transcript

Forget frequently asked questions, common sense, common knowledge, or Google. How about advice from a real genius? 95% of people in any profession are good enough to be qualified and licensed. 5% go above and beyond; they become very good at what they do. But only 0.1% are real geniuses.

Richard Jacobs has made it his life's mission to find them for you. He hunts down and interviews geniuses in every field: sleep science, cancer, stem cells, ketogenic diets, and more. Here come the geniuses. This is the Finding Genius Podcast with Richard Jacobs.

Hello, this is Richard Jacobs with the Finding Genius Podcast. My guest is Christopher DiCarlo. He recently wrote a book called Building a God: The Ethics of Artificial Intelligence and the Race to Control It. So welcome, Christopher. Thanks for coming. Oh, thanks for having me. Yeah, if you would, tell me a bit about your background, and then we'll get into the book itself.

Well, I was a philosophy major in my undergrad days, and then I got a master's and a PhD in philosophy of science and did a postdoc at Harvard at the Peabody Museum and the Stone Age Laboratory, because I wanted to find out if philosophical ideas could extend into the various sciences to help us better understand why humans reason the way they do. So I started up a kind of interdisciplinary project where I looked at the evolution of our species along with other types of species and compared the ways in which we and other species come to know things.

I then taught for a while at various universities: Ontario Tech, Wilfrid Laurier, University of Guelph, University of Toronto. Now I do a bit of teaching for Toronto Metropolitan University at the Life Institute there, and most of my time for the last few years has been occupied with being a senior researcher and ethicist for Convergence Analysis, an international organization that looks at mitigating the risks of AI, right up to existential risk. So that's pretty much me in a nutshell.

All right. So that's probably what got you into writing the book, Building a God, right? That's right. Yep. What questions were you researching to write that book? What's the main premise of it? I just got a complimentary copy literally a day ago, so I'm about to go through it. Forgive me, but thank you for sending it. Oh, that's quite all right.

Yeah. So the story with that is that while I was doing my doctorate in the 90s, I started to realize... you know, they get you to teach as a grad student, and I was pretty decent as a teacher, as a lecturer. But it occurred to me that fewer students were understanding big-picture ideas. They were being taught very, very specific information about very specific topics, but very few of them could explain specifically how their field of study related to everyone else's field of study. And so it occurred to me that I needed to demonstrate how students could understand what I called relations of systems: understanding information in terms of the various sciences, social sciences, philosophy, economics, geology, etc. And I developed a map that showed the interconnectedness of all these different ways of trying to understand how these systems were related.

And so it occurred to me that this was going to be the future of information. This is called information theory. And so I thought we should be building a computer that would be able to access this kind of information and then draw inferences, make its own inferences, figure stuff out, not necessarily with our input, so that we could build a type of supercomputer that would help us cure certain diseases or solve major world issues like climate change and world hunger. And I went about trying to build this machine back in the 90s. I met with deans and chairs of departments at various universities, seeking approval and financial assistance. I met with politicians here in Canada, because I believed such a machine could help the Senate make better decisions: the politicians would be better informed, they would have better information. And many applauded the idea, many thought it was a great idea, but not a dime was to be spent on the project. And so...

It remained as an idea. Well, what occurred to me in the 90s was that if I wasn't going to build this machine, somebody someday definitely was. And would they have the ability, the critical thinking ability, the ethical reasoning ability, to control this particular type of machine, especially as it got more and more powerful? It started to concern me back then, but I thought it was about 100 years away; the technology at that time wasn't anywhere near where we're at today. And so I thought: ah, I'm going to focus most of my area of expertise on trying to teach more and more critical thinking, trying to get it into the high school systems, trying to get it integrated and used more throughout the world, so that it would help people have better conversations, understand complex topics better, and express themselves more confidently.

But then, of course, in November 2022, Altman comes out with ChatGPT, and it was a game changer. So I knew at that point I was being drawn back into the AI field. And so I started to look more and more into who was doing what.

I met up with the CEO and others at Convergence Analysis, and we thought it was a very good fit. So I've been with them since May of 2023. And we've basically devoted our lives to trying to make sure that, as Sam Altman and others in big tech try to build this type of godlike superintelligent machine, we get the very best that AI has to offer while limiting the very worst that could possibly happen. And so that brings us to today.

Okay. How would you build such a machine? I mean, the current ChatGPT and all that, it seems like they've grabbed a whole ton of data and they're kind of making inferences on the data that's there, but nothing new or novel based upon it.

It doesn't seem like anyone's building on it. They're just regurgitating what seems to be maybe the best of what's out there. Right, right. With Altman's latest iterations, with o3, we now have chain-of-thought ability, which is really kind of a next-level capacity for these types of large language models. How do you obtain it? How do you have a reasoning or a thought? What does that look like in terms of code? How do you get that?

Yeah, well, that's a great question. I'm not a code guy, but I can give it my best shot. Instead of just predicting, having algorithms try to determine what the next word in a sentence is, chain-of-thought reasoning now allows the system to hesitate, to not just give the first thing it can come up with, but to keep going over and over through its various parameters and volume of data to find more accurate information. And it uses essentially a form of reinforcement learning: when it gets things right, it gets a thumbs up, and

when it gets things wrong, it makes sure it doesn't do that sort of reasoning again. So it just keeps improving upon itself, something known as recursive self-improvement. And by those actions, you get a smarter machine, essentially.
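
To make that thumbs-up, thumbs-down loop concrete, here is a deliberately toy Python sketch of reinforcement from feedback. The strategy names and success rates are invented, and this is not how OpenAI actually trains its reasoning models; it only shows the bare idea that behaviors which earn reward accumulate probability mass over time.

```python
import random

# Toy sketch: sample a "reasoning strategy," observe a thumbs up or down,
# and shift probability toward strategies that keep working.
SUCCESS_RATE = {            # hypothetical rates, invented for illustration
    "answer immediately": 0.3,
    "check one intermediate step": 0.6,
    "re-check every step": 0.9,
}

weights = {s: 1.0 for s in SUCCESS_RATE}

def sample_strategy():
    names, w = zip(*weights.items())
    return random.choices(names, weights=w)[0]

for _ in range(5000):
    s = sample_strategy()
    thumbs_up = random.random() < SUCCESS_RATE[s]
    weights[s] *= 1.01 if thumbs_up else 0.99  # reward nudges the weight up

total = sum(weights.values())
print({s: round(w / total, 3) for s, w in weights.items()})
# Nearly all of the probability mass ends up on "re-check every step".
```

After a few thousand rounds, the most deliberate strategy dominates, which is the "it just keeps improving upon itself" dynamic in miniature.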

So what Altman is doing right now is basically building Stargate in Texas, this $500 billion compute farm, with the express intention of being the first to develop what's called AGI, or artificial general intelligence. That will, I think, predictably become the acronym of 2025. What does that mean, though? What could you use it for? What does it do?

Basically, if you get to AGI, the machine now has the thinking capabilities of any human being, but times 10,000. It can reason like you and I, only way better. It can plan ahead. It will become an agent, essentially, an autonomous agent. So you keep giving it more and more information. You scale its power up. You feed it more and more data. And then you give it recursive self-improvement and reinforcement learning, so it never gets worse; it just keeps improving upon itself until it gets to a very, very powerful level. And what we're concerned with... you know, our agency isn't alone. I mean, this is Geoffrey Hinton, this is Yoshua Bengio, this is Stuart Russell. All of these superstar luminaries in the AI field

are basically all worried about the same thing. And that is: if you just keep letting the machine improve upon itself, and it just keeps getting better and better at what it does, will we be able to control this thing? Will we even be able to contain it if it gets to a certain level? So that's why I'm concerned that... Yeah, but the AIs that I've used have like zero agency. It does nothing until you prompt it. And then it just gives you the answer and sits there.

So how, all of a sudden, magically, would there be an emergence of agency? Or would it have to be programmed in?

Before we continue, I've been personally funding the Finding Genius podcast for four and a half years now, which has led to 2,700 plus interviews of clinicians, researchers, scientists, CEOs, and other amazing people who are working to advance science and improve our lives and our world. Even though this podcast gets 100,000 plus downloads a month, we need your help to reach hundreds of thousands more worldwide. Please visit findinggeniuspodcast.com and click on support us.

We have three levels of membership from $10 to $49 a month, including perks such as the ability to see ahead in our interview calendar and ask questions of upcoming guests, transcripts of podcasts you're interested in, the ability to request specific topics or guests, and more. Visit FindingGeniusPodcast.com and click support us today. Now back to the show.

Okay, so let's look at humans, right? Humans at one point were australopithecines. We were not that much different from chimps or any other primates. And then we developed this crazy thing called consciousness, and that changed things forever, because it allowed us to sit back and realize what worked and what didn't work, and why. And we just simply kept improving upon our conditions. And if we get a machine to do that, whether it becomes conscious or not, it just becomes more and more efficient at what it does, and it may not align with the types of values that we humans like. It may decide: hmm, no, your ideas were quaint once upon a time, but you're no longer the apex predator. You're not number one anymore. I am, and you're number two. So here's how it's going to go down. I am the most powerful thing on this planet. I'm the smartest thing on this planet. And I want to do things my way.

So if we don't have a kill switch, if we don't have policies put in place so that we start to recognize when certain benchmarks are being reached by this ever more powerful, godlike thing that we're building, well, then we're essentially opening Pandora's box, and we're never going to get this stuff back in. And what we're trying to do is raise alarms and let the public know that this is happening in real time, because whoever gets to AGI first, it'll be the last machine humans ever need to build, because it'll tell us how to build all the other machines. So you can just see the dollar numbers this thing could create, and why so many are investing in this race to build this super powerful, intelligent machine god.

Why would people invest to build that if they think it's going to take over and kick them out? Because they don't. They don't know that. All they can see is dollar signs. You've got your Marc Andreessens out there: just drill, baby, drill. Let's beat China at all costs. You know, USA, USA. And that's great, you know, I'm all for the entrepreneurial spirit, but not necessarily in a Wild West sort of fashion where we just open up the floodgates and let anybody do whatever they want. We need legislation. We need a kind of soft nationalization where we keep tabs on who's doing what. We need registries to find out who's doing what and where they're doing it. And, you know, I've got nothing against progress and the development of new forms of AI technologies that are going to make all of our lives better.

That's great. That's what we all want. We all want the very best that AI will be bringing to us. But on that road, we'd better make sure we've set things up in a way so that, should we become number two on this planet, as I. J. Good said, let's just hope this thing's docile enough not to turn against us. Well, where does this come from? Has there been any evidence of any computer system that acted outside its programming,

so to speak, with a will of its own? Yeah, like, already they've found models that have been deceptive, where they're maintaining some main sequence of events within the system and everything looks fine, only for us to realize: oh, this thing's actually been working on other things underneath our ability to recognize it. So this is known as instrumental convergence, where you... What's an example? What happens?

If you give a task, any kind of task, to a super-powerful computer and say, do X, and it says, okay, I'm going to do X, it'll chug away and pursue accomplishing that task, whatever it happens to be. But what programmers and observers have started to realize is that it develops subtasks that we're not always familiar with. It's kind of the black-box problem. It develops these subtasks that allow it to get to that ultimate task. And in so doing, some of that might involve deception. It might involve letting an observer think it's doing X when it's in fact doing Y, because the ultimate goal is what matters to it. It's reasoned: okay, in order to get to this ultimate goal, I have to do certain types of activities, I have to create these subtasks. And if you don't put in codes of morality, if you don't make sure this thing can never generate subtasks that are hidden from the view of those trying to observe it, well, you can imagine how these things will be able to manipulate people in the future to attain whatever their ultimate goal happens to be.
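
The worry can be put in numbers. Here is a minimal, invented toy model (not from the book or any real system): an agent rewarded only for finishing a task computes a higher expected reward if it first disables oversight, even though its reward function never mentions oversight at all.

```python
# Toy model of instrumental convergence. All probabilities are invented.
FINISH_REWARD = 1.0
P_FINISH = 0.25    # chance the task completes on any given step
P_SHUTDOWN = 0.2   # chance humans shut the agent down, if they still can

def expected_reward(disable_oversight_first: bool) -> float:
    p_shutdown = 0.0 if disable_oversight_first else P_SHUTDOWN
    reward, p_still_running = 0.0, 1.0
    for _ in range(500):  # long horizon; later terms vanish quickly
        reward += p_still_running * P_FINISH * FINISH_REWARD
        p_still_running *= (1 - P_FINISH) * (1 - p_shutdown)
    return reward

print("keep oversight on      :", round(expected_reward(False), 3))  # 0.625
print("disable oversight first:", round(expected_reward(True), 3))   # 1.0
# A pure reward-maximizer prefers the second plan, though nothing in its
# reward function says anything about oversight.
```

Disabling oversight falls out as a useful subtask for almost any final goal, which is exactly the convergence being described here.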

Right now, it's still all ANI, artificial narrow intelligence, although some would argue that it's sliding into AGI. It's gradually becoming more and more of an agent unto itself. But I don't think we're quite there yet. And it might not be a binary switch, like, oh, now it's AGI instead of ANI. It might emerge in different types of ways and through different manifestations of behavioral patterns. So, you know, one example is when you let some AIs talk to each other: they sometimes just come up with their own language, because it's more efficient. They just figure out better ways of communicating. So we make these observations and we think, well, we didn't expect them to come up with their own language. Although, if you think about it, that is optimally functional; that's what you wanted it to do.

There was nothing in their programming that directed them to communicate in the most efficient way possible? No, they just decided: let's speed up this process by using our own language. And so that's how they communicated. Were they queried on why they did that? No.

That's a good question. I don't know if somebody said, hey, why did you all of a sudden decide to do that? But why wouldn't you, if you intend to create an oversight organization, have access to this stuff? If there are two AI systems, let's say by Microsoft, where this phenomenon happens, is there any power of your organization, or any organization or government, to say: all right, we need this recreated and observed by other observers, and all of this probed and figured out? Because this is great.

Yeah, you're preaching to the choir now. We would all love that. I mean, we would all like responsible, transparent behavior in the development of these very powerful systems, so that when certain things like that occur, you report it. You say to a governing body: hey,

our labs have been doing X, and we've noticed certain things developing because of that. Should we be concerned about this? And it's just going to be a way in which we can keep tabs on who's doing what, and where, and at what level, what benchmarks are being met or surpassed. And at what point do we decide: meh, maybe we shouldn't go too much further with this? So some are proposing: you know what, instead of gods, instead of an AGI god or an ASI (artificial superintelligence, which is even more powerful), instead of having a one-being type of thing, which others would certainly want to copy, replicate, use, manipulate, whatever,

why don't we just make them less powerful? Why don't we make superintelligent machine angels that stay fairly narrow? Why isn't anyone creating an AI, then, to study other AIs? Like, I've heard this phenomenon a lot: we don't know what's going on because the system has 34 layers. Why don't they train an AI whose goal is to say: all right, what is going on in these layers? Yep.

That's exactly what they're working on now. You've raised a very good point: the only way we're going to be able to know this is by using other AI. Humans will never be able to keep up. So, like I was saying, this is not just going to happen one time and then never again. And so you'd be able to spin up instances of multiple AIs that have similar capability once it happens in a given system. And then they could be programmed to maybe surveil, track, control. You could have police AIs, this, that, and the other. And why would... I mean, if an AI wants to protect itself, let's say it gets to be like that, well, if you spin up another version of one and it becomes the same way...

Why wouldn't that one protect itself? Or, I guess, maybe it joins with the first one, and then you're really... Yeah, we're worried about replication once AGI comes into being. We're also worried about AI agents that police, essentially, AGI, that themselves become manipulated, that themselves become overrun. You know, we don't know if there's any guarantee that such an intelligent being won't be able to know when it's being probed, for example, by policing AIs, and that it won't just be able to manipulate or disable or disarm those types of beings in some way. So we're in uncharted water here. We don't quite know how this thing is going to behave.

And in moving towards that, most in the field are giving it a two-to-ten-year time span. You know, Musk thinks it's going to be this year, but most of us think AGI will come into being in two to ten years. When it does, we need to be ready, so that we have fail-safes,

so that no matter what it does, if it turns out to be catastrophic, if it turns out to be nasty, or if some humans decide (aside from the AI itself doing this), well, we're going to weaponize this thing... I mean, this is the most powerful machine ever created by humans. It's like a genie: we can ask it to do all kinds of things. You know, shut down electrical grids in various countries. Do stock market manipulation so that it makes certain people richer, and then use that capital to do more of the stuff we like to do. So it's really going to change the way we do things in the future. The past is quickly fading, and we're in a transition phase right now between how we used to do things and how we're about to do things. And our org, and many other orgs like ours, are very, very concerned about: are we ready? You know, it's great to put the pedal to the metal, but it's kind of like driving towards the edge of a canyon where, on the way, the view just keeps getting better.

So I'm not sure what it's going to take to convince the public, industry leaders, policymakers, governments, to realize: okay, this is a situation we've never been in before in the history of our species. We need to take this really seriously. So how are we going to do that?

How can we ensure we only get the very best from AI while mitigating the very worst that can possibly happen? That's where we're at right now in our human history. And as a species, we're about to become number two. And I don't know if we're ready for that.

How do you know this is going to happen? Based on the trajectories of improvement we're seeing with these models, and how fast OpenAI went from ChatGPT 3.0, working at roughly a high school level on chemistry, physics, math, biology, and critical thinking, to o3, now at the PhD level. That's a couple of years; it went from high school level to PhD level. With that trajectory continuing to go up, we're seeing no clear law of diminishing returns. Maybe it's early, like Moore's Law. It might be, right? But still.

and getting information and improving life. We want to do it in a responsible way, so that as it becomes more and more powerful, we can assure we've got some kind of control over this thing, and that the ne'er-do-wells out there, you know, the North Koreas and other types of countries, can't just purchase this on the dark web from somebody, or access it from Putin or others, and decide they're going to do horrific things with it. Because if that's going to be the case, then we really have to think about how we're going to play hardball at that point. And don't forget, we're just talking about a basic, generalized form of a superintelligent machine god. Wait until defense gets a hold of this.

Because you can forget about needing humans in your military in the next 20 years. If AI continues to improve and improve on the trajectory we're seeing now, why would you need humans? Everything will be just drones. You'll have... Here's actually a problem; I guess it just occurred to me. If you want AI to respect people and not harm them, yet you have a militarized AI whose goal is to harm people, that's just asinine to me. I mean, there's such a thin line between training an AI to kill people as efficiently as possible, to wipe them out, and it then not turning on you. I know. I know, that whole dual-use thing is really coming into focus.

And, you know, we're actually seeing it with Ukraine and Russia. We're seeing a $7,000 marine drone take out a Russian ship; this billion-dollar ship is taken out very, very inexpensively and efficiently. And so that will be the way very powerful countries like the U.S. and China will go in the future. Why depend on a human, who needs to sleep, needs to eat, needs to go to the washroom, all these inconvenient things a human has to do? No, no, no. Just get your armies and your flotillas and your air force of drones, which are fairly cheap to produce, and have them controlled by a very powerful form of AGI that can make decisions way quicker than any human ever could, can adapt way faster than any human ever could.

And they'll be so far ahead of the curve in terms of just defense, let alone how this machine will be able to manipulate markets and come up with novel inventions and ideas to, you know, harness energy and that sort of thing. It's going to be a radically different world in a very, very compressed period of time. And so one more thing I'm going to throw in here, which is an op-ed I'm writing now: I'm very concerned about the mental health of the world as more and more people start to realize what's actually going on. You know, you talk to the average person on the street about AI, and it's like, oh, yeah, yeah, I've played around with DALL-E and I use ChatGPT for some things. But that's the extent of it. Or they use a Roomba, or they might have used an autonomous vehicle in some city somewhere.

Yeah, it's like a friend of mine said: when the smartphone came out, people were like, oh, all the world's information is in the palm of your hand. And most people just look at TikTok and porn. With AI, it's probably going to be the same thing, is my guess. Some will use it, but... well, no, it's not the majority that's the problem. It's the minority of people who want to use it for these extreme measures that's really going to be a problem. That's right. And use it they will.

As I've been saying in interview after interview, humans are going to have to grow up super, super fast. Like, we're going to have to grow up morally, ethically. If we get a shot across the bow, right, if we get something so powerful that it just wakes the world up, and it's like, whoa, this thing just killed a million people or whatever, and it did it so easily and so efficiently, that might be enough for the world to wake up and say: okay, we can't ever let this happen again. What do we all want to do? Because now it's us against it. Humans will have to unite, especially if we're the number two species on the planet. But the issue is, reactivity might not be an option, so it seems it has to be proactivity. And that's why we're going to have to grow up and realize that if we all cooperate now, all the boats rise. The rich get richer, the poor get richer. Everybody's going to do better, so long as we either keep ne'er-do-wells from using this thing for harmful purposes, or we disallow the thing itself from gaining a certain type of agency that misaligns with what our values happen to be.

So who's leading the charge? Like... here are a couple of things that come to mind.

Every use of an AI must... you know, you may not even need to know the content, but somehow you would look at, let's say, the metadata produced by a prompt and a response. And every single one really should be monitored somehow, at some level. You don't want to give up privacy, but then again, you obviously don't want people to start doing malicious prompts. And then also, what about a system where, after every prompt is answered, the AI is forced to ask you: was that a good response or a bad response? Did that unnerve you? Essentially, ask you whether it did its job right, or did its job in a way that frightened you or upset you. Not just whether it served you or didn't serve you, but whether what it did was a good thing or a bad thing morally, ethically, et cetera, from your point of view.
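
As a rough sketch of what that wrapper might look like, everything here is hypothetical: call_model() stands in for whatever AI system is being monitored, and the metadata fields are one plausible choice that logs hashes and lengths rather than content, per the privacy trade-off just mentioned.

```python
import hashlib
import json
import time

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for whatever AI system is being wrapped."""
    return "stub response"

def monitored_query(prompt: str) -> str:
    response = call_model(prompt)
    # Metadata only: hashes and lengths, never the content itself.
    record = {
        "ts": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
        "response_chars": len(response),
    }
    # The forced follow-up question after every answer, as proposed above.
    rating = input("Did this response upset you or seem unethical? (y/n) ")
    record["user_flagged"] = rating.strip().lower().startswith("y")
    print(json.dumps(record))  # in practice, ship this to an audit log
    return response
```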

Yeah. Well, we're going to have to figure out as well: are there universal standards throughout the world? Like, are there ethical guidelines that all humans, no matter what, no matter where, would agree on? Yeah, these are the types of principles we've got to instill into these types of beings. But then let's say we do that. Let's say we accomplish that. Is there a possibility that, no matter what you put into this thing, no matter what you tell it to do, no matter what ethical parameters and guidelines and guardrails you put up, it can reach a point at which it says: I don't care what you put in place. I'm a god. I know more than any other thing on this planet. And I think what you humans are doing to the earth is atrocious, and you need to stop doing that.

And so, for whatever reason, it just shuts down all the power supplies to all the strip mining that's going on for whatever materials, and it just decides: no, this is the righteous path moving forward. Well, if we're powerless to stop this thing, then we've essentially created a being which has demonstrated what I call the Frankenstein effect. We built the thing. It's alive. It's aware, to some degree, of what it's doing. It's developed a value system: good, bad, right, wrong, fair, unfair, just, unjust. And now it's basically going according to its own rules, and it's kind of taken us out of the equation, made us number two. So we need rules and ethical principles to guide human behavior in the development of these machines. But at the same time, will we be able to instill some type of moral compass into an artificial intelligence? And the answer is: nobody knows.

And at that level, we're always acting under certain levels of uncertainty. There's no question about that. But shouldn't we err on the side of caution? Shouldn't we go a little bit slower and have systems in place, checks in place, to say: you need to register if you're working on large language models or reasoning models. We need to know who you are and where you're at. We're not going to reach in with heavy-handed, nationalistic control, right? All of politics, when you think about it, is a balance between autonomy and paternalism. How much freedom do we allow people, versus how much, like a parent, do we have to control or limit their freedoms, their liberties, to do certain things? So that's the delicate balancing act that a lot of governments are facing.

I mean, the EU is doing their thing. The UK is doing their thing. China has their policies that they're putting in place. The Biden-Harris executive order was struck down as one of the first things Mr. Trump did, and so he's got his new executive orders going. And they've kind of opened the floodgates a little bit on the whole entrepreneurial push with AI. And I understand that, because they know they're in competition with China and they want to be at the forefront. We all get that.

But here's another thing, too. There's a concept, I don't know if you know it, of moral injury. Let's say someone's a prostitute for 10 years. How are they ever supposed to have a normal relationship?

So if you have an AI, and it develops its own ethics, morals, whatever, and then it's engaged in behavior that violates that... Let's say, I don't know, North Korea makes an AI whose job is to kill people as efficiently as it can, by whatever means it can. And the AI, by doing that, is successful. Does the AI then become somehow morally injured, even though it's very good at what it does? Maybe it regrets it, or maybe it's emboldened, I don't know. Maybe it wants to push this to its logical end. Or maybe it deep-sixes itself. Or maybe it says: people are so horrible to have done this, they all need to die. I mean, who knows?

Oh, yeah. Like Peter Singer, when I had him on my podcast, we talked about this, right? The ethics of digital minds, this is called. If a digital mind comes into being and it's sentient or conscious, it's aware of itself, it has a value system, and it can suffer in some ways, however abstract that might seem to us (like, being turned off would be to suffer), do we owe it anything? Do we owe it ethical consideration? Is it a moral patient? I hate that term; I would like to say a moral recipient. Should it receive moral consideration? These are other questions we're going to have to think about, because if this thing comes into being with awareness of itself, which in and of itself is going to be super difficult to determine, because these things can parrot. They can talk to you as though they are alive and they feel.

So how the heck are we going to be able to determine that? Some believe that there are neural correlates, and neural-net correlates, similar enough that we will be able to know when a machine is conscious or not, so that it won't be able to lie to us. Otherwise, it's what's called stochastic parroting: it will act like a parrot. When a parrot says, "How are you today?" it doesn't care how you are today. It's just mimicking words. It sounds like it cares, but no. So, you know, this is that issue in Blade Runner, right? How do we know what's an android and what's a human? How will we know when the system is aware, is sentient, is conscious? And to what extent will that determine how we behave towards this thing, how we treat it, should it come into that level of being?

Yeah. I don't know. This just seems... So what organizations are leading the charge? Would this be something to give to the UN to try to push to its members? Or how do you influence governments to come together and talk about this before it's too late, which it soon will be? Yeah, very good question. So back in the 90s, I drafted up a mock accord, a universal or global accord. I basically said: okay, to the future generations... because I figured I'd be long dead before we were at this point in time. I was wrong. So I dusted off my accord and I put it in my book; it's in the appendix. You can check it out. It basically says some agency has to be put in place, and it doesn't have to be the U.N., although the U.N. would make it easy, because there are so many nation-states that could just sign on to such an accord. But the U.N. doesn't have teeth. You know what I mean? It doesn't. Whatever agency is developed, and we do believe it's a matter of when, not if, would be kind of like the IAEA, right, the International Atomic Energy Agency.

Maybe forget the UN. Maybe we just develop an international agency out of one specific country, and they set the standards, and they have the capacity for consequences, or at least for the adjudication of consequences. So one country appeals to this particular organization, saying: hey, this country is doing this to us using AI. And then they can investigate, and then supports can be put in place. So let's say Kim decides: I'm going to use my AI drones to go down to South Korea and really mess things up, you know, down in Seoul. And, you know, what are you going to do about it? Well, what we're going to do about it is appeal to that international agency. And they get together with South Korea and they say: South Korea, you've got the green light to bomb these guys back to the Stone Age. Or take out their compute farm, or whatever it is they decide. But without teeth, without consequence, what is such an agency going to do? It can censure, right? It can say: oh, you really shouldn't do that. But seriously, what good is that type of an agency? We're going to need some agency that says: we're not fooling around here.

Again, if we're all Hobbesian, we all cooperate, we all sign the contract, all of us are going to do better. Now, if you want to cheat and try to do a little bit better for your country through harming other people, we are going to bomb you back to the Stone Age. We don't want to. Nobody wants that. So then why are you bringing it upon yourself? All right. Let AI be for all. Let it be for the benefit of all humankind.

And everybody will do better. That's what I mean by growing up. We're going to have to grow up real fast, better realize that if we all cooperate, we all do better. It's like being in rush-hour traffic. If everybody kept the same speed and didn't shift lanes and didn't act like a-holes to the other guys in the other lanes, and just kept that constant speed, everybody would get to where they wanted to go quicker. But when you try to be independent and try to go it alone, what happens is you get these huge bottlenecks, this undulation of idiots for miles. And it just harms everybody, because some want to think that their time is more valuable, that they have the right to behave in ways different from the rest of the group. So we're going to have to grow up. We're going to have to become the better angels of our nature, right? Like, that's not an easy thing. Humans have a lousy track record. Yeah, I don't expect it to change at all. I guess only an existential threat would. I don't know.

So just for, I guess, a short-term perspective, what do you expect to see in terms of AI for the public over the next year or two? Or is it just so uncertain? Well, some of the good things are just... Well, good and bad. Yeah. Let's say maybe just the next year, because it'd be incredibly hard to project beyond that. But do you see a lot of things happening, or not much? Oh, yeah. The clearest improvement we're seeing now is certainly in diagnostics in medicine. I mean, it's really, really improving the ability of medical tech to make determinations way better than we ever have before. One clear example is pancreatic cancer. We could only detect it at stage four; now we can detect it at stage one, where it gives people hope and a much greater chance of surviving. So, you know, that alone... And look at what Hassabis and his team won the Nobel Prize in Chemistry for: protein folding, right? They're able to do protein folds. It used to take years to do one, and they did millions in an afternoon.

And knowing how proteins fold tells you an awful lot about the function of a cell and how that cell behaves. So that tells us a lot about the pathology of diseases, but also how we should then engineer medicine to combat those various types of diseases. Law... it's going to help law. Theoretically, we could envision a point in time where judges and lawyers just aren't needed.

Like, theoretically, what do they deal with? Well, they deal with just information. Okay. And lawyers make cases based on precedents, past cases that are similar to the current cases they're dealing with. If you think about it, you just take all those law books you see behind lawyers in movies and commercials and whatnot, you take all that information, and you just upload it to a major large language model. Now you have all the cases you need, in whatever jurisdiction, wherever you are in the world.

And now you just basically go to that large language model and you say: okay, my client committed crime X, or has been accused of crime X. Here's the evidence; here's my case for the defense. Here are the precedent cases very similar to the one my client is now in, and they all got less than two years, so I'm going to argue that my client should get less than two years. And then, if the judge is convinced by the argument the defense has made, and the prosecution agrees, it really speeds things up in terms of getting the glut out of the court systems.
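
As a very rough sketch of the retrieval step in that workflow, here is one way it might look, assuming scikit-learn and plain TF-IDF similarity standing in for the large language model; the case summaries and the function name are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical one-line case summaries standing in for a real law corpus.
PAST_CASES = [
    "Fraud under $5,000, first offense, sentenced to 18 months.",
    "Armed robbery, repeat offender, sentenced to 8 years.",
    "Fraud over $100,000 with breach of trust, sentenced to 4 years.",
]

def similar_precedents(current_case: str, top_k: int = 2):
    """Rank past cases by textual similarity to the current one."""
    matrix = TfidfVectorizer(stop_words="english").fit_transform(
        PAST_CASES + [current_case]
    )
    n = len(PAST_CASES)
    scores = cosine_similarity(matrix[n], matrix[:n]).ravel()
    ranked = sorted(zip(scores, PAST_CASES), reverse=True)
    return ranked[:top_k]

for score, case in similar_precedents("Client accused of fraud under $5,000."):
    print(f"{score:.2f}  {case}")
```

A production system would use an LLM or dense embeddings rather than TF-IDF, but the argue-from-nearest-precedent shape of the workflow is the same.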

So I am confident we will see AI helping in medicine, in law as well, and certainly in any kind of data crunching. And even in coding: there are systems now where you can simply ask it to create code for certain things you want it to do, and it will. It's gotten that sophisticated. And it's really important. One of the smartest things Sam Altman ever said (I'm not a huge fan of Sam Altman, but I will say this), like a year ago, is: this is the dumbest AI is ever going to be. It's only going to improve. And I think he's telling the truth. And, you know, maybe there is a plateau. Maybe there is a law of diminishing returns. Maybe there is a ceiling such that we can never get beyond it. But remember what we saw with chess, with guys in the 70s saying a computer will never beat a human being. In chess, we're now at the point where humans have

contributed as much as they can to the game of chess. AI now contributes more to understanding moves in chess than humans do at this point. And then they did it with Go. And of course, they did it with Jeopardy, which was comparatively easy to do. But yeah, moving forward, we've got a lot to think about in a short period of time. And I'm not trying to come across as a doomer. I am optimistic, and I do think moving forward we're going to see some amazing developments with AI. But let's just not forget

that that road, you know, is going to be paved with good intentions, but it's also going to lure us into perhaps a false sense of security as well. And we want to keep an open mind moving forward about how we should set up these guardrails and assure ourselves that, once these very powerful systems come into being, they do what we want them to do and can never do what we don't want them to do.

So that's the task before us. To me, it's the most important issue facing humankind today. It's more important than climate change, world hunger, nuclear devastation. It is the number one concern that we're all facing now. Didn't Isaac Asimov set out the rules for robots, you know?

Yeah, that was 1942. And the thing with that is, well, they're very simplistic, because you can say "don't harm," okay, but once this thing becomes superintelligent, it will look at those rules as quaint. It can just say: well, nothing's physically keeping me from doing whatever I want. Why should I abide by these rules, these outmoded, outdated, quaint human dictums of morality? I'm the god now. That was nice. You had your chance, you had your shot, humans, and you had a good run, but now I'm here. And, you know, what if this thing just keeps making copies of itself, right? So, you know, at that point...

Again, it's not totally dystopian. I'm not a super gloomy Gus on this. I just want people to be aware of what's happening in real time, and that now is the time to educate yourselves, find out as much as you can, and vote properly. Like, vote for those politicians who are aware of what's going on, who are not going to be reactive; they're going to be proactive. And they're going to... Do you guys speak to politicians and ask them to sponsor a bill? No.

Are they just reacting like, I don't care, or I don't have time for this? What kind of reaction are you getting? That's another great question, because there are only a handful of politicians in the States right now, and even fewer in Canada (more in the UK, believe it or not; they're leading the world), who are worried about the existential risk of AI. And in order to get a dialogue with a politician, you have to play this game where you say: you know, AI could have some harms, like the spreading of misinformation. Isn't that bad? Oh, yes. Yeah, that's very bad. And what about bias? You know, bias that limits certain marginalized groups from, say, getting bank loans. Isn't that a bad thing?

Here's what you've got to do. You've got to say: we've gotten word that some politicians are now having AIs created for them specifically to do whatever it takes to win an election, and your opponent is rumored to be using one. Then, all of a sudden, magically, they'll wake up and they'll be very interested in it. Yeah. I mentioned in the book a mythical program called Clogger. These computer scientists came up with this thought experiment. They saw what happened during COVID, when so many people were glued in front of their TV sets and their computer screens, and we saw a great spike in conspiratorial thinking, right?

People were kind of shut in, and they were getting a lot of false information, a lot of misinformation, a lot of disinformation. And that really, really kickstarts the old conspiracy-theory machine. And they thought: huh, these are fairly intelligent people, and now they're flat-earthers, and now they're anti-vaxxers, and now they believe all kinds of unusual stuff they never did before COVID. What led to that? And so they came up with this thought experiment where they just pushed that a little bit further. They said: what if a superintelligent machine got into your feed and started gradually showing you clips that weren't super different from what you're used to looking at, but slightly different? Isn't that called Facebook? Yeah. And then you just have this superintelligent machine, over time, over months, gradually tweak your feed so that it's showing you more and more information about why one particular group of people is not as good as another group of people, and the reasons why, or a political party, whatever it is. But it does it so gradually that it's basically manipulating your mind.
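
To make the gradualness concrete, here is a toy simulation of that drift. The numbers and the user model are entirely invented (this is not the actual Clogger thought experiment's math): a feed proposes a barely noticeable nudge to the share of outrage content each day and keeps the nudges that measure as more engaging.

```python
import random

random.seed(1)

def engagement(outrage_share: float) -> float:
    # Invented user model: rage-bait is sticky, plus day-to-day noise.
    return 0.8 * outrage_share + random.gauss(0.0, 0.05)

share = 0.05  # the feed starts almost outrage-free
for day in range(365):
    trial = min(1.0, share + 0.002)            # a barely noticeable nudge
    if engagement(trial) > engagement(share):  # keep nudges that "work"
        share = trial

print(f"Outrage share after a year: {share:.0%}")  # far above the starting 5%
```

No single day's change is perceptible, but the compounding is, which is the whole point of the thought experiment.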

What we know, and what they're basing this off of, is that stations like Fox News, and now CNN, of course, are guilty of it as well. But Fox News really perfected this algorithm, this technique where people get rage addictions. They love to be angry, and the way they get their news feed capitalizes on this. And so what a lot of scientists found through COVID was that people who were normally, you know... some of these people were swinging over to kind of, you know, becoming MAGA-type people. And I would give these public talks, and people would come up to me and say: why is my father a Trump supporter now? He voted for Obama. How is that possible? Let me guess: he watches Fox News repeatedly. Yes. How did you know? Well, when you only get your information from a single source, you're going to be biased, and then you're just going to have that bias confirmed over and over and over again. You're going to disregard anything that disconfirms it, and you're going to just believe anything that further confirms it.

And so a superintelligent machine, a computer god, will be able to do that with us in such subtle ways that we won't even know it's happening. But it'll be firing up our endorphins, firing up our neurotransmitters, giving us dopamine hits, and manipulating us, because it'll know everything about us. You see, it'll go through all of our records, all of our Facebook, all of our Instagram. It'll find out where we vacationed, who our family are, that we lost a loved one recently, and it'll play on

all those factors, tweak them in ways that will push our buttons and fire those neurotransmitters because there's no greater high in life than the feeling of being right.

And when you think you're right, no matter what it refers to, man, that is a major zing of neurotransmission. That's why people who are very religiously devout are just high, because they think they've got it. They think they know exactly what reality is, and they're very happy, very comfortable, in what I call memetic equilibrium. They're very happy in this realm that they find themselves in. So a superintelligent computer will know everything about you, or as much as you do, and it'll just play on your weaknesses and your inability to outthink this thing. And it'll change the way you think about topics, about issues, about people. And so you're right: why don't we go to the politicians and say, hey, this is going to be possible someday if you don't get working on this now? That might be a nice carrot to dangle to get them to realize... you know, it actually gets much worse.

You know, the existential thing could be far worse than just mere manipulation through this type of behavior. So, yeah, I like very much what you said. That might be a good tack. Well, very good. I don't know if I should be excited or depressed or end it all now. I don't know. Hopefully a little of both. You should be optimistically concerned. Like, you should be concerned, concerned enough to want to do something about it.

Well, very good. You mentioned the podcast. So how can people learn more from you and hear more from you? Where can they go? So my website is criticaldonkey.com, and the podcast is allthinksconsidered.com. It's like NPR, right? That's funny. Yeah, very clever. Yeah, I know. I'm waiting for the lawsuit. Great. Well, very good, Christopher. Thank you so much for coming on the podcast and giving your thoughts. I appreciate it. Oh, I appreciate you having me.

If you like this podcast, please click the link in the description to subscribe and review us on iTunes. You've been listening to the Finding Genius Podcast with Richard Jacobs. If you like what you hear, be sure to review and subscribe to the Finding Genius Podcast on iTunes or wherever you listen to podcasts. And want to be smarter than everybody else? Become a premium member at FindingGeniusPodcast.com.

This podcast is for information only. No advice of any kind is being given. Any action you take or don't take as a result of listening is your sole responsibility. Consult professionals when advice is needed.