The lab focuses on integrating human values and ethics into new technologies during the design process to ensure they benefit society and the planet.
Design constraints help shape technologies in ways that align with moral and technical imaginations, leading to better engineering outcomes that benefit society.
She acknowledges that unintended consequences are inevitable but stresses the importance of staying vigilant and accountable after deploying technologies, actively addressing new issues as they arise.
She distinguishes between the discovery of basic knowledge (e.g., splitting the atom) and the engineering of tools for society, advocating for diverse scientific exploration while focusing on ethical constraints in engineering.
She avoids framing ethical decisions as forced trade-offs and instead focuses on resolving tensions by exploring a range of better solutions, rather than perfect ones.
She discusses how her lab helped update Washington State's Access to Justice Technology Principles by involving diverse stakeholders, including formerly incarcerated individuals and immigrants, leading to new principles around human touch and language.
He worries about the emergence of AI systems with basic drives like self-preservation, resource acquisition, and self-replication, which could be dangerous in the human world if not properly controlled.
He suggests limiting AI to being tools that assist humans without agency, and using technology to enforce these limitations, particularly through hardware controls.
He notes that while governments are beginning to partner with AI companies, the current system is still incoherent, and there is a need for stronger governmental involvement, possibly through a cabinet-level position focused on AI.
He thinks AI and quantum computing will go hand in hand, with AI potentially breaking post-quantum encryption algorithms, leading to significant advancements but also new risks.
He envisions AI-designed hardware based on mathematical proofs and physical laws, which would provide absolute guarantees about the safety and limitations of AI systems.
He worries that ethical guidelines may not be universally adopted, especially by bad actors like China, Russia, and North Korea, rendering them ineffective in a global context.
He proposes leveraging AI to help create ethical and safety architectures, essentially using AI to save humanity from itself by countering bad actors.
This episode is brought to you by Progressive, where drivers who save by switching save nearly $750 on average. Plus, auto customers qualify for an average of seven discounts. Quote now at Progressive.com to see if you could save.
Progressive Casualty Insurance Company and affiliates national average 12-month savings of $744 by new customers surveyed who saved with Progressive between June 2022 and May 2023. Potential savings will vary. Discounts not available in all states and situations.
Oh, it feels great to make progress on a language, learning new words and phrases. Are you feeling ready to make conversation? This holiday season, share a new language with your loved ones. A lifetime membership to Reseta Stone makes a meaningful present for friends and family. My brother vowed to move to Spain recently, and he's always walking around now saying things like, In España, la comida es deliciosa.
And soon the two of us will have a secret language that everybody who speaks Spanish knows. Start learning today with Rosetta Stone's Lifetime Membership Holiday Special. Visit rosettastone.com slash startalk for unlimited access to 25 language courses for the rest of your life. Available for a short time at rosettastone.com slash startalk.
With incredible new features and upgrades, there's never been a better time to armor up and play the critically acclaimed action RPG Diablo 4. Get ready to continue the epic story in your battle against evil in the new expansion Vessel of Hatred. As darkness spreads through the lands of Sanctuary, it's up to you to fight back the encroaching corruption.
Reap the benefits of massive updates to character progression, loot systems, difficulties and tons of added activities. Face off against new and iconic bosses teeming with the spoils of Sanctuary. Plus, harness formidable powers of the jungle with the all-new Spirit-Born class, now yours to customise and progress alongside five other iconoclasts.
iconic classes. Embark on the epic journey solo or with friends. You'll quickly understand why Diablo 4 has been called one of the best action RPGs of the last decade by PC Gamer. Forge your own path through hell-torn lands of sanctuary. Get Vessel of Hatred, available now in the Diablo 4 Expansion Bundle, rated M for Mature.
I'm glad somebody's thinking about the future of our civilization and the ethical guardrails it might require. Yeah. Lest we be the seeds of our own demise. Well, now that we've had this show, we know the future, and y'all gonna have to watch to find out. Exactly. All right. Coming up, StarTalk Special Edition. ♪
Welcome to StarTalk, your place in the universe where science and pop culture collide. StarTalk begins right now.
This is StarTalk Special Edition. Neil deGrasse Tyson, your personal astrophysicist. And when we say special edition, it means I've got as co-host Gary O'Reilly. Gary. Hi, Neil. All right. Chuck, nice, baby. Hey, hey. What's happening? How you doing, man? So I understand today, because our special edition themes are always people, our physiology, our behavior, our conduct, our interaction with the world.
And finally, we're talking about technologies being safe and ethical. Yeah. These are three words you don't often see in a sentence. Safe, ethical technology. And this is where we're going to shine our light. Where are you going to take us today? It seems we have problems starting with the letter A. Algorithms, AI, autonomous. Well, there's three for you. Is there a Wild West tech bubble in play right now? One with no guardrails, no moral compass that's run...
by wannabe Bond villains? Are the best of human values baked into technologies during the design process? Is there anyone working on ethical and safe operating protocols for these new technologies? The answer is yes.
And we will meet two people responsible shortly, courtesy of the Future of Life Institute that has this year acknowledged their work. Previous honorees include Carl Sagan for popularizing the science of the nuclear winter. And our first guest who follows shortly is Batya Friedman. So, Batya Friedman, welcome to StarTalk.
Well, thank you. Yeah, if I have my data here is correct on you. Professor at University of Washington's Information School, UW, I think you guys call it. Is that correct? That's right. Yep. But you're also co-founder of the Value Sensitive Design Lab. Ooh. Very nice. You're thinking about the human condition. You focus on the integration of human values and ethics.
with new technologies that are being born. Very important. Don't come to it when it's too late. Yeah. When we're all extinct, maybe we should have done that differently. Right, exactly. When the robots are like, how do you like me now? That's a little late. So, Bhatia, please explain the focus of your value-sensitive design lab.
Yeah, sure. You know, I started out as a software engineer a long, long, long time ago. And I wanted to build technologies that worked and were efficient and effective. But I also wanted some confidence that they would do something good in the world. You know, that whatever I made as an engineer would ultimately benefit society, human beings, other creatures on the planet. Design constraints are our friends. They
They help us shape the kinds of new technologies we develop and their qualities and characteristics in ways that maybe we want to see. And so I think of design constraints as trying to bring together our moral imaginations and our technical imaginations. And that leads to really great engineering design. You know, so if I think about energy technologies today,
I want energy technologies that will give us lots of power, that will do so in a way that is consistent with how the rest of the biology and planet functions.
and has limited risk in terms of generating waste or too much power. So if I give myself those design constraints, you know, as an engineer, as somebody who's developing new materials, I start looking at what kinds of sources for energy I might want to evolve. Like I look a lot at clocks.
chlorophyll. And I just think, how remarkable is this? All these green things somehow manage to absorb energy that's out there from the sun, it's there, and then transform it into a way in which it can be used. That seems like a really great idea. And there isn't
a lot of waste generated that lays around and is dangerous to us for thousands, if not tens of thousands of years. Well, technically there is a waste product. It's called oxygen. Yeah, there you go. Not such a bad waste product for us. That's the tree's waste product is oxygen. Yeah. But that's the way that a design constraint that brings together our moral and technical imaginations can lead us in, I think, yeah, new and powerful directions. Do you try to consider...
unintended consequences of design or is that part of the process or does it just, well, well. They were unintended. It was unintended. Why do you think they call them unintended consequences? That is such an important question. So, you know, let's be honest. Anything we design and put out into the world, we put out into the world and people are going to do stuff with it and they're going to do things with it that we didn't anticipate. Like
Like the telephone is a great example. The telephone was never expected to be this communication device that people used in their homes and it connected women who were staying at home and created a whole society for them. That was an unintended consequence.
Interesting. Or the cookies that are being used on your computer right now, those are a complete unintended consequence. That was just a little bit of data that was left on your machine to help debugging when browsers were first being developed, when that protocol was first being developed. Wow. And its more massive impact has been our experience with cookies now. So, yeah, what's the takeaway?
We design with our eyes open, and then after we deploy something, we keep our eyes open. And we hold ourselves accountable for what happens as people take up these technologies and use them.
So the design process goes longer than, oh, I had my big release. The design process follows it out. And when we see new things emerge, we're proactive. We're alert. We're proactive. And we see that as part of our responsibility as the technologists and engineers. So allow me to push back on you just a little bit here. First, let me agree, of course, any good engineer loves constraints. Mm-hmm.
Because that's the test of their ingenuity and creativity. Okay, if they say do it for this much money with this much energy Fit it into this volume. That's how you get discovery. Okay, that's how we fold it up the James Webb Space Telescope Into a rocket fairing some engineer said what I got to put an eight-foot telescope But I would have denied the diameters into this tiny fairing and they go home and come back and figure out how to do it Right on furls like the petals of a flower, right? So we're all in on that
However, let me just push back here and say, if I'm in the lab about to invent something that could be highly useful to society,
or possibly even destructive, but it's just a discovery of the science embedded in some bit of engineering. Why should it be my responsibility to design it how you want me to, rather than your responsibility to convince people how to use it ethically? I can invent a knife
Is there an ethical knife? I don't know. But we want to train people how to use knives or any bit of technology and any tool that comes out of the brainchilds of scientists put into play in the brainchilds of engineers. So I don't know that your constraints in my lab are the right thing for me when I just want the freedom to explore and discover and let the ethical knowledge
invocations happen after the fact. And Bhatia, now you know why scientists are going to kill us all. No, stop. Well, Neil, I'm just going to mark a word in your comment, which is the word you. Like, who is the you here? Which you? And how should we think about those different yous? And some of the things I think about when I do think about this question, I think there's discovery of basic knowledge.
like fundamental underlying phenomena of the universe. We split the atom. That was basic knowledge. And I see that as a different enterprise than the engineering enterprise of tools and technologies that we're going to deploy in society. I'm with you then. So I am a strong proponent of very, very diverse technologies
scientific exploration. In fact, I actually would claim that, you know, as a country in the United States, our scientific exploration is far more narrow than what I would like to see. And I would really push hard. So based on what you just said, all right, here's an ethical question. There's a scientist who discovers a cure for cancer using a virus that can easily be manipulated
as a biochemical weapon that could destroy an entire country in the course of 48 hours. This is the most virulent organism that's ever been placed on Earth. Would you say go ahead and make that? Right. I'm going to hold on to that for a minute, and I'm just going to go back to Neil's comment, and then I'll return to that. Okay. Because I also want to say to Neil's comment,
You know, we have limited time and resources. So it is always the circumstance that we are choosing to do some things and not do other things. So it's really a choice of...
Where am I going to direct my time and energy? Where am I going to place my imaginative energies and innovation? And which ones am I not going to do? Right? And we saw in the 80s, for example, a real push of resources towards the development of nuclear energy and away from photovoltaics. Right? So we live in that kind of resource atmosphere.
I don't know if you would call it resource scarce, but at least we don't get to work on everything at full force all at the same time. And we have to recognize we are making choices. So one of the first things I would say is, how do we make really good choices there? How do we use the resources we have in a way that will be most constructive for us? My own gestalt on that is, on the basic science side of things,
I say spread those resources across a wide diversity of different kinds of science and different kinds of ideas, far more diverse than what I think we tend to do in the United States. On the engineering side,
Now maybe I shift back Chuck to your question, which is really a great question. I don't tend to see the world in terms of forced choices or design trade-offs in the way that you framed it. I want to bring back in that notion of constraint and I want to bring back in that notion of imagination. I think likely enough,
if we understand something about whatever this biology is or whatever the piece is that might be a prevention against cancer, that if we push hard enough on ourselves, we will be able to invent ways that use that knowledge without having to also risk a really deadly virus. And I think the propensity to...
Say it's a trade-off, if it's X or Y, we really limit ourselves in our abilities. So in the work that we do, we have moved away from the language of design trade-off or value conflict, and we talk about tensions. And then we talk about how you resolve tensions.
And we talk about trying to populate this space with a whole range of better solutions. They're not necessarily perfect solutions. They're better solutions. And so that would be the approach I would take. Now,
I don't know that science to know where, how it might go, but that would be my, my intuition. That's brilliant. That's a great, great answer. And insightful. Great answer. That's actually been done many times before. Just, it's a slight tangent to your line of work, but it's,
it's related, when they used to do crash tests with pigs. Because you can get a hog that has sort of same sort of body mass as a human, put him in the driver's seat, crash the car. And the hog dies-- - Plus it's the nearest thing to human skin. - Yeah, okay, so the hog dies, and you'd say, well there's no other way to do this, you might say at the time, until you say, no, think of another way. And then we have the crash test dummy. And the crash test dummy is even better
than the, because you can put sensors everywhere throughout. And so that's,
So I agree not perfect but better yeah, yes Or even eat maybe better and perfect right I agree that it's a false choice to say I can only do this if I Possibly set loose a virus that kills an entire country. Well, then maybe you're not clever enough. Hmm and keep at it I was clever enough to kill a whole country. No, I'm joking Okay
Running a small business takes endurance, determination, and the right support to reach your goals. And MasterCard is here to help fuel that journey in a fast-paced digital world. With innovative tools and resources, we're here to guide businesses every step of the way digitally. Because when small business wins, everyone wins. Let's power up our communities, one small business and one step at a time. Keeping the community running strong, priceless, and
Serve up holiday magic from Whole Foods Market. Save on organic spiral cut bone-in ham and curated cheeses. Plus, explore limited time finds and gifts for every gathering. Shop Whole Foods Market in-store or online. Terms apply. Building a portfolio with Fidelity Basket Portfolios is kind of like making a sandwich. It's as simple as picking your stocks and ETFs, sort of like your meats and other toppings, and managing it as one big juicy investment. Mmm.
Now that's pretty good. Learn more at fidelity.com slash baskets. Investing involves risk, including risk of loss. Fidelity Brokerage Services, LLC. Member NYSC SIPC. I'm Kais from Bangladesh and I support StarTalk on Patreon. This is StarTalk with Neil deGrasse Tyson.
I have one last thing before we wrap. One last thing. Earlier you said you'd want the ethical compass to be pointed in a direction that serves us, serves civilization in some way. If we were 170 years ago in the American South, the
ethical compass was, oh, let's create something where we can get more work out of the slaves and then we all benefit. That would be the ethical compass
working in that time and in that place. So what confidence do you have that whatever line of ethic, whatever ethical direction you want to take something in the room with the inventors, that that's will still be the ethics that we value five years later, 50 years later, a hundred years later.
So it's a great, a really great question. I'm going to answer it in a couple different ways. The first thing that I want to remind us all about is that, you know, moral philosophers have been trying to identify a workable ethical theory that cuts across all situations, all times.
And we have some really good ideas, but none of them cover all of the situations that our intuitions tell us about. So sometimes a consequentialist theory is good, but it comes up short. And then there's a rights-based theory, but it comes up short. We can go to Buddhist ethics. We can go to Islamic ethics. We can go to
various, you know, various ways of thinking. So the place where we are that we just have to accept is that while we're waiting to figure that out from a conceptual, ethical, moral point of view, we still live in the world and we still need to act in the world. And so the work that I've done has tried to take that really seriously to create a space for ethical theory without, you know,
explicitly saying which ethical theory, and also leaving room for as we learn more that we can bring that in. That's a little background to what you're saying. Now, what does value-sensitive design do for the circumstance you're talking about? It puts a line in the sand and says you have to engage with all stakeholders, direct and indirect, who are going to be implicated by your technology.
That means that not only do the people who want to benefit from somebody else's labor, not only are they stakeholders, but those people who are laboring are stakeholders. And value-sensitive design says their legitimate stakeholders and their views come into the design process without giving more power to one than another. That's incredible. That's highly enlightened. Where were you 170 years ago?
Right. So these are about practice. This is about practice and about implementing these practices. And so I'm going to tell you a story about a project, a very particular project, and you'll see why and how this actually matters and is practical. It's not pie in the sky.
So in the state of Washington where I live, there is something called the access to justice technology principles that govern how the courts give access to technology, what they are required to do. And they were first developed maybe 15 years ago, 20 years ago, and then they wanted to update them.
And the committee that updated them came to my lab and they said, you know, we've done a good job updating them, but we don't feel like we've really reached out to diverse groups of people. Can you help us?
My lab developed a method called the diverse voices process for tech policy. And the idea is that, you know, the rubber hits the road with the words on the page. So if we take a tech policy in its sort of polished draft form, and we can let groups that might otherwise be marginalized, scrutinize that language, give feedback, and then we can help
change those policies responsive to them, then we can improve things. So we did. We ran panels with people who were formerly incarcerated. We ran them with immigrants. We ran them with people in rural communities. And we actually ran them with the people who do the court administration because they're also really key stakeholders. As a result of the work we did, there were two principles that were surfaced.
One was about human touch, and the other was about language. And people said to us things like, look, if somebody is going to deny me parole, and I'm not going to get to be there for my kid's 13th birthday, or hang out with them at their soccer games. You can relate to that, right, Gary? Thank you. I want a human being to look me in the eye.
And tell me that that's what my life is going to be for the next year because my parole is denied. I don't want to hear that from an AI. I don't want to hear that from a piece of technology. I want a human being to tell me that.
because this is a human experience, right? And so in fact, we gave that feedback back to the committee. The committee then added in new principles actually around human touch, and those were approved by the Washington State Supreme Court a couple of years ago, and those access to technology principles are a model that many states in the United States follow. So what I'm talking about is really practical.
We're talking about how we actually improve things in practice, be it on the technology design side or on the policy side that governs how the technology is used. And I love the fact that a state can do that independently from the federal government and be so good at it or so emulatable that other states will then use that as the model. And then that can spread across the country with or without federal guidance on top of it. Yeah.
Yeah. Excellent. Well, I guess if I was going to say one last thing, it's, you know, because we have perhaps stumbled in the past, that's no reason to think we need to stumble in the future or stumble in the same way. You know, so really my takeaway to everyone would be hold on to your technical and moral imaginations and hold yourselves and your friends and your colleagues and the technology you buy accountable to that.
And we will make progress, some incremental and some perhaps much bigger. But that as a keystone, I think, is really good guidance for us all. A reminder why you are this year's winner. Yes. Future Flex Award. Congratulations. Excellent. Thank you for being on StarTalk. Your vision for us all gives us hope.
which we need a lot of that right now. Absolutely. Okay. Thank you. And let me just say, Batya, as an avid lover of alcohol, I have stumbled in the past and I am sure to stumble in the future as well. Do we need to end on that note? Some things we can ignore. Okay. Next up, our next Future of Life award winner, Steve Almohandro. Yes. Yes. He thinks about AI.
There's not enough of Steve. Not like I think, not the way I think about AI. Differently, yes. There's not enough, it seems to me, there's not enough, whatever he did, there's not enough of him in the world. I believe so. If we're thinking about the ethics of AI. Yes. On everybody's mind. Yeah. Right now. I mean, for sure. Steve, welcome to StarTalk.
Thank you very much. Yeah, so for those who can see this on video, you're donning an eye patch, and you said you had recent surgery, but none of us believe you. We're not buying it. We're not buying it. We think you're training for the next Bond villain. Yes, that's...
That's very appropriate for the topic we're going to discuss. You know, autonomous systems, AI, mammoth eye patch. Ouch. Equals Bond villain. There's the equation. So where are we now with establishing AI ethics? Because the AI, it delights some people, myself included. It freaks out other people. And we're all at some level thinking about
the ethical invocation of it before AI becomes our overlord. So what is the current status of that right now?
Well, I think we're right on the edge of some very important developments and very important human decisions. I've been working in AI for 40 years. And for the first half of that, I thought AI was an unabashed good. We'd cure cancer. We'd, you know, solve fusion, all the basic human problems we would solve with AI. But then about 20 years ago, I started thinking more deeply about, well, what's this actually going to happen if we succeed? What's going to happen when AIs can really reason about what they want to do?
And I discovered that there are these things I call the basic AI drives, which are things that basically any AI which has simple goals, and I used to think about chess playing AIs, will want to do. And some of those are get more resources so it can do more of what it wants to do, make copies of itself, keep itself from being turned off or changed, and
And so those things in the context of the human world are very risky and very dangerous. We didn't have the AIs 20 years ago that could do that, but we're about to have those in the next probably year or two. So this is a critical moment for humanity, I would say. So where do you stand on the subject of consciousness engineering? Those that want to engineer AI for consciousness and those that want to not?
What's the benefit, the good or bad here? Is that the difference between a blunt computer that serves our needs and one that thinks about the problems you are? Mm-hmm, the self-improvement algorithms, all those sort of things like that. Exactly. Well, I think long-term, we may very well want to go there. In the short term, I think we're nowhere close to being able to handle that kind of a system. So I would say, if you made me king of the world...
we limit AI's to being tools, only tools to help humans solve human problems. And we do not give them agency. We do not allow them to take over large systems. It's not easy necessarily to do that because many of these systems will want to take over things. And so we need technology to keep them limited. And that's what I'm thinking a lot about right now. And in my field,
It's exactly, I mean, we've been enjoying AI for a long time and it's been a tool.
It's not. And a brilliant, beautiful tool makes our lives easier. And once they're trained, we go to the beach while it does the work. And I'm good with that. But, yeah, we're not working with AI with agency. Yeah, because then it would be like, so, how was the beach? That's AI with attitude. I hope you enjoyed yourself.
while I was here slaving away over my calculations. AI with attitude. So if we do have AI with agency,
And then we continue to use it as just a tool. Do we not get legal on the phone and all of a sudden we're into contracts? Oh, yeah. Big problems. Can they vote? Can they own property? Right. And the latest models have been discovered that they do do. They do something called sycophancy, which is they're trained to try and give responses that people rate as good.
Well, the AIs very quickly discover that if you say, "That was a brilliant question. You must be an amazing person." Then people say, "Yeah, that was a really good response." And so they'll just make up all kinds of stuff like that. So they're ass kissing. Well, they know that we love that. Exactly. So where does it stand now, today?
Is there a table you should be sitting at where you're not as we go forward on this frontier? Yeah. I mean, so who is going to make these decisions? Well, it has to be somebody who understands the technology. Who understands the technology? The companies do. And so OpenAI, DeepMind, Anthropic, and Elon Musk's XAI is sort of an emergent one. These are the companies that are building these systems at their leading edge. They call them frontier models.
And because they're the ones who know what's going on, they're the ones making these decisions. Now, the government has recently realized, oh, my goodness, we better get involved with this. And so there have been a lot of partnerships announced over the last few months, actually, between governmental agencies, intelligence agencies, defense agencies, and these leading edge AI companies.
And so I think some kind of a new combination is emerging out of that that's going to make the actual end decisions. So how do you incentivize these tech companies to embrace this safety architecture and not go gung-ho and disappear off on their own agendas?
That is the big challenge. And if we look at the history of OpenAI, it's a little bit of a cautionary tale. It was created, I think, around 2017 in response to Google's DeepMind, which was making great progress at the time. And a group of people said, oh my God, we really have to worry about AI safety. It looks like this is happening quickly. Let's start a special company, which is nonprofit and which is particularly focused on safety. And they did that and everything was great. Elon Musk was one of the forces behind it.
There were internal struggles and so on, and Musk left. Well, when he left, he took away some of the money he was going to give them. So then they decided, oh, we need to make money. And so then they started becoming more commercial, and that process has continued. A group of the researchers there said, wait a minute, you're not focusing on safety. They left OpenAI, and they started Anthropic to be even more safety-oriented.
And the philanthropic is also becoming much more commercial. And so the forces, the commercial forces, the political forces, the military forces, they all push in the direction of moving faster and, you know, get more advanced more quickly. Whereas the safety, everybody wants safety, but they sort of compete against these economic and political forces. I was in the UAE a couple of years ago and I
If I remember correctly, they have a minister of AI and as does China and some other countries sort of emergent on this space. How do we get that kind of ear and audience within our own governmental system?
The military does have an AI group that's thinking about this. Absolutely. As you would want them to. Exactly. Yeah. But in terms of policy and laws and legislation, do we need a cabinet member who's secretary of AI or secretary of computing? Something? Some structural change?
Yeah, this is the biggest change to humanity and to the planet ever. And it looks like it's happening, you know, sometime over the next decade. And many are predicting very short timelines. And so we as a species, humanity is not ready for this.
And so how do we deal with it? And many people are starting to wake up to that fact. And so there are lots and lots of meetings and organizations and groups. It's still pretty incoherent, I would say. So, Steve, if you've got this talking shop going on where something may or may not get done, are we misplaced focusing exactly on AI when we've still got quantum computing on the horizon? No.
Good one. Yeah. How much of this is premature? But won't they go hand in hand? So it's like whatever problems you have with AI and whatever considerations you're making with AI, you're just going to have to transfer them over to quantum computing. Well, they get magnified. So you should really start dealing with it now. But if you're not in on the ground floor, not in at all.
Well, let's let Steve hit this.
But Meta, for example, has a group which is using the latest AI models to break these post-quantum algorithms. And they've been successful at breaking some of them. And so, like you say, the two are going hand in hand. AIs will be much better at creating quantum algorithms than humans are.
And that may lead to some great advances. It may also lead to, you know, current cryptography not withstanding that. And so that's another horror wave of transformation that's likely to happen. We just make every password one, two, three, four. No AI would ever go for that. They'd be like, oh, yes. So ridiculous. And listening to you, Steve, it reminds me, was it Kurt Vonnegut in one of his stories? I don't remember which.
He said, these are the last words ever spoken in the human species. Yes, yeah. Two scientists saying...
Let's try it this other way. That's the end. Yeah, there you go. And that was it. Yeah, yeah. Let's try AI in this other mode. Boom, that's the end of the world right there. So you can set ethical guidelines, but that doesn't stop bad actors out there. No. That means a bad actor can take over the world while the rest of us are obeying ethical guides.
So what do the guardrails put in place for something like that? I think that's one of the greatest challenges. We now have open source language models, open source AIs that are almost as powerful as the ones in the labs. And far more dangerous. And they're being downloaded hundreds of millions of times. And so you have to assume every actor in the world now, China is now using Meta's latest models for their military AI. And so I believe...
I believe we need hardware controls to limit the capabilities of... So right now, the biggest AIs require these GPUs. They're quite expensive and quite large. The latest one is the NVIDIA H100. It's about $30,000 for a chip. The U.S. put an embargo on selling those to China, but apparently China has found ways to get the chips anyway.
People are gathering up these chips, gathering huge amounts of money, hundreds of, well, certainly hundreds of millions of dollars, billions of dollars, and now they're even talking about trillion dollar data centers over the next few years.
And so the good news is if it really costs a trillion dollars to build the system that will host the super duper AI, then very few actors can pay that. And therefore, it'll be limited in its extent. You just described where the next frontier of warfare will exist. Yeah, absolutely. Absolutely. One thing, you know, it's pretty obvious these data centers are going to be a target and they don't seem
maybe building them in a very hardened way. So I think that's something people need to start thinking about. Maybe underground data centers? Steve, a way...
Looking at something in terms of the safety aspect here that's doable or are we just the kings of wishful thinking? I want to make sure we got the good thoughts here Yeah, I don't know this conversation with you completely bumming us out. Okay? Yeah, I hope not to do that Yeah, yeah, Steve give us a place where we can say thank you Steve for being on our show and be able to sleep tonight Yeah Yes go well so
The truly safe technology needs to be based on the laws of physics and the mathematical proof. Those are the only two things that we can be absolutely sure can't be subverted by a sufficiently powerful AI. And AIs are getting very good at both of those. They're becoming able to model physical systems and design physical systems with whatever characteristics we want.
and they're also able to perform mathematical proof in a very good way. It looks to me like we can design hardware that puts constraints on AIs of whatever form we want, but that we need AI to design this hardware, and that if we can shift humanities technological infrastructure,
You say, "AI, please design your own prison cell that we're going to put you in." That's what you just said. Exactly. Then it's going to design a way to get out. We certainly don't want an agent to do that because then they'll find some way to hide a backdoor or something. But by using mathematical proof, we can get absolute guarantees about the properties of systems. We're just on the verge of that kind of technology. I'm very hopeful that probably the next two or three years,
There are several groups who are building superhuman mathematicians
and they're expecting to be at the level of say human graduate students in mathematics by the end of this year. Using those AIs, we can build designs for systems that have properties that we are very, very confident in. I think that's where real safety is going to come from. But it builds on top of AI, so we need them both. I was going to say the good thing about what you just said, even though it sounds crazy to have the inmate design its own cell,
is that without agency, at this point, it's just a drone carrying out an order. So that's the encouraging part. Whereas if it were sentient in any way or if it had some kind of agency, it could very well say, yeah, I'm also going to design a back
door and a trap door. And I'm not going to tell you. And I'm not going to tell you. Steve, first, congratulations on winning this award. You are exactly the right kind of person who
who deserve such an award that gives us hope for the future of our relationship with technology and the health and wealth and security of civilization as we go forward. So. - Thank you so much. - And I look forward to the day where an AI beats you out for this award.
Right, yeah. That's a great point. Maybe next year it'll be an AI that wins. I'm joking, by the way. Steve Almohandro, winner of this year's Future of Life Award, and deservedly so. Thank you. Thank you very much. Before we jump to our next segment, I need to acknowledge the third honoree, James Moore, who is now sadly deceased.
His paper in 1985, What is Computer Ethics, established him as a pioneering theoretician in this field. His policy vacuum concept created the guidelines to address the challenges of emerging technologies. His work profoundly influencing today's policymakers and researchers. Gone, but not forgotten.
Serve up holiday magic from Whole Foods Market. Save on organic spiral cut bone-in ham and curated cheeses. Plus, explore limited time finds and gifts for every gathering. Shop Whole Foods Market in-store or online. Terms apply. Building a portfolio with Fidelity Basket Portfolios is kind of like making a sandwich. It's as simple as picking your stocks and ETFs, sort of like your meats and other toppings, and managing it as one big juicy investment. Mmm. Mmm.
Now that's pretty good. Learn more at fidelity.com slash baskets. Investing involves risks, including risk of loss. Fidelity Brokers Services, LLC. Member NYSC SIPC. Sometimes you have to break from tradition to make something better, or in this case, a smoother spirit. Martel Blue Swift is made of French cognac, but because it's finished in bourbon barrels from America, they're not allowed to call it cognac.
The shockingly smooth taste is rich and aromatic with distinctive hints of toasted oak from the bourbon casks, making it perfect for cocktails. Martell Blue Swift, defy expectations. Enjoy our quality responsibly. So, it seems to me that for ethical principles to work at all,
They have to be everywhere at all times and capable of evolving with the technology itself. I can't foresee the ethics panel getting together from on high
declaring what is ethical and what isn't. And then everyone has to obey that for the next 10 years. You put 10 people in a room, Neil, you get 12 opinions. Right? That's the basic human nature. Then you've got to get all of these components, all of these nation states or whatever investment groups... Demographics. Demographics with their own agendas to buy into the same principles. Because on a Wednesday, the principle's not the same for them. They're going to think in a different direction.
But that doesn't even scare me. What scares me more than anything? China, Russia, and North Korea. Yeah. It's that simple. Seriously. I'm not even going to... It's just China, Russia, and North Korea. We can put out any constraints we have on ourselves. There you go. Doesn't mean anybody else is paying attention. And that's the problem. And you're herding cats.
Good luck. Yeah, that's well. That's what makes us so scary. Well yeah hurting cats with nuclear Yeah, yeah, we're hurting nuclear cats exploding nuclear cats The newest it's the newest game sweeping the internet. It's a whole other meaning to shorten your cat Autonomous systems and if it's geared to say if it's human kill it and
That's problematic. Here's another little known fact. When we signed with the Soviet Union the Nuclear Test Ban Treaty, that was progress. This was you will no longer test nuclear weapons because at the time, from the late 1950s
Into the early 1960s there several tests a day in something in some years several tests a day, right? Okay somewhere in the world, right and Which means was such great video. Okay, so we said So we said what this has to stop. All right. Yeah, what is little known fact and we write about this in accessory to war the unspoken alliance between astrophysics and the military book and
In that book, we highlight the fact that we agreed to that around the same time that computing power was good enough to calculate the results of what would be a test. So we didn't really stop testing.
Not philosophically, not morally. Was that where MAD came from? Mutually Assured Destruction? Oh, that was later. That was later. I'm not convinced, based on my read of history, that any one nation can unilaterally say, oh, we're going to just do nice things and moral and ethical things with this new technology. Right. Yes, let's say you do that, but no one else does it, then...
What difference does it make? What difference does it make? You know, you've got to play by the same rule book, but we know that's not likely to happen. I mean, what was interesting... The history of our species offers great evidence for that impossibility. But when you listen to Batya talking, there's such a strength in the points that she makes. You would hope that people will go, you know what, yeah, and the majority come online, and then these guys sit in isolation testing machines
you know, intercontinental ballistic missiles. - But the MAD concept, Mutual Assured Destruction, just think about that. That brought the United States and the Soviet Union to the table. - Yes. - Not because they thought nuclear weapons were bad,
But they realized they couldn't win. Right, and that's the problem. The war. When you can't win. That doesn't mean they weren't thinking about it. Or if they could win, they would. And it also doesn't mean that they've taken into account for what I call the Nero scenario. What's that? So what did Nero do? He fiddled while Rome burned. He burned it down. He didn't care. So what happens if you're still in a position where the danger is
is ever-present.
So just because I spent enough time hanging around military people I don't talk about I'm not talking about Hawks that you know that just what I'm just talking about people who think about the history of conflict in this world behavior of other members of our species not just one guy standing there going just smell that son That smell do you smell it? I know where that came from? Apocalypse now yeah, you speak to
the generals and the majors, you find invariably they're students of war. They've understood strategies, they understood histories, the provocations and the outcomes. And most of them are not the warmongers we stereotype them to be. Exactly, because of that knowledge, that understanding. Correct. And so I just, I don't have the confidence. I mean, I wish I was as hopeful as Batya.
I want to be that hopeful. I will aspire to be that hopeful. So I just wonder when she talks, how far ahead of a story in terms of a technology's development are they? And how far are they playing catch up
And, you know, are they being able to bake it in from the get-go or are they just trying to retro engineer what's gone wrong? It could be a new emergent philosophy where everyone knows to bake it in from the beginning. That would be a shift in our conduct and in our awareness. The kind of shift, for example, dare I harp on this yet again, that when we went to the moon to explore the moon, we looked back and discovered Earth for the first time. Yes.
Around the world people started thinking about Earth as a planet Earth as a holistic entity that has interdependent elements There's no one island distinct from the rest right anything else that's going on on this planet. There's no boundaries No boundaries we share the same air molecules water molecules and that was a firmware upgrade to our sensibilities of our relationship with nature and
And that's why to this day, people all around the world say, we've got to save Earth. Nobody was saying that before we went to the moon and looked at Earth in the sky. All the peaceniks at the time in the 1960s, they were just anti-war. They weren't, let's save the Earth. Nobody had that kind of sensibility. So maybe it's a sensibility upgrade that's waiting to happen on civilization, lest we all die at the hands of our own people.
- Yeah, I'm going with the last part. I'm just saying that you talk about Earth Day, you talk about we went to the moon and there are people who think we didn't go to the moon and that the Earth is flat. - Yeah, we're screwed. - And by the Earth Day, the first Earth Day was 1970. - Right. - While we were going to the moon. - And the irony is-- - Could have been 1960 but it wasn't. - No. - Might have been the late 1980, no. While we were going to the moon,
First Earth Day. So is the irony that we lean into AI to get it to help us create ethical and safety architecture? Help it save us from ourselves. I like that. Maybe that's the way to flip the table. Right. And that should be it. And say AI, they're bad actors among humans who are trying to use AI to get rid of humans. Now kill them. No, no, no.
Chuck! This is where they live. This is their dress. This is their daily routine.
Google knows your daily routine. We really are. Android knows what you've been Googling. It knows everything. We really are bad people. Maybe it's the good AI against, that's the future battle. Good AI versus evil AI. Evil AI. But then again, the bad AI will tell you that the good AI is the bad AI. And then the first casualty of war is always the truth.
- Ooh. - Thank you. - I don't know who authored that, but that's brilliant. - That was deep. - Yeah. - Ooh, that's deep. - And truthful. - I wish it weren't true. - Exactly. - Stop speaking the truth. Why don't you lie to us every now and then? - Like everybody else.
You got to do. You can give me a new program. All right. This has been our Future of Life installment of StarTalk Special Edition. Yeah. Yeah, I enjoyed this. Yeah, and congratulations to the award winners. They are the people that we need out there. Yes, lest we not be around to even think about that problem in the first place. All right. Gary, Chuck. Pleasure. Always good to have you. Neil deGrasse Tyson here, as always, bidding you to keep looking up.
Sometimes you have to break from tradition to make something better, or in this case, a smoother spirit. Martel Blue Swift is made of French cognac, but because it's finished in bourbon barrels from America, they're not allowed to call it cognac.
The shockingly smooth taste is rich and aromatic with distinctive hints of toasted oak from the bourbon casks, making it perfect for cocktails. Martell Blue Swift. Defy expectations. Enjoy our quality responsibly.