AI offers significant benefits, such as acting as a medical advisor, aiding in education, and helping small businesses operate more efficiently. It magnifies human capabilities, enabling people to achieve tasks they couldn't before. However, risks include job displacement, cybersecurity threats, and national security concerns if adversaries misuse the technology. AI could also lead to market manipulation and other societal challenges if not properly managed.
The U.S. is in a technological arms race with China over AI because maintaining leadership in AI development is critical for national security and global influence. China is investing heavily in AI research, and the U.S. must stay ahead to ensure that democratic values guide AI's development. Falling behind could allow authoritarian regimes to dominate the technology, posing significant risks to global stability and U.S. interests.
Senator Mark Warner is concerned that the rapid pace of AI development may outstrip the ability to implement necessary guardrails, similar to past failures in regulating telecom companies. He emphasizes the need for systemic risk management, protection against market manipulation, and safeguarding children from harmful AI applications. Warner also advocates for international guardrails, though he acknowledges authoritarian nations may not adhere to them.
Condoleezza Rice believes the U.S. must 'run hard and fast' to win the AI arms race, which she considers the most important technological competition in human history. She stresses the need for the U.S. to maintain its lead over China, leveraging its innovative private sector and ensuring infrastructure and talent are in place to support AI development. Rice also highlights the importance of avoiding strategic surprises from autocratic regimes like China.
Condoleezza Rice acknowledges that AI is already being used in education, such as through tools like ChatGPT, but warns against over-reliance on these technologies. She emphasizes the importance of depth in learning, encouraging students to go beyond surface-level answers provided by AI. Rice believes educators must adapt to AI's presence while ensuring it enhances, rather than hinders, the learning process.
Senator Mark Warner warns that AI in warfare could lead to significant risks if not properly regulated. He suggests the need for international guardrails, similar to a Geneva Convention for AI, to set standards for its use in military applications. However, he is skeptical that authoritarian nations like China, Russia, and North Korea would adhere to such agreements, making it crucial for the U.S. to maintain its technological edge.
Sam Altman believes that society, not just developers or lawmakers, should decide how AI is regulated. He emphasizes that OpenAI has a responsibility to set standards and make decisions about how its tools are used, but ultimately, broader societal input is necessary. Altman supports eventual legislation to govern AI but acknowledges that the technology is still evolving, making it challenging to determine the exact form regulation should take.
The Sunday panel highlights the difficulty of regulating AI without hindering innovation, particularly in the U.S. They discuss the need for policies that protect civil liberties and intellectual property while avoiding overregulation that could stifle technological progress. The panel also emphasizes the importance of addressing infrastructure needs, such as energy and broadband access, to support AI development and ensure equitable benefits across the country.
AI is transforming the film industry by automating tasks like special effects, post-production, and even scriptwriting, saving time and money. It enables filmmakers to create immersive experiences, such as AI-enabled scent emissions linked to audio cues. However, concerns about deepfakes, digital doubles, and the potential exploitation of performers have made AI a contentious issue. While some filmmakers embrace AI's potential, others worry it could compromise storytelling and the human experience in movies.
Sam Altman stresses the importance of the U.S. maintaining its lead in AI development to ensure that democratic values guide the technology's evolution. He highlights the need for robust infrastructure, including energy, computer chips, and data centers, to support AI innovation. Altman believes that leading in AI is crucial for the U.S. and its allies to shape the future of technology and avoid ceding control to authoritarian regimes like China.
I'm Shannon Bream. Today, an in-depth look at the rise of artificial intelligence, the promise it holds to revolutionize human life in the 21st century, and the fears it could spiral out of its creators' control.
It's the driving force behind remarkable advances in disease research, self-driving cars, and state-of-the-art military and policing tools. Artificial intelligence is quickly becoming the most consequential and controversial scientific development in modern history, as the greatest minds in the world work to understand its power and dangers. We talked to OpenAI CEO Sam Altman about his vision to optimize benefits and mitigate the risks.
Plus, inside America's race to maintain its superiority on the evolving technology as China pours billions into AI research. Former Secretary of State Condoleezza Rice discusses the high-stakes fight over the future of the tool and what it means for national security. We simply have to win what is now the most important technological arms race in maybe human history. All that, right now, as Fox News Sunday looks at the state of AI.
Hello from Fox News in Washington. Here's a quick look at some of your top headlines this morning. President-elect Trump nominating longtime ally Kash Patel to serve as the next FBI director, set to replace current head Christopher Wray, who would have to resign or be fired. Patel served in a number of roles in the first Trump administration, including deputy director of national intelligence.
Syrian rebels continuing their stunning advance into government-held territory after taking control of Aleppo, the country's largest city. The offensive marking the biggest challenge to President Bashar al-Assad's regime in years.
And a major winter storm blanketing parts of western New York and the Midwest with up to 40 inches of snow, maybe more, raising serious concerns for many ahead of the busiest travel day of the year. And adding insult to injury, an ugly brawl breaking out after Michigan's upset win over Ohio State yesterday. Tempers flaring when a Wolverine player tried to plant the Michigan school flag in the middle of Ohio State's field.
Now to the focus of our special today, the state of artificial intelligence, the powerful life-changing innovation transforming just about every part of society. Over the next hour, we'll explore how AI came to be, its unrivaled capabilities and benefits, and the very real risks it poses. In just a moment, Senator Mark Warner joins us on the debate lawmakers are having about how to create guardrails without hindering innovation. But first, we go to William La Jeunesse for a closer look at the progression of this technology and where it might go next.
Hey, Copilot. Good morning. What can I do for you? Voice or text, artificial intelligence is about to change your life, says Microsoft's Yusuf Mehdi. Draft an email to my daughter's school PTA. Dear PTA members, I hope this email finds you well. So how does it know to come up with these ideas? It runs through hundreds of millions of emails
in milliseconds, whether it's a conversation or generating images. So I'm going to say wheat fields in a valley at sunrise. AI is revolutionary like the PC, internet, cloud, and mobile phone. Billed as an electronic brain, machine learning began in the '50s.
By '97, a computer had beaten chess champion Garry Kasparov. Say hi, Aibo. Later came a robotic dog and a humanoid. My name is Watson. Watson won Jeopardy! Hey Siri. Apple introduced Siri. Alexa, play classical music. Amazon gave us Alexa.
By 2022, OpenAI's ChatGPT outperformed many PhDs. There's more information available through the internet than any human being in the history of the world has ever individually consumed. Access to that gives AI a superhuman ability to sort images or generate new ones. Is this an AI-generated image? This image did not exist prior to my asking for it. The future is not science fiction or fantasy. It's not even the future.
It's here and now. Many on Capitol Hill fear AI competitors are moving too fast and favor a new federal agency to audit systems before they go public. The control of this technology by just a handful of companies and governments...
is a huge, huge problem. Some states are stepping in to prohibit cloning voices or faces. There are a lot of deep fakes out there. There's not a lot of disclosure. There's not a lot of labeling. While the industry worries that regulation will stifle innovation, others fear the future if we don't.
Shannon. William La Jeunesse reporting. Thank you, William. Joining me now, Virginia Senator Mark Warner, chair of the Senate Intel Committee. He's backed several measures on Capitol Hill with regard to regulating AI. Senator, welcome back to Fox News Sunday. Thank you, Shannon. So there's been plenty of bipartisan support. We've seen the hearings, and yet nothing has made it across the finish line. How do you see some sort of legislation coming together, some sort of creation of guardrails?
Well, first of all, Shannon, let's look at a couple of these other topics. First, we've got national security concerns. I know you're going to have Condi Rice on later today.
This battle and race to win on AI, we've got to win. And China's our huge competitor. At this point, we are ahead. And nation states like Saudi Arabia and UAE are moving their geopolitical stance to more favor us because they want our technology. That's good. We've got to maintain that lead. Second, to power AI and the GPUs that are involved, we're going to need enormous amounts of energy. The number of data centers that will be created around the country is enormous.
And I think to power those, we're going to need to bring back advanced nuclear, small modular nuclear reactors. We're not going to be able to do it in a clean way without that.
On the regulatory front, you're right, there's been lots of smoke but not much action. And I fear a little bit that, in the race to AI, the entities are moving ahead so quickly that we may not put any guardrails in place. Think about some recent news about Salt Typhoon, where the telecom companies were moving so quickly to get speed, they didn't do much on protection, and now we've got the Chinese penetrating all of our telecom networks
in what is the worst hack in our history. We've got to put some of those same guardrails in place around AI without slowing innovation. And I would argue that comes in a couple of buckets. One,
around systemic risk, macro risk, how we manage that. There have been a couple of ideas, nothing put forward yet that I think passes muster, but that's one area. Secondly, to manage specific risk, like manipulation of markets. One of the things that we didn't see this year was AI manipulation of our elections the way we thought we might, but these same AI tools could be used to manipulate markets. I'm very concerned about that. Some states have already moved in terms of manipulation of facial images.
And so I'm hopeful we'll see some action there. And then one area we all ought to be able to agree on is protecting our kids. If we could go back 10 years ago and put some guardrails on social media, most of us, no matter where we sit on the political spectrum, would put some guardrails in place. We ought to do the same in terms of AI. We don't want our kids subjected to,
you know, non-consensual use of their faces with nude images. There are some low-hanging fruit that I think even the AI companies could move on, even without regulation. You've also said more broadly, you think there should be sort of a Geneva Convention-type agreement when it comes to the use of AI in warfare. Do you trust that the international community, the other players, would agree to something like that? What would it look like? Well, listen, I don't believe that
the authoritarian states like Russia, China and North Korea.
would ever adhere to standards. I mean, I wish we would have done that in cybersecurity. We were the nation state back in the late 90s that didn't want that to happen, and now we've seen China penetrate so many of our networks. I do think having some guardrails on an international basis would at least set some standard of care, because even though we develop the leading AI models, some entities like Meta, which have an open source model, you know, that's in effect released into the wild
and can be developed by other nation states. UAE, for example, has done that. So I do think we should have some international guardrails, recognizing at the end of the day that the authoritarian nations will probably never fully adhere to them. You have talked about
this idea of staying ahead of China on the use of AI technology. And you mentioned social media, too, because there are a lot of people within AI who think social media got away from Washington before they were able to sort of regulate some of the dangers out of it. They worry the same thing may happen here with AI. I talked to Aza Raskin, who's the co-founder of the Center for Humane Technology. By the way, he's the guy who invented infinite scrolling, and he's not happy how it's being used now in social media, kind of for doomscrolling. But he says this about
China and this race and how we may have gotten it wrong on social media. Here's his take. Here's the fundamental question. We beat China to the wide scale deployment of social media into our societies. But did that make us stronger or did it make us weaker?
Yeah. So he says we can't just be in this race about getting the technology first; we have to make sure, especially with the world of AI, that it furthers our values, our country's values, when we're in this race with China. How do we do that? How do we find that balance?
Well, I think, again, this is the question about speed. I think we are far enough ahead of China that a little bit of speed bumps will not slow down innovation. I don't want to overregulate this. Frankly, it would be almost impossible at this point. But I go back to my analogy around telecom companies. They were racing to get the fastest connection possible. We've seen recently, with the worst hack in
modern American history, Salt Typhoon, how that race to speed on telecoms has allowed China to penetrate our networks, frankly, at a level of counterespionage problems that we've never seen before. The same could be done exponentially greater
in AI if we don't build in some protections, if we don't also think about making sure these models are appropriately trained and tested before they're released. I think there's a lot of folks in the AI community that agree with that. But when you've got literally billions and billions of dollars
racing in over who will get the next model out there, this is a constant tension. And Congress's record on this is just pitiful. I mean, we have never even put the beginning of a guardrail on any social media. Finally, some states are starting to put some age limits in place. You may see states move again on AI quicker. They've already started to put prohibitions in place, for example, on deepfake technology, which I think we ought to take a look at at the federal level as well.
OK, I want to get to a couple of other questions for you since we have you as chair of the Senate Intel Committee. I wanted to ask you about the situation that's developing in Syria, where rebels or insurgents have taken Aleppo, the country's largest city. There's been a critical statement from the White House coming from the NSC spokesperson, Sean Savett, who says essentially Assad has not been cooperating with the U.N. policies and regulations that are supposed to be in place with regard to Syria.
talks about his reliance on Russia and Iran. Clearly, they are signaling they will prop him up if necessary. What are your concerns about what we're seeing in Syria when we've already got a region that has many, many other problems?
Well, Assad's a bad guy. This guy has murdered literally millions of his own people. His strongest allies are Russia, Hezbollah, Iran. And I think you're seeing the rebels now push back on Aleppo. I think you may even see movement towards other major cities in Syria. His regime could crumble. But we should be very concerned because, remember, Syria has enormous amounts of military weaponry, chemical weapons.
But this is what happens when Russia props up and Iran props up authoritarian figures like this. Well, with respect to Russia as well, they've been ramping up attacks on Ukraine's energy infrastructure. More recently, that's been aimed at these substations that link to nuclear facilities in Ukraine. What do you make of that situation? And what do you think is coming with a transition to a second Trump administration with regard to wrapping up what's gone on with Russia and Ukraine? Well,
I wish the Biden administration had sent, and allowed the Ukrainians to use, the ATACMS we gave them to target those Russian sites that were attacking Ukrainian energy sources. I think that would have made sense. Clearly, Ukraine does not have the manpower that Russia has. But remember, Ukraine, without the loss of a single American soldier, took out 87 percent of Russia's pre-existing ground
forces, their army, 63 percent of their tanks, 32 percent of their armored personnel carriers. As a guy that grew up thinking that Russia was our long-term enemy, the Ukrainians have performed magnificently. They're getting worn down. I just hope that the future Trump administration doesn't pull the rug out from under them. All right. Senator Warner, we always appreciate your time. Thank you for that, sir. Good to see you today.
Thank you, Shannon. Thank you. All right. Former Secretary of State Condoleezza Rice says there's a new technological arms race when it comes to AI. Ahead, her thoughts on what needs to be done to keep America ahead of the pack. This nondescript building in San Francisco is the incubator for artificial intelligence. Coming up, my conversation with the man who's guiding this technology that's changing the world.
Welcome back to Fox News Sunday, the state of AI. Our next guest is a leading figure in the industry. Sam Altman is one of the co-founders and current CEO of OpenAI, a research company you'll know well for its development of ChatGPT. Altman has found himself at the center of this ongoing conversation about how to make sure AI benefits humanity. I traveled to OpenAI's headquarters in San Francisco to meet with Altman and flesh out his thoughts about that responsibility.
Let's talk best case and worst case scenario, because you've been honest about the potential harm that could come from AI. What should we know? Society's been through things like this many, many times. We've been through the industrial revolution. We've been through the computer revolution. One thing that I think you can learn studying history is it's not always obvious what the pluses and minuses are going to be, but I'll tell you our current best guess. On the plus side, people are using these tools already today as AI medical advisors. You hear from people who
they couldn't diagnose some disease they had, and they had all these weird symptoms, and ChatGPT helped them. You hear from people who are using this as, like, an AI tutor. They're learning things they couldn't learn before. You hear from people who are using this to help run their small business. Really wonderful things.
And this is a tool that magnifies human ability in all these ways. I think we're at the very early ends of that and we'll see incredible things. There's hundreds of millions of people using this already. There will be billions. And like with any other tool, people will be able to do things they just couldn't do before. And that really is, I think, how the world society gets better and better.
On the downside, you know, to get right at it, I'm sure this will impact jobs. Many jobs it'll make better and more productive, but some it'll make worse, and some will go away entirely.
You can imagine cybersecurity incidents with these models where people use them to hack into systems. You can imagine our adversaries getting a hold of these and it being a national security issue. So I think we have a lot of work to do and we really need to stay in the lead. So you talk about this will enable people to do things they couldn't do, but what about things they shouldn't do? How much do you worry about that? Well, I think that's part of building the tools.
And there are all sorts of things that you could imagine people using this technology for that, for what we build, ChatGPT, we don't want it used that way. And we try very hard to make sure that you can't use it for some of the obvious negative things you could do with it. As this technology advances, we understand that people are anxious about how it could change the way we live. We are too.
You've testified on the Hill. You stay in touch with lawmakers. Do you think they're the ones to add the guardrails? Is it up to the developers? No, I think it should be a question for society. It should not be that OpenAI gets to decide on its own how ChatGPT, or how the technology in general, is used or not used. But we do have the responsibility to do the best that we can before that happens. So as society gets more experience with these tools, which will take years and years, I think it'll become easier to figure out
what the standards should be. In the meantime, we have to make some decisions.
Should our tool respond this way or that way to a certain query? Should you be allowed to use it for this thing, which could be good or could be bad? And one thing that we try to do is publish at least what our stances are, so that people can tell when it's just a bug, because the technology is still somewhat early, and when it's a decision we've made that people may disagree with, so that we can debate that and perhaps change it. You think Congress needs to legislate those guardrails, those restrictions?
I think, yes, at some point. When that will happen and what form it should take, I don't know, but any technology of this magnitude, I would expect there to be legislation about at some point. So there is a new incoming administration.
And people have said they're not sure where President-elect Trump is on some of this stuff. He's talked a lot about, and I want to talk to you about, the race against China and making sure that we stay ahead of them. In fact, you tweeted about that after his election. You said it is critically important that the U.S. maintains its lead in developing AI with democratic values. He very much wants to stay ahead of China. They're not going to have the same values in regulating AI that we will. So where do we go in this race with China? Yeah.
Infrastructure in the United States is super important. AI is a little bit different than other kinds of software in that it requires massive amounts of infrastructure, power, computer chips, data centers. And we need to build that here. And we need to be able to have the best AI infrastructure in the world to be able to lead with the technology and the capabilities. I believe President-elect Trump will be very good at that. Look forward to working with his administration on it. It does seem to us like...
This is going to be very important. It does seem like this will be one of these unusually important moments in the history of technology. And we very much believe that the United States and our allies need to lead this. You mentioned what it takes, land, electricity, water, cooling, the heat that comes from these productions. There are communities that are raising concerns about that. They are worried about the impact on their lives. First of all,
we are making enormous efficiency gains. And so this idea that, you know, you need all the energy on earth probably won't be true. One thing we hear again and again is some communities don't want data centers or chip fab facilities or new power plants. And some really do. And I think the United States is a gigantic country and there'll be plenty of room to do this. We've got incoming vice president JD Vance. He's a senator. He's been there when you've testified. He's talked about those who have
raised huge concerns, worries that AI is going to kill everybody and take over the human race and replace us. He says they're asking for regulations that would entrench the tech incumbents that we actually have and make it actually harder for new entrants to be able to create the innovation that's going to power the next generation of American growth. So he seems maybe in conflict with some of what we've heard from President-elect Trump about
regulating, not regulating, if regulation ends up benefiting the big companies that are already a part of this game? First of all, we were the little up-and-comer very recently. So I think it's very important to the American innovation economy and our position in the world that we allow our innovation
companies to do what they do. I think one of the most special things about this country is our ability to repeatedly lead the way on innovation and repeatedly figure out the future of innovation.
of science, of progress, and benefit from the enormous growth that happens with that. And we can look at some of our friends around the world and see very clear evidence of how bad it is if you stop having that. So we really need that. We really, as a country, don't want to do anything to impede our smaller companies or make it more difficult for them. And we clearly have had regulatory overreach there as a country. But
But I think the big companies can handle it a little bit better. And if we're right that the systems are as powerful as we think they're going to be, then I think most Americans will say, yeah, you know what, some oversight on that is a good idea. Where are we on the spectrum of getting to where AI is making its own autonomous decisions? As its capabilities go up, maybe right now you can give it a five-second task without supervision.
And eventually you give it a five-minute task, and then a five-hour task, and then a five-day task. And maybe someday it can go do a five-month task, and that's like a full scientist off exploring something. But I think it'll feel more like that. It'll feel more like an increasingly senior coworker, not
one moment it was not autonomous and then the next moment it was. Yeah, because I think a lot of people who don't understand AI, and I would put myself in that category, I've got a basic understanding, but they worry about AI becoming sentient, about it making autonomous decisions, about it telling humans you're no longer in charge. That doesn't seem to me to be kind of where things are heading. I think whether it is conscious or not will not be the right question; it will be
how complex of a task can it do on its own? What about when the tool gets smarter than we are? Or the tool decides to take over? Because you seem very calm and chill about the future and optimistic that we are going to be able to handle this responsibly. Tools, in many senses, are already smarter than we are. I think that the internet is smarter than you or I. The internet knows a lot of things. In fact, society itself is obviously vastly smarter and more capable than any one person. So I think we're already good at
working with tools, institutions, structures, whatever you want to call it, that are vastly more capable than one person. And as long as we have a reasonably level playing field, where it's not like one person or one company gets vastly more powerful than everybody else, I think we know how to deal with that. Did your brain always work like this? Did you ever feel like, I think about the world and I think about the possibilities in maybe a different way than other kids in my third grade class?
I grew up in St. Louis in the Midwest in a time when technology was not such a thing. And we did have some computers in my third grade class, but most kids didn't like them that much. And I thought they were super cool. Yeah. I mean, I was like a nerdy, shy kid that probably liked sci-fi much more than the average kid did. But I never in any realistic sense thought any of this would happen to me and feel very grateful. What do you think...
will be kind of your legacy? I will hopefully not think about that for a long time. You know, I think there are all these deep philosophical questions, and right now I just kind of work as hard as I can each day until I get tired and then collapse into bed, and thinking about a legacy feels impossibly far off. Still, I view
sort of human progress as this one long exponential curve, where we all get to build on the work people have done before us, and the people that come after us get to build further on the work that we've done. And I think our legacy is that, at OpenAI, we got to put in one pretty important layer of scaffolding.
And that's like a tremendous honor. And what excites me the most, to the degree there's a legacy here for us at all, is the things that people can do with this new tool that we helped discover, I think will astonish us. Well, thank you for taking a break from all of that for us. Thank you very much.
Up next, a very enlightening conversation with another one of the key voices in the world of artificial intelligence. I sat down with former Secretary of State Condoleezza Rice, currently the director of Stanford's Hoover Institution, to discuss AI's impact on national security and the classroom. It's going to be a challenge to our norms. It's going to be a challenge to our processes. But we can't go back. We have to recognize that our students are going to be using it.
It's here.
Welcome back to Fox News Sunday, the state of AI. Well, the growing technology has major implications for democracy, national security, and education. Former Secretary of State Condoleezza Rice has been tracking the rapid advancements. Rice is the current director of the Hoover Institution, a think tank located on the campus of Stanford University. She also co-chairs the Stanford Emerging Technology Review. We sat down with her there in Palo Alto, California, to get her view from the forefront.
I'm here at Stanford University, and this is really the kind of epicenter of this technological revolution, universities like this, the private sector that is here. And I think that is the point, which is that America remains the most creative and innovative country on the face of the earth, largely because of the activities of the private sector. But even the people who are at the frontiers of AI don't know where it's going,
and they would be the first to tell you that there are discoveries that in fact shocked them.
When you think about the fact that if we were sitting here two years ago, we wouldn't have been talking about generative AI. And so it's an exciting new world out there. It is. And everybody's talking about where the guardrails come in. And from the issue of national security as well, there's a lot of open source on AI, and so China, among others, is leveraging that. How much do you think about this in terms of a national security issue? My answer to the national security issue is to run hard and fast.
We simply have to win what is now the most important technological arms race in, I think, maybe human history, given what AI could do. With China, if you look at how they handled COVID, simply covering things up,
I don't want any strategic surprises out of an autocracy. And so while I understand that people are concerned about guardrails, what should we not want AI to do? My prescription is let's just run really fast and really hard. There seems to be a lot of bipartisan conversation about what to do and where to go. Nothing's really moved ahead legislatively.
But do you think at some point it's going to be up to Washington to work with these private entities and figure out where we go? You don't want to start regulating and making laws about something that you don't understand. And so the conversation between the creators, the private sector that is really the driving force behind these technological breakthroughs,
It really does have to be a conversation. And right now, they're beginning to speak the same language, but they're not quite there yet. Many of these same people believe that if they succeed in building computers that are as smart as humans or perhaps far smarter than humans, that technology will be at a minimum extraordinarily disruptive and at a maximum could lead to literal human extinction.
I think a lot of people have this misconception that it's a bunch of evil robots that will become sentient, that will take over, that are going to destroy humanity. I think people tend always to go to the dark side of what are the downsides, what are the dangers. I understand the killer robot. I have a friend who works kind of in this area. And his view is, well, you know, you don't have to give the AI a body.
You don't have to put it in a robot, and then the robot becomes somehow generally intelligent. So I know that the thing that gives me comfort is that a lot of the people that you talk to who are at the leading edge, at the frontiers, want to be responsible with what they're doing. They don't want to threaten humanity or threaten society. And so I think this is an open field for conversation for what good regulation might look like.
What role do you think there is for educational institutions, for the government in development? Right now, the government, I think, would be best to worry about some of the things like the big infrastructure questions. What are we going to do about a grid that really can't handle everything that we need? You know, Microsoft has famously now gone and bought Three Mile Island because they need the power.
Is that really the answer that we want? What are we going to do if the power generation is taking place, for instance, in the Middle East, where countries are making a bid to be the supplier? So I would say to government, worry about the infrastructure. Worry about making sure that we have the talent.
to do this. The people who want to come here from all over the world, the smartest people with engineering degrees who want to be a part of this, we have to recruit that best talent. And then universities are places of unfettered discovery in the sciences and engineering. And so you say to universities what the government has done for many, many years: we're going to fund fundamental research.
We don't know where it's going. We don't know if it's commercializable. But those labs at places like Stanford or MIT or the University of Texas are places where people just get up and they ask interesting questions and they try to answer them. I think our great fortune is that we have great strengths all
across the innovation ecosystem. We have distributed innovation. One of the things that I think the Chinese may do badly is that they may try to control everything from the center. And one thing that we have going for us is that somebody someplace in a garage may actually discover something that's quite remarkable. And I find so many people who are doing the work here in the U.S.,
They're doing it for joy and love of these discoveries and the fascination they have with this technology and where it could potentially take us. But in that, too, a lot of this, as we discussed earlier, is open source. They want other people to be able to benefit from these discoveries. But sometimes those aren't people that share our values. Well, it's very hard to really contain knowledge. You can contain, as we're trying to do, the chip,
the Nvidia chip that makes it possible to do certain kinds of generative AI. That you can do for a while. But a society or country like China, which has as much engineering talent as it has, will also figure out that problem. Eventually, people are going to figure it out, and so you just have to keep ahead of them. You're here at Stanford, a long history of leadership here.
It's an academic setting, and there have been a lot of conversations about AI in that setting, with term papers, with research, with students, with professors. How do you manage that part of the equation? We are having such an interesting but difficult conversation about what to do about AI in
learning and production. And so everybody's using ChatGPT now for kind of basic functions. But I would like to convince my students
that using a chatbot for learning has its downsides because did you ever actually really learn the material? Can you go one inch deeper than what that essay says? We've had this problem for some time, Shannon, so some of my students think if they've Googled it, they've researched it.
And I constantly say to them, you know, there's this thing called depth in anything. And so don't just take the first answer. Let's really try to think this through. So it's going to be a challenge to our norms. It's going to be a challenge to our processes. But we can't go back. We have to recognize that our students are going to be using it. We just have to figure out how it doesn't abbreviate things.
in ways that are harmful to the learning process. You seem to be generally optimistic and hopeful about artificial intelligence. What's your feeling moving forward? I'm generally an optimist, always, but I'm particularly optimistic about the technological frontiers. I recognize the downsides. For instance, one of the things people worry a lot about is kind of AI deepfakes.
But I know how hard the AI scientists, the engineers are working to try to deal with ways to show that it's a deepfake. So they're not unaware of the downsides. They don't want to be on the wrong side of history. Well, we know a lot of that innovation comes through right here at Stanford. So thank you for making time for us. Appreciate it. Great pleasure.
Now, Rice also told me she thinks the European Union went too far with its stringent AI regulations, and she notes that not much innovation is happening there because of it. Ahead, our Sunday panel on reports President-elect Trump may name an AI czar to manage all of these issues, and questions about the influence of Elon Musk and Vivek Ramaswamy. Next.
Even though we develop the leading AI models, some entities like Meta, which have an open source model, you know, that's in effect released into the wild and can be developed by other nation states. UAE, for example, has done that. So I do think we should have some international guardrails, recognizing at the end of the day that the authoritarian nations will probably never fully adhere to them.
That was Senator Mark Warner earlier in the show talking about international guardrails when it comes to the use of AI in warfare. Time now to talk about all of it with our Sunday group: Forbes contributing writer Richard Fowler and Katie Pavlich, editor at townhall.com. Welcome to you both, as we debated Thanksgiving sides versus turkey in the commercial. We've got some other important stuff to talk about, too, though. We've got this issue of regulating AI. And how do you do that without, you know, stifling innovation?
You know, California just got a bill all the way to Governor Newsom's desk. He said that he wasn't going to sign it because he worried it would burden the ability to innovate. And so he didn't sign it; he vetoed it. Well, I've learned so much this morning just by watching your show. So thank you, Shannon, for diving into all of the different aspects of this. Of course, the national security side of it, the energy side. But in terms of where we are in Washington, D.C., regulation is a word that keeps coming up. And politicians claim that they want to regulate to protect the American people, to protect
private property, intellectual property. And so that's what they're looking at doing. But the big question is, how do they do that without violating civil liberties, without also stifling innovation? And we still do have this thing called the Constitution. And in our haste to keep up
with AI and how it works and all these companies doing all these amazing things, we still have to keep track of what kind of policies may come out of D.C. in terms of violating the First Amendment, or violating protections against search and seizure of private data that companies may be taking without actually telling people who are on these programs or using AI. So I would say
There's a lot that can be done. There are lots of different aspects. And then finally, on the energy side of this, D.C. has to deregulate a lot when it comes to the future of AI. Nuclear, you're hearing from all these experts, is the way forward, these mini nuclear reactors, because that's the only way you can actually have AI develop in the way that it has to to get ahead of countries like China. So in this town, they're going to have to take a look at the nuclear regulations that have limited innovation on that side of the power structure in this country and, really,
around the world, if they're going to actually keep up with what the energy needs are. Yeah, and one of the ways that may be handled is a potential AI czar; we're told President-elect Trump is considering choosing someone. They would focus on public and private resources aimed at keeping America first here, but Axios reports this too: the person would also work with DOGE, the Department of Government Efficiency, to use AI to root out waste, fraud,
and abuse, including entitlement fraud. Now, we're told it reportedly would not be Elon Musk, but clearly he's going to have a lot of influence over President Trump in this area. He and Sam Altman started OpenAI together. They have now had a breaking apart, a difference of opinion. There's some legal action from Musk aimed at the company. But he's got the president's ear. The president has been generally against regulation, but now they've got to figure out what to do with this specific technology.
Well, let's say this. I think a difference of opinion is a nice way to put where Elon Musk and Mr. Altman sort of are in this particular one. But I do think that Katie is right around how Congress has to view artificial intelligence and regulating it. And I say that by also saying that Congress has sort of missed the boat by a couple of decades on regulating technology, period.
Right. And regulating how technology companies show up in this space, show up in our lives, show up in the capitalist marketplace. And so I think regulating AI for this Congress is going to be very difficult in a world in which it's abdicated its responsibility for so long. But as they think about regulating AI, or better yet, passing any policy around AI,
it's also important to think about all the other inputs that get you to where we are. One, Katie talked about the idea of how you deal with an energy grid that, at this point in time, we can argue is stressed, might be close to failing. But beyond that, it's also how you deal with
diversity, equity, and inclusion, and not in the way you think about it, but diversity, equity, and inclusion as it comes to access. There are so many parts of the country, whether it be West Virginia, whether it be Arkansas, whether it be Mississippi, where there are American citizens, American residents, that don't have access to broadband. So when you expand AI, in parts of the country like here in Washington, D.C., where we have access to broadband, we are able to use AI in education, to use AI in technology, to use AI as part of the government, which we've done for many years, especially at the IRS.
And then there's other parts of the country that don't have access to broadband. So if you don't have access to broadband or the Internet, how do you then use AI? And is there an advantage given to those places where you have access? And I think that's the question that Congress has to tackle. How do you fund research? How do you fund investment? And at the same time, how do you create some common sense rules of the road for a technology that seems to be moving
pretty quickly? Yeah, and I will note that, along with the things that we talk about, people being worried about it overtaking the human race, it's been a huge tool for a lot of medical researchers. I mean, in scanning radiology, you know, scans, looking for cancer, that kind of thing, early treatment for stroke, being able to diagnose those things. The federal government is using it to find waste, fraud, and white-collar crime. So there are positives. It's this question of how to manage it. So while we're talking about potential Trump nominations,
Overnight, we've gotten word that he has chosen Kash Patel to lead the FBI. Now, that means Christopher Wray either resigns or is fired. But former acting FBI director Andrew McCabe has this to say about the Patel choice.
It's a terrible development for the men and women of the FBI and also for the nation that depends on a highly functioning, professional, independent Federal Bureau of Investigation. The fact that Kash Patel is profoundly unqualified for this job is not even like a matter for debate.
Well, he'll have to talk about that as he goes through a Senate confirmation process. First, let's talk about former acting FBI Director Andrew McCabe. He was fired for leaking information to the media. The inspector general of the Department of Justice referred him
for criminal prosecution. He is one of the top people who has destroyed the reputation of the FBI over the past 10 years. There's been no accountability for him. And he calls the current nominee unqualified. Well, let's take a look at Kash Patel's qualifications. He was the deputy director of national intelligence, overseeing 17
intelligence agencies. He was in the Department of Justice under Barack Obama, prosecuting ISIS. During the Trump administration, the first time around, he was in charge of taking out ISIS cells in the Middle East, which the administration did successfully. So he has the foreign policy down. On the domestic side, he's the one who wrote that memo exposing the fact that the FBI was illegally
wiretapping, issuing FISA warrants against American citizens who worked for a political campaign, the first Trump campaign. He has gone through and exposed all of the civil liberty violations the FBI has engaged in with, again, no accountability over the past, really, 10 years.
And this started at the IRS under Barack Obama, by the way. It wasn't just in the FBI. So people like Andrew McCabe are trying to claim this is some kind of retribution. They're conflating retribution with accountability. And Kash Patel not only has the experience on foreign policy, but also on domestic policy as well, with all of his work rooting out this corruption used for political purposes against political enemies inside the intelligence communities and on Capitol Hill. Richard, I sense a different take coming from you over here.
Yes and no. Look, let's be very clear. I think for decades we've seen problems with the FBI, whether we go back to them wiretapping Martin Luther King, or the fact that we've seen them wiretap American citizens today, whether it be the cases that Katie mentioned or folks who were protesting for Black Lives Matter. Right. We've seen the FBI engage in behavior that most Americans, no matter what side of the political aisle you sit on, raise an eyebrow at. Now, the question becomes, is Kash Patel the right person to clean this up, in a world in which we've heard him say things
like the 2020 election results weren't true, et cetera, et cetera, et cetera. Once again, it's not my job or Katie's job to make this determination. This will be the job of the United States Senate. Understanding that for many, many years, the FBI director's term has been 10 years, because the whole deal is the office of the FBI director should be something that's apolitical. And the question now,
with the past two directors both appointed by Trump, we'll have many Americans questioning, many senators questioning: is the FBI apolitical, or isn't it? And that's the question the Senate will have to weigh. Certainly not apolitical. Well, and listen, President Trump chose Christopher Wray, but now he's going to have to go if Kash Patel is going to take this job. He is, what, seven years into the ten-year term that you would traditionally see. Do you guys think Wray resigns or Trump fires him? I believe Wray has resigned
already, said that he is going to be leaving at the end of the Biden administration. So if he does, Trump will be able to potentially get his guy. If he does not leave, maybe President Trump can fire him and then appoint Kash Patel as the acting director for 200 days. Then they'll have to find someone else. Well, there are many Senate battles to come, and we'll all be here for it, front and center. All right. Thank you, guys. We'll see you next Sunday. Lights, camera, innovation: how artificial intelligence is writing itself into the Hollywood story. Next.
I really look forward to that future where someone, you know, doesn't have to move to L.A. and meet a producer and get the budget and get their film made, something that takes years, right? I think because of this technology, some kids from Indiana will be able to tell their story easier. Well, AI not only looks like it's right out of the movies, it's also influencing how films are written, produced, and released. Claudia Cowan reports on its impact on the big screen.
From whiz-bang special effects to helping studios optimize movie release dates, artificial intelligence is disrupting the film industry. It's putting more time and money back in the hands of the creatives by automating a lot of the stuff that is very, very much part of the craft and the manufacturing aspect of production and post-production. Audiences have marveled at action heroes aging backwards in the new Indiana Jones movie and The Irishman. And soon... How cool.
AI-enabled scent emissions linked to audio cues promise to make movies more immersive. Oh, that's the ocean. I smell the ocean. But real-world concerns about deepfakes and digital doubles used without consent have turned AI into a bombshell issue. Shut it! Woo!
The possibility for exploitation exists both for members performing with their voice and with their movement, or in some cases all three: voice, face, and movement. Both sides made progress toward addressing the use of generative AI on screen. But if scripts are one day written by large language models, Oscar-nominated screenwriter Billy Ray argues, movies will suffer. What you're going to find as a consumer is that everything will get worse.
and the kinds of stories we tell will be limited, and the human experience will be compromised. And that's what movies are for. Some filmmakers are launching their own AI models. Visual effects expert Nikola Todorovic co-founded Wonder Dynamics, now part of Autodesk, whose software can benefit
filmmakers with no funding or studio connections. Some kids from Indiana will be able to tell their story easier. So I do think long term we'll see better effects of it and more opportunities. Anyone with a browser now has access to AI generated CG characters for a fraction of the time and money it would normally take on a big budget film.
And here's something else to consider. AI today is the worst it will ever be, leaving many eager to embrace the potential blockbuster benefits. Others worried that without sufficient guardrails, AI could lead to a tragic Hollywood ending.
Shannon. All right. Thank you to Claudia Cowan reporting there. By the way, if you haven't seen it yet, be sure to check out Fox Nation's hit series, Martin Scorsese Presents the Saints. The third episode out today focuses on the life of Sebastian, who worked to convert Roman elites to Christianity and became a martyr for his faith. Scorsese explains how he drew inspiration for the series from the Saints' lives. I think it started with people just telling stories of men and women who did
extraordinary things, who were extraordinary people who stood up to injustice and cruelty and risked their lives to help other people.
Martin Scorsese presents The Saints is available on Fox Nation. There are new episodes every Sunday through December 8th. And by the way, next week, you don't want to miss Fox News Sunday. We will be doing our state of defense special. We'll discuss the top military and defense issues facing the country and the world as a new Trump administration prepares to take over. I'll be reporting from the Reagan National Defense Forum in Simi Valley, California.
That is it for us today. Thank you for joining us. I'm Shannon Bream. Have a blessed week. We'll see you next Fox News Sunday.