
How Much AI Regulation Is The Right Amount?

2024/6/13

FiveThirtyEight Politics

People
Galen Druk
Gregory Allen
Topics
Galen Druk: This episode discusses congressional regulation of artificial intelligence, including the Senate's proposed $32 billion plan for AI research funding and concerns about deepfakes and data privacy. Critics argue the plan is too vague, fails to spur quick action, and sidesteps thorny issues like copyright law. Lawmakers counter that some vagueness is key to keeping the effort bipartisan and that they do not want to stifle innovation. Druk also discusses public concern about AI and its potential impact on the upcoming election. One poll found that 78% of Americans believe there should be government regulation of using public data to train AI models, with little difference between the parties. Although AI is not a top issue for voters in the upcoming election, lawmakers may still feel pressure from their constituents.

Gregory Allen: Allen notes that the EU has already passed the EU AI Act and argues that the United States also needs to regulate AI, because AI is a general-purpose technology with broad applications. Existing sector-specific regulation still applies, but AI as a general-purpose technology also calls for horizontal, cross-industry regulation. Allen says Congress is most focused on AI-generated synthetic media (deepfakes), especially their use in election interference. Deepfakes can produce high-quality video forgeries that experts can detect but ordinary viewers may not. Congress is also concerned with areas such as autonomous vehicles and privacy. Allen believes the United States is more inclined to rely on existing legislation and agencies to regulate AI, while the EU has been more aggressive about creating new institutions. He thinks the legislation most likely to pass Congress this year concerns deepfakes and election deception, while copyright and privacy legislation will likely wait until 2025 or later. He also discusses AI's potential impact on the election, arguing it is too early to draw conclusions but that steps should be taken to reduce the risk.

Transcript


Ryan Reynolds here for, I guess, my 100th Mint commercial. No, no, no, no, no, no, no, no, no. I mean, honestly, when I started this, I thought I'd only have to do like four of these. I mean, it's unlimited premium wireless for $15 a month. How are there still people paying two or three times that much? I'm sorry, I shouldn't be victim blaming here. Give it a try at mintmobile.com slash save whenever you're ready.

Just because something bad hasn't happened yet doesn't mean something couldn't happen, right? If the year before the Three Mile Island nuclear disaster, you said, there's never been a nuclear safety disaster, that means we'll always be safe, right? You'd be an idiot. ♪

Hello and welcome to the FiveThirtyEight Politics Podcast. I'm Galen Druk. We're all familiar with the scene at this point. Congress hauls in tech industry bigwigs to question them on the harms of social media or virtues of net neutrality or even First Amendment rights and national security, only to make headlines themselves for their own limited understanding of modern technology.

Lawmakers have often taken heat for being behind the curve when it comes to grappling with big tech.

Now enter artificial intelligence. Last month, a bipartisan group of senators released a roadmap for AI policy after spending nearly a year convening industry leaders, academics, and members of civil society. The top lines from the report include a call for $32 billion in annual spending to support research in AI. It also recommends legislation preventing the use of deepfakes in election campaigns and passing a federal data privacy law, among other things.

Notably, the group did not recommend one large AI bill, instead leaving the legislation to be worked on piecemeal by congressional committees.

Critics say that the report is too vague, won't spur quick enough action, and avoids thorny issues like the future of copyright law. The lawmakers who worked on the roadmap argue that some of the vagueness was necessary to keep the project bipartisan and that they don't want to quash innovation. So, is Congress doing enough to keep up with AI and what will the future of regulation look like?

Here with me today to discuss it all is Gregory Allen, director of the Wadhwani Center for AI and Advanced Technologies and the host of the AI Policy Podcast. Gregory, welcome. Hey, thanks very much for having me on. Let's just start with the broad areas of concern where lawmakers might consider regulation altogether, putting the roadmap itself aside for a second. What are the kinds of things

that a government, either the US government or the EU or anyone else, might consider regulating today?

Sure. And I think in the case of the EU, it's not a question of considering whether to regulate. They have already passed the EU AI Act. So there's going to be AI regulation in the European Union. And even in the United States, AI is a general purpose technology. That means you can use it in autonomous cars. You can use it in AI-driven algorithms to make financial trades on the stock market. And you can use it in human resources software and a bunch of other stuff.

Well, everything that I just said is already a regulated sector, and it's regulated whether or not you're using AI. So if you're using AI, then those regulations still apply to you, and there might be some unique implications. So the question is not really whether or not we or the EU is going to regulate AI. We're going to regulate AI, and we already are, in fact, regulating AI.

But now there has been this theory that AI as a general purpose technology deserves not just vertical sector specific regulation, like most of the regulation that exists across the U.S. government, but in fact, horizontal regulation, stuff that applies to AI across many different industries, across many different use cases, whether in government or the private sector.

And some of the use cases of AI as a sort of general purpose technology, where they're saying maybe we should be regulating the technology itself and not just the use cases, get to a few thorny issues.

The first, I would say, and the one that probably has the most momentum in Congress, relates to the use of AI to generate synthetic media, or what most folks have probably already heard of, deepfakes. And deepfakes are interesting because while it has always been possible for somebody to type out a memo and claim that this memo was written by somebody who never wrote it or never saw it,

Now you can do that with audio. Now you can do that with video. And by impersonating someone, you can generate pretty high quality video forgeries. And even though

they are detectable to experts, they might not be detectable to just everyday people who are watching these videos. They're that good. And so the first area where they're really interested in deepfakes is around election interference. This is something that is recommended by the roadmap. And now the Senate Committee on Rules, led by Senator Amy Klobuchar, has already introduced multiple pieces of legislation looking to tackle the use of deepfakes in election imagery.

Other areas that they're interested in are some of the usual suspects, like autonomous cars, privacy, and other areas where people are familiar with these debates, even predating AI.

You mentioned that this AI regulation is somewhat different in the sense that it's not sector specific. It's just technology specific. Are there other examples where we've done this in the past where we say, OK, this one type of technology or this one area of commerce or whatever it may be is unique enough that it needs to be regulated sort of all on its own? You can point to certain examples that look like that, but they're all sort of imperfect, right?

So, for example, nuclear technology is regulated, but the number of nuclear material use cases is really small. It's weapons, energy, and a handful of use cases in medicine. And other than that, nobody really cares about nuclear technology. So even though you're regulating the technology horizontally, what that actually means in terms of the number of covered market verticals is pretty modest.

Other areas would be something like electricity. So we are familiar with the fact that there are electrical regulations related to fire safety or other types of health damage. And those apply whether you're talking about the electrical grid or you're talking about consumer product safety type implications. But the actual implementation of that is usually different agencies doing different stuff.

All right, so let's dive into the conflict here. There are lots of differing opinions on how we should go about this. And actually, some of them are pretty evident from the roadmap that the Senate put out, the White House's executive action, and then also this shadow report from advocacy groups and civil society groups

that says the Senate report has not gone far enough. So I just want to give a couple examples. It seems like the Senate report really focuses on innovation, investing in AI, making sure that we're supporting it both in the private sector and in defense,

It addresses things like the impact on workforce or transparency and privacy, things like that. But those seem to come almost secondarily. Like the big priority here seems to be the EU went too far. We don't want to sort of hinder innovation in America. And so we want to make sure that we have the best AI technology out there. The White House, in their executive

action, seems to be focusing a lot on sort of safety and security, privacy, you know, supporting workers, ensuring that the government uses it responsibly. You get to the shadow report, which comes from more advocacy-oriented groups, and listed among their priorities are racial justice and equality, immigration, labor. They talk about climate change, poverty. So it's kind of like many of the

political concerns of more leftist groups may be mapped onto AI regulation and policy. Who does it seem like is going to win in this debate at this moment? Yeah, I would say in the United States, the innovation community definitely has real momentum in both Congress and the White House. The United States has

overall leadership in frontier AI research. And that's really valuable to the US economy. That's really valuable to US society. We led in internet technology and that worked out great for the US economy. And so I think most folks look at AI and say, okay, this is the next frontier. Whatever is going to be the next equivalent of the iPhone in terms of transforming the overall tech industry, we would like that to be made in America and we would like that to be led by America.

Now, where that comes down to in terms of all these various issues around bias, just take one example, bias in hiring decisions. So the Civil Rights Act, which created the Equal Employment Opportunity Commission back in the 70s,

prohibits racial discrimination as the basis for a hiring decision, a firing decision, a promotion decision, a demotion decision, or a bonus decision. I mean, it regulates all of those outcomes and says that they cannot be justified by racial discrimination. Well, that regulation applies whether or not you're using AI.

And if you go to a judge and you say, oh, I wasn't racist. I was trusting this AI product that I bought to not be racist. The judge does not care at all. The law is very clear. The discrimination is illegal regardless of the source of that discrimination. And I think a lot of the existing law looks like this. Think about, for example, critical infrastructure. So, you know, I've been talking about AI safety recently.

Well, one of the things that makes AI really tough is that the ways in which AI software can break...

are really weird and unusual compared to the ways that traditional computer software breaks, traditional software being all rules-based, if-then-statement-type software. Well, we have like decades of experience on what it's like to build ultra-high-reliability traditional software systems. If you want to, you know, use software to, for example, land a rover on Mars, well, you better know, you know, how to make sure that that thing is only going to fail one out of every 10 million times, right?
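To put that reliability figure in perspective, here is a minimal back-of-the-envelope sketch, added as an illustration rather than something from the episode. It assumes each invocation fails independently at the quoted rate of one in 10 million and shows how the chance of seeing at least one failure grows with the number of invocations.

```python
# Illustrative arithmetic only: how a per-use failure rate of 1 in 10 million
# translates into the chance of at least one failure across many invocations,
# assuming failures are independent.

FAILURE_RATE = 1e-7  # "fails one out of every 10 million times"

def prob_at_least_one_failure(invocations: int, p: float = FAILURE_RATE) -> float:
    """P(at least one failure) = 1 - (1 - p)**n, assuming independence."""
    return 1 - (1 - p) ** invocations

for n in (1_000, 1_000_000, 100_000_000):
    print(f"{n:>11,} invocations -> {prob_at_least_one_failure(n):.4%} chance of at least one failure")
```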

But machine learning software is different and it breaks in different ways. So if you're thinking about using AI in a nuclear power plant, for example, well, how can you promise that there's not going to be some kind of safety disaster? And the same goes for really any kind of critical infrastructure.

But my point is that the existing regulations around what constitutes negligence, what constitutes an irresponsible use of an unproven technology, those types of standards apply whether or not it's AI. And so there is sort of this existing base of regulation. And so I think that's part of the reason that folks are so focused on

transparency into the algorithms that are being used, so that you can actually know how a hiring software or whatever is being instructed. Well, I think transparency is the dream, but it is not necessarily an achievable dream. And I mean that in raw technical terms. So what you can tell a company

to do is to tell you, you know, what was the data set that this AI system was trained upon? What is the algorithm that you applied to that training data set? And then you can say, run a bunch of tests and tell me what the results of the tests are.

But that doesn't actually give you necessarily the degree of transparency that we have come to expect with traditional software, right? If you want to ask a bank, tell me what criteria you used to make a loan decision, they will show you an equation, right? And that equation will probably, you know, have maybe 10 variables in it. So we're talking like a high school algebra student could probably look at that equation and make sense of it.
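As a hypothetical illustration of that kind of human-readable model, a traditional loan-scoring rule can be written out so that every input and weight is visible. The variable names, weights, and threshold below are invented for this sketch; they are not from the episode or any real lender.

```python
# Hypothetical, simplified loan-scoring rule of the sort a bank could disclose.
# Every input and weight is explicit, so a reviewer can trace exactly how a
# decision was reached. All names and numbers here are invented for illustration.

WEIGHTS = {
    "credit_score":       0.40,   # all inputs normalized to a 0-1 scale
    "debt_to_income":    -0.30,
    "years_employed":     0.15,
    "prior_defaults":    -0.35,
    "down_payment_ratio": 0.20,
}
THRESHOLD = 0.25

def approve_loan(applicant: dict) -> bool:
    score = sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return score >= THRESHOLD

print(approve_loan({
    "credit_score": 0.82, "debt_to_income": 0.30,
    "years_employed": 0.50, "prior_defaults": 0.00,
    "down_payment_ratio": 0.20,
}))  # the entire decision rule fits on one screen
```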

You know, these large language AI models might have like 100 billion parameters. So even if the company gives you everything, the degree of transparency that is currently technically feasible, that is, you know, available to the smartest, most well-resourced companies in the world, is less than what's currently possible with traditional software. And while there's been like some basic research progress in this area, it's like not

solved at all on a basic research level. And so that's why, as you said, transparency is a really big focus of all of this, but it's more transparency on test-results-type data: what are the inputs that went into the creation of this system? Because that's what you can actually get. Yeah. And so for people who think that this is a problem, and it seems like even the Senate roadmap, which sort of heavily skews maybe towards the innovation side of the equation,

still talks about explainability and transparency. Yeah. I mean, it's interesting that you say that, but I'll just tell you, for example, the Biden administration will describe the executive order as the maximum amount of, you know, action that they could take in terms of, you know, promoting safety, promoting fairness, preventing discrimination. They would say it was the maximum amount of action that they found themselves with the legal authorities to take.

Right. So, you know, you're sort of describing the situation as... Well, I guess, of course, Congress, which has legislative authority, can do all kinds of things that the president can't. They can go farther. Yeah. You know, that's just how our government works. And so they can do all kinds of things in terms of, you know, requiring transparency or explainability or requiring labeling or banning certain uses for AI or whatnot. Of course, the president can't do that. But I mean, to that point.

For people who think that explainability and transparency are a real challenge when it comes to AI, regulating AI, understanding its impacts on society, and whether or not they're like deleterious or whether they're beneficial or whatnot, like how do you address that if you wanted to?

Yeah. So I think transparency is addressable with the limitations that I just said. You can have transparency in terms of requiring the companies to give you the types of stuff that they have in order to understand what these systems are, how they will work, and under what conditions they will do that.

So that type of transparency can be required and, in fact, already is being required by the executive order. So for the AI systems that are not this generation of AI systems but the next generation of AI systems, they define the limit based on a computing

performance threshold, which the next generation of systems is expected to hit. The executive order is using the Defense Production Act, of all things, to compel the companies to produce transparency reports around, you know, all sorts of relevant data around the performance of these systems, both in terms of, you know, safety and

rights protection. But that's for the large language models, these really beefy systems that cost hundreds of millions of dollars to make on big supercomputers. We're talking like the stuff that was used to make ChatGPT. There's this whole other universe of AI applications, which might be considerably smaller and might still be used in safety-critical or human-rights-critical applications.
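For a feel of how a compute-based reporting trigger like that works, here is a rough sketch, added as an illustration rather than taken from the episode or the order's text. It uses the common 6 × parameters × training-tokens approximation for training compute and the 10^26-operation cutoff widely reported for the executive order; the example model sizes are invented.

```python
# Rough sketch of a compute-based reporting trigger. The 6 * N * D rule of
# thumb for training operations, the example model sizes, and the exact
# cutoff used here are illustrative assumptions, not regulatory text.

REPORTING_THRESHOLD_OPS = 1e26  # order of magnitude widely reported for the executive order

def estimated_training_ops(params: float, training_tokens: float) -> float:
    """Rough estimate: ~6 operations per parameter per training token."""
    return 6 * params * training_tokens

example_models = {
    "mid-size model (70B params, 2T tokens)":         (70e9, 2e12),
    "frontier-scale model (1.8T params, 15T tokens)": (1.8e12, 15e12),
}

for name, (n_params, n_tokens) in example_models.items():
    ops = estimated_training_ops(n_params, n_tokens)
    status = "reporting required" if ops >= REPORTING_THRESHOLD_OPS else "below threshold"
    print(f"{name}: ~{ops:.1e} ops -> {status}")
```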

That's where the European Union, for example, went quite a bit farther. What they did is they created what they're calling the pyramid of risk. So they say these are low-risk applications, these are high-risk applications, and these are quote-unquote unacceptable risk applications. So unacceptable risk applications are stuff like social credit scoring, where you're using an AI to assess

you know, the social worthiness of people and used in making all kinds of decisions or certain types of biometric surveillance by law enforcement agencies. And those are just called unacceptable and they're straight up banned. Then there's other applications like the use of AI in HR, human resources for careers. And that's deemed like a high risk application and is subject to all these kinds of regulations.
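That "pyramid of risk" can be sketched as a simple classification table. The sketch below paraphrases the examples from the conversation (social credit scoring and certain law-enforcement biometric surveillance as unacceptable, HR and hiring tools as high risk); the remaining tiers and use cases are illustrative placeholders, not the Act's legal text.

```python
# Illustrative sketch of a tiered risk classification in the spirit of the
# EU AI Act's "pyramid of risk". Tier assignments paraphrase examples from the
# conversation plus invented placeholders; this is not the Act's legal text.

RISK_TIERS = {
    "unacceptable": {"social credit scoring", "law-enforcement biometric surveillance"},
    "high":         {"hiring and HR screening", "credit decisions"},
    "limited":      {"customer-service chatbots"},
    "minimal":      {"spam filtering", "video game opponents"},
}

OBLIGATIONS = {
    "unacceptable": "banned outright",
    "high":         "conformity assessments, documentation, human oversight",
    "limited":      "transparency notices to users",
    "minimal":      "no new obligations",
}

def classify(use_case: str) -> str:
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return f"{use_case}: {tier} risk -> {OBLIGATIONS[tier]}"
    return f"{use_case}: not listed -> falls back to existing sector-specific rules"

print(classify("hiring and HR screening"))
print(classify("social credit scoring"))
```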

And that, I think, is sort of where we are in both the United States and in Europe. I think the United States is more interested at this stage in looking at the existing body of legislation and having agencies and commissions apply their existing legislative authorities and adapt it for AI and then sort of get a sense of how well that works.

And the EU is, of course, more aggressive in terms of actually coming up with new agencies. Yeah. So let's dig into that a little bit more. It seems like we're not taking the EU path, which is to pass one big AI act that encompasses all of this. Which, by the way, Senator Schumer, who came to CSIS, my institution (that's where the Wadhwani Center is housed)

last June, June 2023, when he announced his SAFE Innovation Framework. The original goal was comprehensive legislation. So they were looking for a big omnibus bill. And I think they've decided actually what they'd like to do is to deal with this piecemeal and get it through committees. One reason why I think that was an attractive option is it's basically what was plausible in terms of getting something passed in an election year, which is always tough.

This is going to be an especially tough one. And that's why I think, you know, deepfakes and election-based AI deception and advertising, those are really high-priority things to get after. Yeah, that was going to be my question. Do you expect any AI regulation to pass this year in Congress? And if so, what?

Yeah. So before I answer your question about Congress, I just want to point out some of this stuff is already being done by the relevant agencies and commissions. States as well. Yeah. Yeah. So, for example, in the New Hampshire primary, there was a deepfake. It was pretending to be President Biden and it had his voice. It was using AI to generate his voice. Yeah.

And there was a series of robocalls where this fake robo-Biden was telling people, you know, the wrong date for the election and things like that. And, you know, robocalls have existed for a long time, but it's just, you know, it was the first time it was in President Biden's voice. Well, right after that, the Federal Communications Commission, you know, banned

the use of AI in deceptive election advertising or robocalls, I think, more generally. So there's some stuff that's being done by the agencies. Now, in Congress, as I said, we've got three bills from the Senate Committee on Rules introduced by Senator Klobuchar. And then also in the states,

More than a dozen states have now introduced various types of bills, some of them even already passed, that regulate the use of AI in deceptive election advertising. And do the election regulation AI bills have bipartisan support? Definitely. I mean, so Senator Schumer's Gang of Four is bipartisan, has been bipartisan

since the outset. I testified in one of the Insight forums that Senator Schumer and the Gang of Four held on AI. And one thing that I think was just so remarkable: you know, I've testified before Congress before, and one thing that I saw here that I'd never seen before is that it's not just that there were, you know, senators up on the dais, right?

who were, you know, making remarks or asking questions. There were senators in the audience who were just there to learn and to listen. So, you know, you mentioned at the beginning of our conversation, you know, some of the times where members of Congress have embarrassed themselves over their ignorance of the basic facets of, say, social media technology.

And I think this is an area, AI is an area, where senators and Congress have revealed an incredible appetite to learn. Today's podcast is brought to you by GiveWell. You're a details person. You want to understand how things really work. So when you're giving to charity, you should look at GiveWell, an independent resource for rigorous, transparent research about great giving opportunities whose website will leave even the most detail-oriented reader stunned.

GiveWell has now spent over 17 years researching charitable organizations and only directs funding to a few of the highest-impact opportunities they've found. Over 100,000 donors have used GiveWell to donate more than $2 billion.

Rigorous evidence suggests that these donations will save over 200,000 lives and improve the lives of millions more. GiveWell wants as many donors as possible to make informed decisions about high-impact giving. You can find all their research and recommendations on their site for free. And you can make tax-deductible donations to their recommended funds or charities. And GiveWell doesn't take a cut.

Today's podcast is brought to you by Shopify. Ready to make the smartest choice for your business? Say hello to Shopify, the global commerce platform that makes selling a breeze.

Whether you're starting your online shop, opening your first physical store, or hitting a million orders, Shopify is your growth partner. Sell everywhere with Shopify's all-in-one e-commerce platform and in-person POS system. Turn browsers into buyers with Shopify's best converting checkout, 36% better than other platforms. Effortlessly sell more with Shopify Magic, your AI-powered all-star.

Did you know Shopify powers 10% of all e-commerce in the U.S. and supports global brands like Allbirds, Rothy's, and Brooklinen? Join millions of successful entrepreneurs across 175 countries, backed by Shopify's extensive support and help resources.

Because businesses that grow, grow with Shopify. Start your success story today. Sign up for a $1 per month trial period at shopify.com slash 538. That's the numbers, not the letters. Shopify.com slash 538.

You know, we love polling here. And so according to the Artificial Intelligence Policy Institute, 78% of Americans believe that there should be government regulations on the use of public data to train AI models. And there really isn't much of a partisan difference there. You know, it's 83% of Democrats and 78% of Republicans. Independents are actually the least concerned out of everyone, but that's 72%. So really not much difference there.

When we talk about issues that Americans are concerned about in the upcoming election, we don't really talk about AI. But is this an area where lawmakers feel pressure from their constituents to act? Or is this just lawmakers themselves are concerned? Which direction is this sort of pressure coming from? Sure. I'll just give you a few things that I think are interesting and signaling to me about this topic. Number one is...

AI has completely captured the public imagination like no technology in a very, very long time. You know that people care about AI because every dang media organization anywhere is running stories about AI. And a lot of those media organizations are making choices based on what people click on. So people are clicking on, you know, these AI stories and people care.

Don't read me like that, Gregory. Of course, FiveThirtyEight would never make decisions on anything other than quality and importance to the public. But other news sites, they're doing this because people care. And when senators go back to their home districts or states,

they are getting questions about AI. It's something that people care about, something that people are concerned about. Another thing I think that's kind of interesting in this is President Biden, who's constantly meeting with foreign heads of state, right? He's meeting with the leaders of other countries.

And what I've heard from the White House is that AI comes up in basically all of those conversations. Like, it's been a long time since Biden has had a meeting with a foreign head of state where AI has not featured to some greater or lesser extent in the conversation. Now, what pressure does that level of interest and that level of concern translate to? I think they do feel the need to do something. Right?

And the reality is they have done something. They've created this new federal organization around AI safety. The executive order has directed all these different federal agencies to comb through their legal authorities to look at which of those authorities they can deploy to regulating AI appropriately. If they haven't already written guidance as to how the existing regulations apply in the new case of AI, in many cases, they're directed to go write that guidance. But

in terms of, you know, Congress. Yeah, I was going to say, well, just to pause right there, because, of course, even that action in and of itself is viewed through a partisan lens. There are plenty of

Americans, chiefly Republicans, who don't like the regulatory authority of these agencies. For one, some folks with a more libertarian bent think that it's, you know, extraconstitutional. And beyond that, regardless of what you think about it from a philosophical political perspective, it's stuff that can easily be undone with the next administration, unlike passing legislation through Congress. Right.

Yeah, and we don't know in a hypothetical second Trump administration, you know, how they would pursue an AI policy agenda, you know, whether regulatory or otherwise. At least in terms of the first Trump administration, there actually was a statement on this topic made by the White House Office of Science and Technology Policy. And their theory at the time was that AI regulation should be pursued with a, quote, light touch.

And there, the Trump administration was basically saying that we should follow the story that we did in the internet, which is let's wait and see. Let's let the internet grow and be interesting and not try and stifle this young and exciting technology ecosystem with a bunch of regulatory requirements.

And then as it grows in both breadth and capability, then we can sort of assess what and whether regulations might be approached. There's definitely a faction of the Republican Party that believes that that is the right course with AI as well. There's also a faction of the Democratic Party that feels the same way.

I would say that there are sort of exceptions, though. Some of them do reflect entrenched interests. So, for example, artificial intelligence and copyright is a really hot-button issue right now. The New York Times and a number of other media organizations have sued OpenAI in court, alleging that it is trained off their data, it is regurgitating their articles, and doing so without compensating the New York Times in any way. Okay.

So that is a source of legal dispute. It's also a source of political dispute. The main lobbyist organization behind Hollywood and the movie industry is trying to go after the AI companies to ensure that their intellectual property is protected when it's being used to train these models. So my point here basically is that the issues where I can see –

something happening this year, as opposed to something happening after the election, are really kind of these niche issues around deepfakes and election deception. Maybe something on intellectual property, but I definitely do not see some kind of groundswell of public support for...

cross-cutting legislation such as the EU AI Act. Yeah, I want to talk a little bit more about what could happen next from a policy perspective. But to that point about the election, according to an Elon University poll, 54% of Americans described their feelings towards AI with the word cautious, and 70% of Americans believe that AI could significantly impact elections through the generation of fake information, videos,

and audio. I think there has been a lot of attention paid to the potential impacts of AI on this election. In fact, a little less than a year ago, we did an episode on this podcast that was titled something like the first AI election. So far, I think the general sense has been that

the anticipated, or maybe feared, impact of AI on the election has not been borne out. Obviously, as you cited, there was the case during the New Hampshire primary, but this election has not thus far looked very different as a result of AI.

Would you agree with that? Do you think that that would maybe be coming to conclusions too soon? What is your take on that? Yeah, this is very easy for me to answer. It's definitely coming to conclusions way too soon. So let me give you a few data points that strike me as really interesting. So folks might remember in the 2016 election, the Russian intelligence services were involved in creating a lot of disinformation-based

content. And that was coming out of the, I think it was called the Internet Research Agency, if memory serves. It was definitely the IRA out of Russia. And that had hundreds of people working in an office in Russia. And every day they're waking up, they're clocking in and they're cranking out deceptive information content. But there's a problem, which is most of them don't speak great English. And

And so a lot of the stuff that they're writing has like the common hallmarks when English is your second language and Russian is your native language. Well, just recently, OpenAI announced that they have detected both Russian and Chinese intelligence services using their platform to generate disinformation in advance of the election with a politically motivated intent.

And I think what's really interesting there is that OpenAI's ChatGPT does not make grammatical mistakes. And OpenAI's ChatGPT does not require you to hire hundreds and hundreds of people.

And what we've seen, you know, in the text domain, which was already achievable before, you know, now that same sort of synthetic media, automatically generated stuff, highly customized stuff, you know, it can be more audience-targeted, audience-calibrated. We can now bring that to audio, video, and images at massive scale and capacity. And I think there's sort of two scenarios to think about here.

Number one is just massive scale. What percentage of 4chan today is disinformation that to some greater or lesser extent has its origins in potentially foreign content created by AI? I don't know. I don't think a reliable survey has been done or really could be done on that topic at the present time. That's a sort of scale-based attack. The other attack that I would really be concerned about is just

an incredibly precise, perfectly timed attack. Like the October surprise P-tape or N-word tape, or like Biden falling down or appearing to have a stroke or whatever. In the 2016 or 2020 election, people would have had a stronger sense of whether or not it was real. But today, whether it's real or not, people will just not know. Yes, exactly. You know, I mean, the right information, the right media, right?

at the right time, can really be the hinge moment in really important moments in history. And my question then becomes, right, could something actually make an impact on the US election? As a starting hypothesis, I would say, yes, it definitely could. And we should be taking steps now to make that chance go down. So essentially, one, even if we don't have the blockbuster use of AI that people might be afraid of, such as a deepfake

in October, there could be effects of AI that are a lot less sexy, which is just the kind of information that's being spread on the Internet amongst people using social media or whatnot. But also, given the nature of our election cycle and October surprises and whatnot, it could be far too soon to come to any conclusions about the impact of AI on this election.

Yeah. I mean, just because something bad hasn't happened yet doesn't mean something couldn't happen, right? If the year before the Three Mile Island nuclear disaster, you said there's never been a nuclear safety disaster, that means we'll always be safe, right? You'd be an idiot. And I think the same thing strikes me as the truth about election interference with AI. I don't know.

It would be wrong of me to say that I know 100% that AI election interference will be a big problem and a big phenomenon this year. But I do feel like I know that it could be a big phenomenon and a big problem. And so I think that's enough to justify our taking steps to mitigate that risk. As you mentioned, it seems like the most immediate focus in Congress would be AI uses related to the upcoming election. But beyond that, is it clear that there is the...

political will to regulate AI in different ways when it comes to copyright, as you've mentioned, or sort of what's mentioned in this roadmap, for example, is a privacy bill that will, of course, affect AI. Is there bipartisan support for those things? You know, what comes next after we've maybe addressed the upcoming election?

I think privacy is going to be really tough to pass at the federal level. At the state level, I think this is happening. It's already happened in some states. I also think at the international level, ChatGPT was briefly banned in Italy for noncompliance with GDPR, the existing big European privacy regulations. I mention that because all of these companies operate both in the United States and Europe.

And usually when they're forced to comply with European regulation, they just do that worldwide because it's simpler than trying to calibrate, you know, what they do based on different jurisdictions.

So that's, I think, the story on privacy. I think that's a really tough one. Intellectual property, I think it's a really rough political debate. There's very entrenched special interests on both sides, but one of them might win. I mean, that could be, I think we'll probably have that fight in 2025 would be my guess. Then you get to the more, I guess here you kind of have to separate the two types of AI systems that you might want to regulate.

Historically, when we've been talking about AI, right, when the EU AI Act was first drafted, they did not have ChatGPT on their minds. The first draft of the EU AI Act predates ChatGPT. And the reason why I mention that is before large language models,

Most AI systems were application specific, right? If you have an AI system that is a computer vision image recognition system, if you give it a bunch of pictures of cats, it's going to be good at recognizing cats. It's not going to be good at recognizing military aircraft or tanks or something like that. So historically, AI systems are very application specific.

What's interesting about, you know, ChatGPT and the other large language models is they're not application specific. It will give you medical advice. It will give you legal advice. It will give you entrepreneurship advice. It will give you life coaching or therapist, you know, type advice. And so you have these individual systems that...

are so diverse in the number of applications they can handle that you might want to regulate them as an entity. And so in the case of the EU AI Act, for example, they separate the sort of sector-specific regulations, which is the low-risk, high-risk, unacceptable-risk, you know, risk pyramid. And that's, like, based on what the AI system is doing.

But then they also have this set of regulations around what they call general purpose AI systems that pose a systemic risk. And that's just regulating the technology because of its capabilities. Here's what's interesting, I think, there and in the United States and elsewhere. The regulations, at least in the legislation, mostly say thou shalt follow the standards.

And by the way, standards coming soon, we promise. That's what's so interesting is they've actually mandated the development of standards and then they've mandated the following of those standards. So right now there is no existing standard for what constitutes the responsible development and the responsible operation of a super general purpose, super capable AI system like ChatGPT or Claude or Gemini, right?

But those are coming. And I think that's what also the US kind of has to wrestle with is, do we only want to continue this existing paradigm of application specific regulation? Or do we also want to regulate based on the technology overall? So far, all we've done in the latter case is mandate some transparency and reporting requirements.

You know, there's a debate here that I think we've touched on, which is, do you want to wait for the technology to develop because it can sort of bring a lot of prosperity and efficiency alongside it before you regulate it? Or do you want to preemptively regulate it because you are worried about the deleterious effects it may have?

Is it clear which side of the equation the United States is on for this? No, I think in terms of mitigating the deleterious effects...

There are some spots that you can point to. So, for example, there's an entire community here in Washington, D.C. of policy wonks, myself included, who are really interested and concerned about the intersection of AI and biotechnology. Right. Could AI lower the barriers to entry for designing new pathogens that are even more deadly and dangerous than COVID or something like that?

So those kind of niche areas, I think the U.S. and specifically the Biden administration kind of want to get out ahead of those types of risks and regulate. But on the much broader concern around fairness and bias, I think there's more of a wait-and-see, and, like, let's see how the existing regulations

do in handling this challenge. Let's see what happens in Europe and whether or not they are patting themselves on the back or feeling like an idiot for what they've done. And then let's make a decision on how we want to proceed.

So I do think it kind of depends upon those different issues. And then one other thing, right, is this transparency requirement, safety reporting requirements of the large language models, which might give the government a better sense of why it does not want to regulate or it might give the government a better sense of why it does want to regulate.

Oftentimes, we see Congress move swiftly when there's a crisis. Yes. The only time we see Congress move swiftly is when there's a crisis. And so what is the likelihood that we end up in a situation where things have gone quite badly and Congress ends up reacting to that? There's another sort of situation where Congress slowly comes around to a new consensus, which is that, you know, for much of my life, being a child of the 90s, like,

free trade and globalization was the name of the game. And both parties have slowly come around to the understanding that that actually hollowed out a lot of America's industrial base, exacerbated the alienation of the working class that has had all kinds of political impacts. And so while maybe sort of Trump was the impetus for switching positions for Republicans, Biden has furthered

a decent number of those policy positions vis-a-vis tariffs on China, making things in America. And so export controls on China, specifically targeted at AI. Exactly. And so there's two kinds of ways that things change in Washington, which is, one, there's a crisis and Congress reacts. Or two, over a period of a decade-plus, everyone starts to realize we kind of f***ed up.

Or even if there were positive impacts in the long term, some of the negative impacts outweigh those or we've gone too far or whatnot.

Looking into the future now, what do you envision our relationship to AI being like? Yeah, so I think on the crisis-driven side, I think that's the most plausible path to regulation, right? Something really egregious, some really bright, shiny disaster or crisis kind of happens. And that spurs momentum for a new round of regulation. And I think that's why a lot of this work that's being done right now, even if they lose...

I still think it's worth paying attention to because a lot of times what happens when there is a crisis is you go look, hey, who had proposals on this topic that are somewhat near mature? That I think is a super plausible scenario now or in the future.

You mentioned China, and I think so far, at least in the US political debate, China has mostly been a reason to not regulate. China has mostly been a reason to invest when we think about the sort of competitive pressure that China brings, because hopefully most folks understand this, but in the global AI landscape,

The United States, Europe, and China are all really good at AI research. The U.S. is the best, but it's not like a massive lead. It's a lead. Where the U.S. and China just dwarf Europe is on AI commercialization and industrialization, actually building companies who actually make money by serving actual customers,

and having real, robust venture capital ecosystems, the United States and China are unambiguously the global leaders there. And there's plenty of very smart, very talented people in AI. And so that's an argument that comes up in Congress time and again: right, if we hamstring our companies with regulation, are we giving China a leg up in, you know, what might be the race for a technology even more important than the internet turned out to be? Those are kind of the stakes. All right, well...

Pretty big stakes, though. Thank you so much for taking the time to chat with me today. Hey, my pleasure. You know, long-time listener, first-time caller, I should say. Appreciate it. My name is Galen Druk. Our producers are Shane McKeon and Cameron Chertavian, and our intern is Jayla Everett. You can get in touch by emailing us at podcasts at 538.com. You can also, of course, tweet at us with any questions or comments. If you're a fan of the show, leave us a rating or review in the Apple Podcast Store or tell someone about us. Thanks for listening, and we will see you soon. ♪