
U.S. Public Opinion about AI with Professor Paul Brewer and co-authors

2020/9/2

Last Week in AI

People

Ashley Paintsil
James Biggerman
Paul Brewer
Topics
Paul Brewer: This study finds that U.S. public opinion about artificial intelligence is markedly two-sided. On one hand, most people are hopeful that AI will improve everyday life; on the other, they worry that AI could threaten human existence. The public is also ambivalent about regulating AI: people want regulation, but they don't trust the government to carry it out. This ambivalence stems from the public's limited familiarity with AI and from the influence of media messages on public perceptions. James Biggerman: By analyzing the open-ended questions, we found that the public's first impressions of AI come mainly from media portrayals of robots and technology. These stereotypes shape perceptions of AI's risks, with many people associating AI with science fiction scenarios such as robots taking over the world. Ashley Paintsil: This study shows that media messages have a significant influence on attitudes toward AI. People who follow technology news, watch science fiction, or use AI personal assistants hold more positive views of AI, and education is also associated with more favorable attitudes. Both the framing narratives in the media (for example, a social progress frame or a Pandora's box frame) and the images (for example, Siri/Alexa imagery or robots from science fiction movies) shape public perceptions.


Chapters
The survey was motivated by popular culture images of AI and recent news stories about its potential dangers, as well as the increasing presence of AI in everyday life and societal issues like policing and racism.

Transcript


Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear from AI researchers about what's actually going on with AI and what is just clickbait headlines. We release weekly AI news coverage and also occasional interviews, such as today's. I am Andrey Kurenkov, a third-year PhD student at the Stanford Vision and Learning Lab and the host of this episode.

In this interview episode, we'll get to hear from several of the authors of a recent survey paper, Media Messages and U.S. Public Opinion About Artificial Intelligence, which was released by the Department of Communication at the University of Delaware. First, let me introduce Professor Paul Brewer, who is the co-principal investigator on the paper.

His research interests include political communication, public opinion, and science communication. He is particularly interested in how messages in traditional news, entertainment media, and social media shape public opinion about policy issues and public perceptions of science, which in this case is presumably why he worked on artificial intelligence. He is the author of the book Value War: Public Opinion and the Politics of Gay Rights.

Then, of course, I need to introduce his co-authors on the paper, communication PhD students James Biggerman and Ashley Paintsil. James Biggerman received his MA in mass communication and sports media from Texas Tech University and his BA in communication studies from San Jose State University. His research interests include sports media and communication, media effects, and strategic communication.

And on to the last participant in this interview today: we have Ashley Paintsil, who, in addition to being a PhD student in communication, is also an adjunct fashion journalism professor at the University of Delaware. Her research interests include organizational communication and media psychology. Thank you so much, Paul, James, and Ashley, for making the time to be on this episode.

Thank you for having us on here. We're excited to talk about our research. Yes, thank you so much. All right. Great. So once again, the paper we are going to be focusing on is Media Messages and U.S. Public Opinion About Artificial Intelligence, which, of course, as the host of this podcast and also a grad student in AI, I find quite interesting: finding out what the public opinion is, what people outside of research think.

Now, the paper has quite a large set of survey results regarding such questions as general opinions about AI, the role of AI in society, hopes and fears about AI, who should develop and use AI, and more. So before we dive into these results and start unpacking them, I'd actually be curious to hear from

each or some of you what your thoughts were on AI before doing this project, and, I suppose, why you think now, or this year, was a good time to conduct such a survey, and what the motivation for it was?

Yes, this is Paul. I'll tackle that first. So I'm a little bit older than my colleagues, Ashley and James, so my memories of AI go back to movies from the late 60s and 70s. I'm not that old.

I wasn't around in the late 60s, but when I was growing up, I saw movies like 2001: A Space Odyssey, which has one of the most famous AIs in popular culture, HAL, who doesn't work out so well in the movie. He basically has a mental breakdown and murders the crew, except for one.

And then on through the 80s. The name of your podcast is very on brand for our study, because Skynet from the Terminator movies is one of my earlier memories of artificial intelligence. I just watched The Matrix with my 11-year-old. It was the first time he'd seen it.

We watched it this past weekend. It was not the first time I've seen it, of course. The Machines from the Matrix trilogy are another one of the most famous examples of AI in popular culture. And Minority Report, the Tom Cruise movie about using artificial intelligence to predict what criminals of the future are going to do.

So that's part of what motivated our project was all these popular culture images of AI and our interest in how they might influence public opinion. But I was also intrigued by news stories from recent years about artificial intelligence.

Because my kind of hazy impression before I studied this was there are a lot of stories hyping the potential dangers of artificial intelligence. And oftentimes these stories seem to include eye-popping quotes from people like Stephen Hawking or Bill Gates or Elon Musk talking about how artificial intelligence is an existential threat to humanity.

And a few years back, a graduate student at the University of Delaware, Lucy Opazintsev, who's also an investigator on this project, did a master's thesis where she looked at news coverage of artificial intelligence and kind of dug beneath these headlines and found that beneath the hype, the real picture of AI was a lot more complicated. And even if you look at some of those quotes from Hawking and Gates and Musk, they got a lot of attention.

What they were actually saying about artificial intelligence was sometimes considerably more complex than the news coverage portrayed it as being. Even a few years back, there was an episode of Last Week Tonight with John Oliver where he talked to Hawking. And, of course, they got into the inevitable, will artificial intelligence take over the world and eliminate humanity someday?

So that's been a topic that's been kicking around in my mind for a while. Why study it now? Well, partly artificial intelligence is increasingly widespread in society, including in people's everyday lives with technology like Amazon's Alexa and Siri.

And also, if you look at the headlines, there are some big-picture issues going on in American society today that touch on topics concerning AI. For example, there's a lot of concern about racism and policing right now. And so...

In 2020, the companies that have been developing applications of artificial intelligence, such as facial recognition software, have sometimes responded to this controversy by saying, well, maybe we're going to take a step back from selling some of this technology to law enforcement agencies across the country. So these are some of the things that prompted my interest in the topic and the timing of the study as well.

I see. Yeah. And I think that explanation of the motivation, that there are a lot of media portrayals, and the portrayals are not necessarily close to what AI is in actuality, is actually a motivation for this podcast, and kind of for today more broadly: to try to go beyond some of the headlines and to, you know,

give a more informed take from a researcher perspective on what's actually worth paying attention to, what is actually interesting and what isn't. So it makes a lot of sense to me, for sure. Ashley or James, do you have any thoughts on sort of what your perceptions of AI were before the project, or what motivated you to take part?

Yeah. So my dad's a scientist. So growing up in my household, there was always sort of this push for science and technology. And he always had the newest and greatest software programs for his work. So I think for me, I just had this intrigue about science and technology. And that kind of transformed into...

reading Wired and TechCrunch and trying to keep up with the latest gadgets. And it's hard not to see the influence of AI in today's technology. So as soon as Dr. Brewer let me know about this project, I'm like, I have to get on this. This is exciting stuff. And I kept track of some of the developments in AI. So that was what really did it for me.

Yeah, and for me, before I joined the PhD program, I was actually a reporter at a fashion business publication called Fashion Vest. And we covered technology and investment news in the fashion industry. So I would have to read publications like Wired and TechCrunch every day to see what the investment news was. And I was seeing a lot of investments in companies that were producing chatbots

for, you know, shopping websites like Macy's or Forever 21. And that's kind of, you know, my background in that. So when I heard about this, I was super intrigued and it was super fun, you know, to be a part of this project. Cool. Yeah, makes a lot of sense. So again, before diving into results, I want to quickly ask you, can you describe...

Sort of how the survey came together, how did you decide upon the questions that would be in it? Who were the survey participants? And pretty much, yeah, what was the process, what was involved in getting this together?

One of the things we did before we designed our own study, and James and Ashley did a lot of the work and thinking on this, was looking at what other survey organizations have asked about AI. So there have been a few, not a lot, but a few high-profile, well-done national surveys or international surveys on artificial intelligence by organizations such as Ipsos and Gallup.

And generally the picture that they've presented is that the public has kind of some mixed feelings about artificial intelligence concerns, but also seeing some of the potential benefits of it.

By and large, what these studies haven't done, however, is look at how media and technology use might help explain why members of the public hold the attitudes they do. And that's really what we were trying to add here: asking questions about people's media use and media habits and technology use and technology exposure, and seeing if those are linked to what people think about artificial intelligence.

So once we came up with the questions that we were interested in, we worked with one of the most prestigious survey organizations in the United States, which is the National Opinion Research Center affiliated with the University of Chicago. And they have an online panel, the AmeriSpeak panel, which is a nationally representative sample of Americans who participate in online surveys.

So almost 2,000 members of this panel took part in our study, and the results are weighted so that they're representative of the U.S. population. In other words, the results that we're talking about, they're only from 2,000 Americans, but they're weighted, and the survey is done through random sampling so that they should reflect what the population as a whole believes about this.
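The weighting idea Paul describes can be illustrated with a toy post-stratification sketch. All numbers below are invented for illustration (they are not from the study, and NORC's actual weighting scheme uses many more benchmarks): each respondent gets a weight equal to their group's known population share divided by its share in the sample, so that weighted estimates reflect the population rather than the raw sample.

```python
# Toy illustration of survey weighting: each respondent gets a weight so
# that the weighted sample matches known population shares (here, a single
# education benchmark). All numbers are invented; not from the actual study.
from collections import Counter

population_share = {"no_degree": 0.60, "degree": 0.40}  # known benchmarks
sample = [
    # (education group, favors AI development?)
    ("degree", True), ("degree", True), ("degree", False),
    ("no_degree", True), ("no_degree", False),
]

# Shares of each group in the raw sample
counts = Counter(group for group, _ in sample)
sample_share = {g: c / len(sample) for g, c in counts.items()}

# Post-stratification weight = population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in counts}

# Weighted estimate of the proportion favoring AI development
total_w = sum(weights[g] for g, _ in sample)
favor_w = sum(weights[g] for g, fav in sample if fav)
print(round(favor_w / total_w, 3))  # 0.567
```

Note how the weighted estimate (0.567) differs from the raw sample proportion (3/5 = 0.6), because the sample over-represents degree holders relative to this hypothetical population.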

I see. Very interesting. And yeah, Ashley or James, do you want to maybe chime in on your part in the report? And yeah, I suppose what your work involved and any other interesting bits on what was necessary to get this finished?

Yeah. So a big part of this survey is that we also embedded an experiment in it. And so we had to make sure that it was theoretically grounded in some of the past literature on emerging technologies. And AI hasn't been studied as much as some other technologies,

at least when it comes to media studies and communication. So we looked at what past work has done in terms of emerging technologies, nanotechnology, and other things like that, or just science communication in general, pulling from that literature, kind of seeing what's been done and how they went about it, and grounding the experiment within that area. I have nothing to add to that. Yeah.

Great. Yeah, interesting. Very interesting. Thank you for that overview of how this came together. So let's maybe go ahead and dive into the actual contents of the survey. As I said at the beginning, it has a lot of results. You know, if you look at the actual PDF, there are dozens of pages with various interesting bits.

So to maybe start looking through those, I'm going to ask what were some highlights for you? What were some of the more interesting results or more insightful results or generally some things that stand out and seem worth sharing to you from the survey results?

Maybe it's best to start with a big-picture view of what the American public thinks about AI. And so here, the big picture that we came away with was that the public sees both

the positive and the negative potentials of AI. So, you know, we had a bunch of questions asking people what they were worried about and what they were hopeful about with AI. And so, for example, a majority of Americans are hopeful that AI will make people's everyday lives better, but a majority are also worried that AI could pose a threat to the existence of the human race.

Most Americans want researchers to develop AI, but an even bigger percentage of Americans want the development of AI to be regulated. And there are some interesting contradictions here. And if you study public opinion long enough, you'll get very used to the idea that...

members of the public can hold seemingly contradictory ideas, and that doesn't really bother them too much. For example, most Americans want to regulate AI, but most Americans don't trust the federal government to manage the development of AI. And you might think, well, some people are holding some potentially contradictory views there. But again, that's pretty common in public opinion. And so one thing to keep in mind here is that people's attitudes about AI

and tough topics like emerging technologies can often be pretty nuanced and complicated. But we also found that most Americans are fairly unfamiliar with artificial intelligence. And so, for many people, this is something that they're probably still figuring out what they think about. They're not sure about it. And so it's not surprising that they hold some pretty mixed views on the topic.

Yeah, makes sense. Thanks a lot for that overview of results. Actually, that's a great place to start. James or Ashley, do you have any other bits that stood out to you that you might want to share? Yeah. So I think for me, the most insightful results were when we looked at the open-ended responses.

And we saw that, you know, a majority of people responded to the question of the first thing that comes to mind when they think of AI with robots, computers, and technology. And so obviously, I think that comes from the fact that that's how the media portrays it. But we obviously know AI is so much more than that.

Yeah. And just jumping off that, I think what's interesting is you see this sort of trend across all the open-ended questions, or at least a lot of the open-ended questions, in that, you know, whether they're talking about risk or science fiction, it kind of comes back to the idea of robots and these figures that you see from the media. So I know that many of those that, you know, brought up movies or television or some sort of science fiction mentioned the Terminator or Skynet.

And then you see with risks, the most common risk was robots taking over the world. And so that's really kind of rooted in those science fiction depictions of AI, which I found quite interesting. So the open-ended responses gave us a kind of good look at, you know, what is top of mind for people? And regardless of what category it fell in, it kind of seemed to fall back on some sort of robot or robotics approach.

Yeah, that's a great point to make. I think it's maybe not surprising, but it's good, of course, to know that this is actually the case: that people still maybe base their view of AI less on what AI is in the real world and more on how it has been portrayed in the media.

On that note, maybe we can talk a bit about, I believe you had in the survey some results highlighting people's opinions or perceptions of AI based on what media they consumed, what TV shows they watch, or if they read sci-fi, and so on. So, yeah, curious to hear your thoughts on those results and what came out there.

Yeah, kind of the big story there is that people who follow technology more and know more about technology or people who are interested in science and technology, whether it's fictional or news, they tend to hold more favorable attitudes about things like developing AI and funding public research on artificial intelligence.

So, for example, people who follow news about technology are more likely to favor those things than people who don't. People who watch science fiction shows and movies are more favorable toward AI than people who don't. And people who have personal assistants that rely on AI, like Siri and Alexa, are also more favorable toward AI. And...

There's also an education gap. The more educated people are, generally the more favorable they are toward artificial intelligence. And one of the things that this suggests to me is that there's a potential for education and information and media messages to shape public support for or opposition to artificial intelligence. So what people think about AI 10 or 20 years down the road is going to depend a lot on the information that they get about the technology.

Great. Yeah. I wonder, James, do you have any ideas or thoughts on this notion of how people perceive or get ideas about AI? And maybe Ashley, as someone who used to do journalism, do you have any thoughts on how

present-day journalism is shaping things and maybe changing it right now? Yeah. I mean, obviously in the beginning, we talked about the fact that there were a lot of, you know, mixed feelings and fear, because there is a lot of reporting on that. But then at the same time, we're starting to see AI be perceived a little bit more positively, because, you know, it's being used to help people understand

cancer in people's bodies. But then there are also negative things, of course, with, you know, surveillance and people's mixed feelings on facial recognition and identification. So I think the public really does have, you know, a mixed view of it based off of, you know, what news is being reported. Yep. They nailed it. Great. Yeah. So,

On that note, one thing that was mentioned is that actually the majority, I think something like 67% of respondents, also said that they really don't know much about AI. So they have these opinions, but at the same time, they maybe don't feel confident that

they know or understand what's going on there. So it's interesting to me, the interaction between that result and the idea that people who interact with Siri or other AI technologies or follow tech news also have different views of AI. Do you think that as more people interact with things like Siri or see news articles about present-day applications of AI,

people will sort of naturally feel like they understand AI better and know what it is and maybe stop thinking of it as Skynet or robots, but start thinking about it more like Siri or Google Translate? Or is there too much science fiction media for this notion of AI to go away? And maybe it'll take more kind of education, more active efforts to change that gap.

I certainly think that's a plausible interpretation of our results. So we did find that link between Siri and Alexa use and favorable views of AI. And we did find, in those open-ended responses that Ashley and James were talking about,

robots was the most popular theme, but a substantial percentage did mention real-world applications like Siri and Alexa. And so those technology uses are starting to at least become part of the popular image. Now, there are two things that could make a difference here. One is,

to the extent that more and more people use these technologies, they may become more favorable toward AI. Of course, for them to make that connection, they have to perceive that these technologies are AI, and connect AI to those sorts of technologies and not, you know, scary robots.

Another thing that we found, and James mentioned the experiment earlier, we tried to capture some of these message effects. One of the key concepts in science communication is framing. So a frame is simply one way of interpreting a story or an event or a topic.

So one common frame in news coverage of technology, and not just for AI, but other forms of technology like biotechnology and nanotechnology, is a social progress frame. This new technology is a tool that's going to do all of these great things for society.

Another frame with very different implications for how to interpret the technology is a Pandora's box. That this is something that we're going to do and it's going to get out of control and it's going to cause all of these problems. It's a runaway train. It's a Frankenstein's monster. And so one of the things we did in our study was we gave different respondents different frames for artificial intelligence.

And whether the respondents got a social progress frame or a Pandora's box frame did influence how supportive they were of AI. The people who got the "this is a helpful tool" message were more favorable toward AI than people who got the "oh, this is going to cause lots of problems" message. We also tried to capture how images made a difference. So some people got an image of the Siri logo and an Amazon Echo Dot image.

And other people got pictures of, you know, kind of cute helper robots, helping a patient in a hospital or

carrying drinks around at a reception. And then another group got your classic scary-movie AI images: the Skynet logo, a Terminator, and Ultron from the Avengers: Age of Ultron movie in the Marvel Cinematic Universe franchise. And those images didn't make a difference by themselves, but they did when they were paired with frames that resonated with them. So the Pandora's box frame plus the scary-movie AI images,

or the benign, mundane Alexa and Siri images plus the social progress frame. It's the combination of images and frames that influences people's opinions about artificial intelligence. Yeah, very interesting. I think, James, you mentioned you took a look into how to set up this experiment, and it seemed like you were involved with that. So I wonder if you have any more details on it,

maybe interesting ways this experiment ran, or what was involved, or in general any extra bits to share there? Yeah, I think going into it, we kind of had this idea that these media depictions are so prevalent that, you know, you think Terminator and then you think something scary. So we knew that images

might have an effect here. And so it was really interesting to combine them with the text frames, because a lot of framing experiments have looked at just the text frames, but we live in such a multimodal environment now, where these text frames are going to be accompanied by images, whether it's a video or a picture. So it was important for us to see if there is that combination effect with text and images. And so we made sure we found images,

like Dr. Brewer said, some of them benign, where you wouldn't really even think AI, and then these sort of scary, negative representations. And seeing that clear effect, small, but a clear effect with images and text, was a really cool finding. It makes you kind of geek out as a communication scholar, seeing that effect there. Yeah.
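The design James describes, crossing text frames with images, can be sketched as a difference-in-differences on cell means. The cell means below are invented purely to illustrate the arithmetic of a frame-by-image interaction; they are not results from the paper, and the actual study estimated effects with inferential statistics, not raw means.

```python
# Hypothetical cell means for one 2x2 slice of the frame-by-image design:
# average support for AI (say, on a 1-5 scale) by frame and image condition.
# Numbers are invented for illustration; they are not from the actual study.
means = {
    ("social_progress", "assistant"): 3.9,  # benign frame + benign image
    ("social_progress", "scary"):     3.3,
    ("pandoras_box",    "assistant"): 2.9,
    ("pandoras_box",    "scary"):     2.9,
}

# Main effect of frame: average support under social progress vs. Pandora's box
frame_main = ((means[("social_progress", "assistant")] + means[("social_progress", "scary")])
              - (means[("pandoras_box", "assistant")] + means[("pandoras_box", "scary")])) / 2

# Main effect of image: assistant vs. scary-robot images, averaged over frames
image_main = ((means[("social_progress", "assistant")] + means[("pandoras_box", "assistant")])
              - (means[("social_progress", "scary")] + means[("pandoras_box", "scary")])) / 2

# Interaction: does the image effect depend on which frame it is paired with?
interaction = ((means[("social_progress", "assistant")] - means[("social_progress", "scary")])
               - (means[("pandoras_box", "assistant")] - means[("pandoras_box", "scary")]))

print(round(frame_main, 2), round(image_main, 2), round(interaction, 2))
```

In this made-up example the image effect appears only under the social progress frame, so the interaction contrast is nonzero, which is the shape of result the authors report: images mattered when paired with a congruent frame rather than on their own.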

Definitely. It's very interesting to me as well, because I worked last year on a little survey, actually, an informal survey, you could say. I asked fellow AI researchers what their tips were for journalists covering AI, and sort of what things they wish the media would do differently.

One of the most common responses, perhaps unsurprisingly, but related to this, is don't use images of a Terminator in your AI stories. Right. Don't have the headline image be of a Terminator, along with a lot of more nuanced points. So related to that, maybe I can ask Ashley, as someone who worked in journalism: this is a thing that a lot of AI researchers feel. And I know, having seen a bit about how coverage happens, a lot of this has to do with sort of editorial thinking. You know, you want the title to be interesting and clickable, and you want the headline image to be intriguing. So maybe an image of Siri along with AI doesn't seem as intriguing as something more abstract. But yeah, do you think

editors and journalists will sort of move away from these more sensationalized images, from these images that are not really real-world AI? Or is there just too strong a connection there in readers' minds, and it's kind of hard not to go to those images?

I think, you know, as more technology gets developed and as more people associate AI with things like Siri, Alexa, self-driving cars, things that are positive, you're going to start to see that more in the news. But at the end of the day, they have to get people to click on their stories. So I'm not sure people will

get away from that completely, but I think we'll move a little further away from it. Yes. Yeah. Good point. And I think one thing I can say is, there are now more journalists dedicated to the topic, and certainly publications like Wired or the New York Times have been a little more scrupulous about not going for those sensationalized images.

So, you know, one thing to bear in mind in thinking about the results we've been talking about is that this is a snapshot, one point in time. We did this survey in March 2020.

And one of the things we think our findings suggested is that there's a lot of room for opinion here to change, because so many people say that they're unfamiliar with AI and have conflicting views about it.

A sneak preview of where things are going, we're actually going back into the field next month and re-interviewing more than 1,000 of the respondents that we interviewed back in March to see if anybody changed their mind and to see what kinds of people changed their minds, what kind of psychological processes could be going on to explain how attitudes change about artificial intelligence.

Given the controversies about facial recognition, we're also adding a new experiment where we ask people about applications of facial recognition in things like monitoring protests or trying to identify suspected criminals. And not just asking their opinions about that, but doing another one of these image experiments where we show a face with, you know, your kind of stereotypical biometric lines and dots on it. And we're going to

manipulate those images so that sometimes the person's face being shown is African-American, sometimes they're white. Sometimes it's a man. Sometimes it's a woman.

So do those images influence what people think about the uses of facial recognition technology in an AI context? Because that's been a big controversy unfolding over 2020. I see. Very interesting. Yeah, because we do cover kind of weekly AI news as well on the podcast. And definitely facial recognition has been popping up consistently, very frequently, a lot of news going on. So it's a really great moment to look into it.

I know also that some of the results in the survey had to do with using AI for medicine. Are you thinking of asking at all about COVID? I think this was conducted in early March, so I wonder if you think that will affect any changes as well. Yeah, so we're asking that question again.

In addition to asking people in general what they thought about AI, we did ask some questions about specific uses of AI. And back in March, using AI to help diagnose diseases was pretty clearly the most popular application of AI, more so than using it for things like drones. People were pretty down on self-driving cars. That was the least popular application. Given what's happened over the past half year, it'll be interesting to see if that use is even more popular now.

And also interesting, going back to the facial recognition issue, whether people's views have changed. One question we asked is, how worried are you that facial recognition will discriminate based on race or sex? And people weren't as worried about that as some other things to do with AI. Have people become more worried about that given the publicity on the topic? That's one of the things we're curious to go back and see whether public opinion has changed.

Very cool. James or Ashley, do you have any thoughts on what you're curious to see going forward, if you're going to be involved in these follow-up surveys? Yeah, definitely.

This whole project, from the start, from when we began by looking at what was existing out there, and kind of getting a chance to get on this project, I'm excited to see it transform. And we kind of looked at the stories going on at the time, right? So we were putting this report together and being inundated with these articles from the Washington Post and the New York Times talking about surveillance and police surveillance

with facial recognition. And so that idea kind of spurred this new line. So it was in real time: looking at the news, seeing what's going on. And that's one of the exciting things about being a social scientist, kind of looking at,

as Dr. Brewer said, that snapshot in time, right? It's March 2020, but as time goes on, as we're putting the report together, things have changed. And so that's what I'm excited to look at: how much have things changed. And that's just a direct response to us keeping our eye on the news, keeping our eye on what's going on, and then hoping to jump on it and see if we can get some of these responses.

Yeah, I'd have to agree. It'll be interesting to see, you know, how people's opinions change, or how people still hold their view of AI, because, you know,

people's opinions can be pretty set on certain things. But obviously, as more news coverage happens and more events concerning AI happen, there'll be room, I guess, for people to grow or open up their minds about what they believe and think about AI. So it'll be super interesting to see what we come up with.

One other thing I'd like to add is that all of the results that we're describing and talking about now are in a report that's publicly available. So if it's possible for you to place a link to that in the podcast notes, we'd be happy for any listeners to go and take a look at it themselves and see, you know, the questions that we asked. We have some nice colorful charts illustrating some of the results. So that's all out there and available to the public.

Certainly, certainly we'll place that link.

I actually have been looking at the PDF myself, and I can say to our listeners that you really do not need to be a researcher or grad student to read this. It's very approachable and very easy to understand, and quite interesting, I think. There are a lot of details here that, of course, we don't have time to touch on. But if this is an interesting topic to you, how people perceive AI, definitely take a look; it will be in our description, or you can just Google

"Media Messages and U.S. Public Opinion About Artificial Intelligence" and find it. To finish up, I think one last question, touching on some of your previous points. I wonder, having worked on this project, and presumably taken a deeper look at AI and what kinds of questions you can ask and the media stories about it,

Have your outlooks on AI changed at all? Or, yeah, have you started thinking of some new things, or in different ways, about it? I would say, certainly, looking at the coverage hasn't made me an expert on AI. I don't know about James and Ashley, but I wouldn't represent myself as knowing a lot about AI.

I'm more comfortable saying I know about what the public thinks about AI. But just studying it has made me more aware of the ways that journalism and other media work are pretty inevitably going to distort the ways that technology gets presented. I knew that happened in other areas, but it's very interesting to watch it play out in this area I hadn't looked at before. Things like, you know,

the push for sensationalism, for drama, for novelty.

And even looking at the broader context for some of the most high-profile warnings about the dangers of AI, these things from people like Hawking and Gates and Musk, and seeing how those are used to create one frame for AI, when, if you look at what researchers are saying themselves, you get a very different picture of the technology.

So, you know, as I was doing the study and thinking about the uses of AI, I was starting to see how this could be applicable to other industries as well, because AI touches a range of many different industries. And as I've been having conversations, we're seeing it grow more, in particular in the fashion industry. People are using it to predict what people want to wear and what production should look like for a particular garment.

So I was starting to think about how this could be applied to the supply chain and that type of thing. Yeah, I think for me, doing this and becoming more aware of what's out there with regard to AI, I got into the weeds a little bit on my own, reading about the uncanny valley and this notion that robots

that look more humanoid or human-like are appealing, but only up to a certain point, and then it becomes sort of strange. And I got lost in the weeds a little bit on the uncanny valley, and I think that's because, seeing how AI has changed, and seeing these responses and this sort of robot connection, I just kind of read a little bit about that. So that's what piques my interest: sort of, how close are we going to get to that? And, you know,

those applications of the human-like robots or the helpers and things like that. So that for me is what piqued my interest. Got it. Yeah, very interesting to hear. So I think with that, we'll go ahead and close out.

Thanks again to Professor Paul Brewer and his PhD student co-authors James Biggerman and Ashley Paintsil for being on this episode. Again, we have been talking about the survey paper Media Messages and US Public Opinion about Artificial Intelligence, which I would recommend you check out yourself.

Thank you so much for listening to this episode of Skynet Today's Let's Talk AI podcast. You can find articles on topics similar to today's, and subscribe to our weekly newsletter, at skynettoday.com. Subscribe to us wherever you get your podcasts, don't forget to leave us a rating if you like the show, and be sure to tune in to our future episodes. All right.