
The Top Myths About AI

2020/10/25

Last Week in AI

People

Alexa Steinbrück
Daniel Leufer
Topics
Daniel Leufer: The field of AI is full of hype, misunderstandings, and misrepresentations, and these problems get in the way of related work. AImyths.org aims to dispel these misconceptions and help organizations work more effectively; its main target audience is civil society organizations.

Daniel Leufer: Through a survey, we identified the five most common misconceptions about AI: AI has agency; superintelligence is coming soon; the term AI has a clear meaning; AI is totally objective and unbiased; and AI can solve any problem given enough data. The myth that AI has agency obscures the role and labor of humans in AI systems, and the complex human activity behind them needs to be exposed. The myth that superintelligence is coming soon exaggerates the pace of AI development, distorting cautious academic discussions into unrealistic predictions. The myth that the term AI has a clear meaning ignores the ambiguity and multiple meanings of AI terminology; the actual capabilities and limitations of AI systems need to be analyzed concretely.

Daniel Leufer: On AImyths.org, we use a long-text format with interactive elements, such as a "headline rephraser," to clearly explain AI-related concepts and their context.

Alexa Steinbrück: Public understanding of AI is skewed, conflating AI with robots and the singularity from science fiction; the many meanings of AI need to be disentangled. The myth that superintelligence is coming soon is dangerous, because it leads people to overlook the actual risks AI poses. Many of these misconceptions might never have arisen if more precise terminology had been used to describe AI from the start. The website's headline rephraser rewrites headlines that portray AI as having agency into versions that better reflect reality.


Chapters
The motivation behind AImyths.org stems from the frustration with the hype and misrepresentations of AI, which hinder effective work and communication in the field. The project was initiated during Daniel Leufer's Mozilla Fellowship and aimed to create a resource to debunk common misconceptions about AI.

Transcript


Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear from AI researchers about what's actually going on with AI and what is just clickbait headlines. We release weekly AI news coverage and also occasional interviews such as today's. I am Andrey Kurenkov, a third-year PhD student at the Stanford Vision and Learning Lab and the host of this episode.

In this interview episode, we'll get to hear from the people who created AImyths.org, Daniel Leufer and Alexa Steinbrück. AImyths.org was put together as part of Daniel Leufer's Mozilla Fellowship project. From October 2019 to July 2020, Daniel was hosted by the digital rights organization Access Now.

Daniel's background is in philosophy, and he has a PhD from KU Leuven in Belgium. He is also a member of the working group on philosophy of technology at KU Leuven and is currently still working with Access Now. Alexa Steinbrück is a software developer, artist, and design researcher. She has a degree in artificial intelligence from the University of Amsterdam.

Her research interest is the representation and perception of AI in public discourse and in consumer products like voice assistants. She runs a lab for artificial intelligence and robotics at the Burg Giebichenstein University of Art and Design ("Burg" being a German word), where she also researches creative applications of AI technologies. Thank you so much, Daniel and Alexa, for joining us for this episode.

It's great to be here. Great to be here. Yeah, nice to join. Alrighty, so let's go ahead and dive in. We'll be talking about your project, AImyths.org, which, as the title implies, is meant to help debunk some of the popular misconceptions about AI. Before we dive into what these myths are and what the project's results were, maybe each of you can briefly summarize your motivation for working on it, and maybe also the inspiration for going in this direction.

Great. Maybe I can talk a bit about the initial steps, and then once it gets to actually putting the site together, I'll let Alexa give you the details, as she was leading things there. So the original idea for the website came out of me

kind of being addicted to being annoyed, you know, getting quite annoyed at all of the hype and misrepresentations and misconceptions around AI. I was working with Access Now on a project around AI ethics guidelines beforehand, and what we saw was that we kept coming up against the same

misconceptions, the same inaccuracies about AI, all the time. And in a lot of the work that we were doing, time was almost wasted debunking those. Access Now also organizes

RightsCon, which is the main conference of the digital rights world. And at RightsCon, we organized a side session where we got a lot of people who were working on AI policy in a room together, and we were brainstorming ways to coordinate our work a bit better. One common ground that we found was that we were all equally annoyed by these myths,

and they were all getting in the way of our work. So we thought it would be great if someone could put together a sort of shared resource to tackle the most common ways that these misconceptions manifest. And luckily, at that time I was applying for a Mozilla fellowship, and to my surprise and joy, it was accepted as a project. So this is what the project became: to develop that resource.

We had brainstormed a list of misconceptions and myths at that RightsCon session, and then I did some more work investigating which ones were the most common and which ones got in the way of people's work. That involved an initial blog post with a survey,

and got some feedback from people. But I also think it's worth mentioning that the real target audience of the website is civil society organizations, so human rights organizations or activist organizations who

now find themselves having to work on AI, because if you work on discrimination, or almost any topic related to digital rights, or even human rights more broadly, AI is popping up ever more center stage in dealing with those issues. So the aim was to help organizations like that

get over the hump in tackling these misconceptions. I'd also seen, in some reports I had read, where you might have

a rights organization doing a report about AI, but they devoted three pages at the start to talking about superintelligence. So the aim was to give them a quick way to say: look, you don't need to address superintelligence, because, despite the way we phrased the myth on the homepage, it's not coming soon.

And then actually at MozFest, I happened to bump into Alexa, and we got chatting about what kind of form that would take, and maybe she can explain from there. Yeah, we met at MozFest, which is the annual event of Mozilla.

And I was doing a session about demystifying voice assistants, explaining how they work and also the mechanisms of anthropomorphization, and how we get to this point that we call this thing, this gadget, an AI. And I can absolutely share what Daniel said about this obsession with being annoyed by AI myths.

So this project is very dear to my heart, and I have a personal angle to it: it tackles a phenomenon I've observed over the last seven years or so, that when I have conversations, even casual conversations, with

people about AI, people who don't work in the field of AI, I realize that we're talking about totally different things. I studied AI, and from my point of view, AI is machine learning and computer vision and natural language processing and all these applied fields. And what they often mean is robots and the singularity and machines enslaving mankind and such. So I always feel like we're not really having a conversation, because we're talking about two different things.

And I think this doesn't only happen in private conversations, but also in the public debate, in media, and at conferences. And there is this great quote I often use by Maya Indira Ganesh, who says that AI exists in an awkward and unique space as technology, metaphor, and socio-technical imaginary.

And I think this is completely what it is. So there are different levels of meaning and they are all kind of entangled. And I think it's time to disentangle them and separate the myths from the cultural narratives from the facts.

I see. Yeah, that makes a lot of sense. And as Daniel says, I'm sure a lot of AI researchers are also very familiar with the frustration of seeing misrepresentations of AI. And part of the reason for this podcast, of course, is to also highlight some of those myths and help people understand what's going on with AI. So, of course, I think researchers

out there sympathize and agree with your motivation and like the idea of the project. So given that setup, maybe before we dive into the actual myths, we could talk a bit more about the process. For instance, Daniel, how did you go about deciding what the myths are and getting it down to a pretty short list, when in the media there are a lot of different ways in which AI is misrepresented?

Sure, yeah. So I mentioned before that there was a survey, and there was actually a very clear top five out of that. Number one, by a long shot, was that AI has agency,

which I think you guys at Skynet Today would sympathize with as one of the most annoying and prevalent misconceptions. Second was that superintelligence is coming soon. Third was the kind of ambiguities around the term AI, so the idea that the term AI has a clear, unambiguous

meaning. The fourth was that AI is totally objective, that it's unbiased and just makes objective decisions from data. And the fifth was the idea that AI can solve any problem once it has enough data. After that, there was a bit of editorial discretion in sorting through

the rest of the results. But in order, they were: ethics guidelines will save us; we can't and shouldn't regulate AI; and AI equals shiny humanoid robots, which, I will admit, whether it made the top eight or not, was going in, because that's my favorite one. There were others on the list as well, but we ended up deciding to go with those eight.

Makes a lot of sense, yeah. And then process-wise, I guess, Alexa, do you have anything to add? Once you decided on the myths, what was the process for developing the website, putting it together, and communicating them well, before we dive into what they actually are?

Yeah, absolutely. So once we'd decided what the myths are, we talked about what could be the appropriate format for our myth-busting project. And the first idea we had was to make it more like a web-based interactive game. We were inspired by the project Survival of the Best Fit by Alia ElKattan, which educates people about bias in machine learning tools for hiring.

But then we realized we actually had a lot of content to cover, and also the topic has quite some nuance, and there's a lot of backstory and context to be explained. So we realized that the best format was probably a long text format.

Yeah, Daniel is a philosopher, so that is what he's strongest at. But we also decided that it would be nice to have some interactive elements, so these are the two widgets we might talk about later. Each myth also has a quite long bibliography section, and sometimes also a section that

lists common arguments for the several positions regarding each myth. And they kind of help you follow the debate, or even find your arguments in real life.

I guess maybe one other interesting thing about the process is that the issues are so interdisciplinary that I'm hugely indebted to a lot of different people who helped out, from technical people who gave an overview, including Alexa, but also

Agata Foryciarz, who's also a fellow student of yours there at Stanford, Andrey. But also a huge shout-out to,

let me get the exact title, the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet and Society, and to Jessica Fjeld, who was a participant in the original meeting I mentioned at RightsCon and who made the project part of the Cyberlaw Clinic's

work in the spring semester this year and got two students, Rachel Yang and Catherine Muller,

law students who worked on parts of the project. And I think that's an interesting part of it: you're stepping into legal arguments, philosophical arguments, technical specifications, everything. So it was challenging, a definite stretch for my brain to keep all of those balls in the air at all times, but really enjoyable to do. And, yeah, I'm very thankful to all the people who helped.

Makes sense. Yeah, quite the team for a project of this scale, of essentially trying to communicate what AI actually is, along with a lot of the important caveats, which, as we all know, takes quite a few words and explanations and so on.

And listeners, right now, as we dive into some of these myths, you can actually go to AImyths.org and browse the website. It's very straightforward: you click on each of the myths and you can read it; it's well organized and quite approachable. So you might find it interesting to browse it now or later to see what it's all about.

So, yes, Daniel, you mentioned you had this ranking of top five myths. So now we can perhaps discuss

what those are, what you found looking into them, and maybe some of the feedback that people gave you as well, starting with "AI has agency." In my mind, that sounds like basically the notion that AI can make decisions; often headlines are phrased like "this AI learned to

speak human language" or stuff like that, and it makes it sound like the AI did it, and not human developers and researchers. So that's my take on it. But how about you give us a bit more of an overview and any interesting bits about this myth that AI has agency?

I think you just said it perfectly. So it's about this notion that AI is an intelligent entity that acts according to its own desires, when of course AI is really a scientific discipline and a set of ideas and techniques. And what Daniel argues is that this is critical, because this idea

hides human agency and also hides human labor. Maybe, Daniel, you can talk more about this. Yeah, as Alexa said, there's a clear starting point with the dodgy headlines, but obviously the question of machines having agency can go very, very far philosophically. So I think a challenge of writing

that piece was figuring out the philosophical depth at which to stop. There are the obvious misconceptions to debunk, these dodgy headlines, and

exposing the hidden human labor behind AI systems. There was an example that came out just as I was finishing off the text, of an accounting firm, I think based in the UK, who claimed to have an AI accounting system, and it turned out that they were using offshore labor in the Philippines. So there are these very obvious cases in which it's actually a person hiding under the table.

And, you know, the name Mechanical Turk for Amazon's platform is pretty apt. But things get much more complex. And I would like to give a shout-out to Daniel Estrada, who's a philosophy professor who wrote a very detailed thread in reaction to this piece, in which he was really in agreement, I think, with the entire sort of

deconstruction of the complex human agency that's embedded in these systems. But he would actually go much further and say that we need to also turn that lens onto human agency and look at how our own agency is actually mediated by all forms of technological artifacts.

That was a challenge. I was aware of his work before, and I could see that that would be an angle where someone coming from his perspective could say you didn't go far enough. But at the end of the day, you need to think of the audience, and I didn't think that taking things to a total philosophical deconstruction of the notion of human agency was going to be useful in that particular piece.

I'd also like to pass to Alexa to talk about the headline rephraser, because I think it's probably my favorite thing on the website, and it's something that she developed.

Yeah, so we have this interactive bit, which is a headline rephraser. We take headlines about AI that express this notion that AI is acting according to its own desires, and then we rephrase them into something more realistic. So, for example, a headline could be "Would AI be better at governing than politicians?" And the rephrasing would be:

"Would unelected people who hide behind a smokescreen of technology be better at governing than people who were democratically elected?" We have 10 or 11 quite brilliant rephrasings from Daniel. And as a visitor to the website, you can also send us headlines you come across, and we might rephrase them and put them on the website.

Very cool. Yeah, I was browsing some of the headline rephrasings, and they're quite great. One I like is the headline "Facebook AI creates its own language in creepy preview of our potential future," which was a particularly annoying piece of AI hype a few years ago. And the rephrasing is "Facebook chatbot experiment turned off because it was set up badly and descended into absolute gibberish." So quite an accurate portrayal.

Basically, it was just an experiment set up badly, and you got some non-useful results, but then people tried to turn it into clickbait. And indeed, Daniel, as you said, this topic can go quite deep, so it's interesting that you tried to cover it at the surface level but also some of the deeper implications. Next, let's cover a couple more myths, maybe not all of them, because the website is there for people to browse.

But the one you mentioned as ranked second is that superintelligence is coming soon. This is one we see a lot, even in serious conversations about regulation and the implications of AI; people in power and politicians often seem to talk about it, even though many researchers think it's not very relevant.

So maybe you can also talk about what you found there, as far as feedback from the AI community and what you've seen in other communities. Yeah, this one is funny. It was difficult to decide, I think, which literature on superintelligence to focus on, because there are really well-reasoned, careful,

sober discussions of the kinds of topics related to superintelligence: the idea of an intelligence explosion, the difference between artificial general intelligence and narrow intelligence, and all of these topics. There's really good work out there. And then there's someone like Ray Kurzweil saying crazy stuff, like that in 2030,

AI is going to surpass human intelligence. I use the quote in the piece; I think he said something like, "AI will achieve and surpass human levels of intelligence by passing a valid Turing test in 2030." And these are the kinds of complete nonsense statements where, I mean, the thing that I focus on in the piece is that a Turing test is in no way a measure of

something surpassing human intelligence. The Turing test is an interesting thought experiment, but it's very complex, and there's been a lot of really

good work deconstructing what exactly a Turing test would show. There's also been some very good feminist work exploring it. Probably not an awful lot of people, but a higher percentage of the listeners to this podcast, would be aware that the original setup of the Turing test is that there's

a man and a woman communicating with the tester, and I think it's that the man is trying to convince the tester that he's the woman. Turing then sort of says: now let's imagine this with a man and a machine. And there have been some good analyses there. But I think in general, on the superintelligence topic,

I tried to give real, fair consideration to some of the ideas, but to show how, when they're taken up by AI influencers or AI thought leaders on LinkedIn, they often tend to distort the more

sober reflections of someone like Nick Bostrom into crazy predictions about what's going to happen in five years' time. Maybe just one anecdote as well, because

I had this constant thing when I was writing these texts, which I think most people who write texts have, where when you're near the end, you start to think it's totally superficial and useless and not going to be of any use to anyone. And with every one of the texts, on the day that I was at that point, I saw some headline that showed that, actually, no: this myth, and exactly the kind of superficial understandings that I was trying to debunk, were hugely prevalent.

And actually, last week I participated in a hearing of a venerable international institution, which I will not name. It was a hearing about AI and humans, trying to look at human rights considerations for what the potential risks of these interactions could be. And it was full of stuff about superintelligence.

There were statements like: in the next couple of years, AI is going to surpass human intelligence in all domains.

So you see that these topics are hugely embedded in the thinking of people outside of the real core research community. I think there were five experts on the panel, and we all told the hosts that this is absolute nonsense, and they persisted in asking questions about brain uploading and things, despite our reassurances that we're nowhere near that.


Yeah, I think we have some myths that are annoying but funny. But this one is a myth that is also quite dangerous in terms of how society debates the risks and problems associated with AI. When we say we want to debate risks, and we do it in the science fiction context,

and we ignore the tangible risks that are already affecting real people, then this has an actual impact on people's lives and on society. And I think this is a very important myth.

Agreed. Yeah, if people think that problems with AI come down to superintelligence, they can ignore all the real things we have today, like facial recognition and surveillance, all these things that we actually see happening and that need to be addressed, instead of hypotheticals that may or may not happen and certainly aren't happening yet. So a very good myth to tackle.

And maybe we can do one more, just because we could, of course, go quite deep on each of these, but you already have the website to let people do that themselves. We can finish up with the myth that the term AI has a clear meaning. That one is a little bit tricky, because you are getting into defining what AI is, and categorizing, and so on.

So I'm curious: how did you go about summarizing this issue in this one myth? And coming into it, did you learn more about how people view AI and how to define it? Generally, any reflections on this one? Yeah, as you say, that's almost the most difficult problem. And I think

we don't define AI, or claim to bring a solution to that problem, in the piece. But I think the aim was more to

start off, again, with some very clear misconceptions. And the example that we used was this AI babysitting service, which, if I'm correct, was a Stanford spinoff. It was called Predictim, which thankfully no longer has a website, because it was shut down. And it claimed to use advanced artificial intelligence to screen babysitter candidates.

And the idea was to start from the perspective of a worried parent who thinks: I really need to find a reliable person to look after my children, so why not use this advanced artificial intelligence that's based on hard data? So, to sort of

take people from that point of saying: if you hear this claim, there are many reasons why you might think it sounds reasonable. But then to break that down and ask: what was Predictim actually doing?

And to show that they were using natural language processing to analyze posts on candidates' social media. That involved trying to detect categories of language, such as bullying, harassment, and bad attitude, that are not clear-cut, not objective; they're highly contentious. And these types of

tools that try to detect inappropriate content have, for example, been shown to consistently flag content from African-Americans as inappropriate much more often than

content from other groups. So there's lots of documentation of how these tools can be biased, and even the categories in themselves are hugely contentious. They were also using computer vision technology to scan pictures on babysitters' social media to look for things like explicit content.

We raised the example that the British police were using a similar system to try to detect nudity, and it was consistently flagging pictures of sand dunes, falsely taking them for naked pictures. So, you know, I think once you break that down for someone and ask, when a company like this says they're selling AI,

what do they actually mean? One general recommendation that I made in the piece was to always take it down a level of specificity. So instead of saying "we're going to use AI," say, "okay, we're going to use this subfield of machine learning." And then you can ask more reasonable questions, like: can you

uncontentiously formulate the objective that you're trying to solve in a way that makes sense for the system? Is there, or could there be, suitable data? It just gets people to think much more reasonably about the claims that are being made. Another key thing that I wanted to do in the piece is give people a history of the term, because I think that really

demystifies things a bit: just pointing out to people that when Turing wrote his hugely influential papers, the term artificial intelligence didn't exist, and that it came out of the Dartmouth Summer Research Project on Artificial Intelligence. The term originally came from a funding proposal, which was submitted in 1955.

And also just pointing out that even at the time there was a debate. Newell and Simon, two researchers who were present at the Dartmouth workshop, and the only two who had actually developed

an AI system, for want of a better word, actually didn't like the term, and continued to use the term complex information processing for quite a few years after the workshop. But other participants, such as Marvin Minsky, were pushing for the term AI because it had more of a marketing appeal. So it's interesting that that debate existed right from the beginning.

I also wonder if this myth isn't the core myth. If history had decided to call this field complex information processing, would we have the myth of agency, and these bad robot pics, and all these other artifacts and distortions in the discourse?

I also want to point to the second widget we have, where we take quotes, partly from famous people, and replace the word AI with more sober terms. So again, there's complex information processing, or, depending on the context, you can replace the word with proprietary software or just plain machine learning.

You can also type in your own words and see how the meaning shifts when you use a different term. My favorite one is the quote from Elon Musk,

where he speaks about the risks of AI, and the original sentence is, "I mean, with AI, we're summoning the demon." And if you replace AI with another term, it could sound like, "I mean, with complex information processing systems, we're summoning the demon." So it really changes the story.
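(For a concrete picture of what that widget does: it amounts to a simple word substitution over a curated quote. Here is a minimal sketch in Python; the quote and the list of replacement terms are illustrative, drawn from the examples above, not the site's actual code.)

```python
import re

# Elon Musk, as quoted in the conversation above.
QUOTE = "I mean, with AI, we're summoning the demon."

# More sober replacement terms mentioned in the interview.
SOBER_TERMS = [
    "complex information processing",
    "proprietary software",
    "plain machine learning",
]

def rephrase(quote: str, term: str) -> str:
    """Replace the standalone word 'AI' with a more sober term."""
    return re.sub(r"\bAI\b", term, quote)

for term in SOBER_TERMS:
    print(rephrase(QUOTE, term))
# e.g. "I mean, with complex information processing, we're summoning the demon."
```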

Yeah, I was just playing with it, actually, and as you say, it's quite fun, especially with Elon Musk. I guess that's a case where AI is used in a very vague sense. But there are other examples, too, where it's talking about specific products, and, as Daniel pointed out, in those cases they are actually doing something; they actually have systems and particular algorithms

that maybe aren't what people imagine when they hear AI. So it's very useful to know that, in general, AI can refer to a lot of things; it's very vague. So always be skeptical about what it actually refers to.

And then, yeah, those were some of the top myths, and on the website there are also all the other ones to browse. So once again, that's AImyths.org if you want to take a look. It's actually kind of fun to browse, even if you know these myths and what AI is and so on; it's pretty interesting.

Certainly, even if you are an AI researcher, some of these other ones, like on ethics guidelines or AI regulation, might deal with things that you aren't as familiar with. So useful to all sorts of people, it seems to me. With that, we're going to go ahead and wrap up this discussion of AImyths.org. Thank you, Daniel and Alexa, for being with us on this episode. Thank you.

Great, thanks for having us. It was a pleasure to chat.

And thank you, listeners, for being with us for this episode of Skynet Today's Let's Talk AI podcast. You can find articles on topics similar to today's, and subscribe to our weekly newsletter, at skynettoday.com. Subscribe to us wherever you get your podcasts, and don't forget to leave us a rating if you like the show. Reviews are very helpful for knowing whether these sorts of things are actually useful. And be sure to tune in to our future episodes.