cover of episode Artificial Intelligence Ethicology (WILL A.I. CRASH OUT?) with Abeba Birhane

Artificial Intelligence Ethicology (WILL A.I. CRASH OUT?) with Abeba Birhane

2025/5/8
logo of podcast Ologies with Alie Ward

Ologies with Alie Ward

AI Deep Dive Transcript
People
A
Abeba Birhane
Topics
Abeba Birhane: I am a cognitive scientist and an AI ethicist. My research focuses on ethical issues in artificial intelligence, including data bias, algorithmic transparency, and AI's impact on employment. I have worked in AI ethics for a long time and have a deep understanding of how AI technology develops and affects society. I find that there is a great deal of hype in the current AI field; many people exaggerate AI's capabilities while ignoring its limitations and potential risks. The outputs of many AI systems, especially generative AI, are often unreliable and contain factual errors and biases. This not only misleads the public but can also harm society. AI training data usually comes from the internet, which contains a great deal of biased and harmful information, so AI systems inherit those biases and can deepen social inequities. The development of AI has also raised concerns about employment, with some worrying that AI will replace human jobs. I believe we need stricter regulation of AI to ensure it is safe, reliable, and fair. This includes auditing AI training data to reduce bias, improving the transparency of AI systems so people understand how they work, and making policy to address AI's impact on employment. Despite the risks, I remain optimistic about AI. It can be used to address many societal problems, such as healthcare, environmental protection, and disaster relief. The key is that we develop and use AI responsibly and ensure that it benefits humanity.

Deep Dive

Shownotes Transcript

Translations:
Chinese


Instacart is here to keep you on the couch this basketball season. With pregame rituals and postgame interviews, it's hard to find time for everything else. Let Instacart handle your game day snacks or weekly restocks with delivery in as fast as 30 minutes because it's bad luck to be hungry on game day. Download the Instacart app today and enjoy $0 delivery fees on your first three orders. Service fees apply for three orders in 14 days, excludes restaurants.

Oh, hey, it's the tag that you wish you'd cut out of your shirt, Alie Ward. And for every one of us that has seen AI become more and more present in our lives and wondered, is anyone driving this bus? I have here for you a chat with an expert who tells us exactly who is driving the bus and where it could be headed. Is AI evil? Does AI even care about us? Is it going to kill us? Should we feel bad for it?

Don't ask me. I'm not the ologist. We're going to get to it. Now, this expert is a senior fellow in trustworthy AI and an assistant professor at the School of Computer Science and Statistics at Trinity College in Dublin, Ireland. And they're a cognitive scientist. They research ethics in artificial intelligence, and they've published papers with such titles as The Forgotten Margins of AI Ethics.

toward decolonizing computational sciences, the unseen black faces of AI algorithms, and the values encoded in machine learning research. So they're on it. And I got to sit down and visit and chat in person when I was in Ireland just last month. Also, just a pleasing aesthetic side note, they were born in Ethiopia but live in Ireland. And this expert has the most melodic

cadence, just like Bjork. I was mesmerized. We're going to get to all that in a sec. But first, if you ever need shorter kid-friendly episodes with no adult language, we have Smologies. They're episodes in their own feed, wherever you get podcasts. Also linked in the show notes. That's Smologies. Also, thank you to patrons for supporting

Remily also says,

Remily, anytime's a good time. Thanks for that. Okay, let's get right into artificial intelligence ethicology. It's the ethics of machine cognition. Is it cognition? We'll talk about it. What does chat GPT stand for? Why is Siri a lady? Can you ask a robot for a cheeseburger yet? What happens when you're rude to a chatbot?

Also, how do artists prevent getting ripped off? How much energy does AI take up? When we all lose our jobs, booby traps, doorbell marks, commonly used fallacies, how the creators of AI feel about AI? What is hype and what is horror? What are the benefits of AI?

What happens if you assign a chatbot your homework and whether or not AI is the root of all evil or a pocket pal? With embodied cognitive scientist, professor, scholar, and artificial intelligence ethicologist, Dr. Abeba Birhane.

I have some questions that would take hours to unravel and they expect you to answer everything like one minute, two minute marks. They're rushing you to get off. So no, no worries. I am Abeba Birhane. She, hers. Great. And AI, you have been an expert in this field.

for a while, but I haven't known about AI for that long. How long have you been studying it? Technically speaking, I am a cognitive scientist. So I finished my PhD about three years ago in cognitive science. So halfway through my PhD, then around the end of my second year, I left the cognitive science department and joined a lab in

where people do a lot of testing, evaluating and testing of, you know, chatbots and various AI models. And what is exactly cognitive science?

Cognitive science is very broad. So traditional cognitive science tends to be about, you know, understanding cognition, understanding, you know, human behavior, understanding human interaction and so on. And cognitive science often is not taught at undergraduate level.

It's either at a master's level or a PhD level because cognitive science is really interdisciplinary. And Dr. Birhane says that cognitive science is actually a mishmash of disciplines or what's sometimes called the cognitive hexagon with sides representing philosophy, psychology, linguistics, neuroscience, artificial intelligence, and anthropology, according to some institutions. And anthropology can also mean social sciences.

Different institutions will phrase it their own way, but broadly speaking, cognitive science is a lot of stuff. So the idea is you will take, you know, important or helpful aspects from these various disciplines together.

And cognitive science then allows you to synthesize, to combine these various theories, even computational models to understand human cognition. That's the idea. So from philosophy, for example, you will learn how to question assumptions. You will go down into the various questions around what's cognition, what's intelligence, you know, what's human emotion. So philosophy really lends you the analytic tools, right?

Same from neuroscience. The idea is you get to learn how the brain works and use it in a way to synthesize from all these different disciplines. So that's traditional cognitive science. You can call cognitive science cog-sci if you want to. And she says that she's in a really niche specialty within cog-sci, and it's called embodied cognitive science, which isn't just about understanding the human brain.

Whereas embodied cognitive science is moving away from this idea of treating cognition in isolation, your cognition doesn't end at your brain and your sense of self doesn't end at the skin, but rather...

It's extended into the tools you use. Anything you do, you do it as an embodied self, as an embodied person. So your body, your social circle, your history, your culture, even your gender and sexuality, all these are important factors that play into, you know, your understanding, your cognition, your intelligence, your emotion, and so on. And when you're studying cognitive science, how do you not...

use your brain to think about your brain all the time. How do you get out of your brain? How do you do it? Yeah, yeah. Yeah, I mean, you have to, you have to use your brain. So you are familiar, you know, with Descartes' famous quote, cogito ergo sum, so I think therefore I am. The idea is you can know who you are. You can know you are a thinking

being, you can know that you are someone. So that is like using your brain to understand your brain, so to speak. Whereas again, the emphasis with embodied cognitive science is that, you know, all that is really very individualistic, even to confirm our existence. It's through conversations with others. Yeah.

So that's why embodied cognitive science emphasizes others and communities and your body as really important factors in understanding cognition. So embodied cognitive science kind of goes downstairs a bit, and it considers how a person's

body experiences the world and how the environment shapes thinking and perception. And cognitive scientists have this thought experiment that tickled me, and it envisions a little person in your brain interpreting inputs. But then who is in the brain inside that little person's brain? Like, does it have its own

inside the little person's... It's like Russian nesting dolls. And there are just infinity little humans inside humans' brains to interpret what the brain inside the brain is braining. And this is called, endearingly, the homunculus argument. And yes, embodied cognitive science...

is like it's more than a tiny person in your skull. And how can we truly understand artificial intelligence if we don't first grasp intelligence intelligently? And when we're thinking about who we interface with, with AI, way back in the day, there used to be a search engine called Ask Jeeves, and it was like a butler who would find you the answers to things. And

And now we have Siri and Alexa. Why do you think with AI they've gendered and they've personified

these voices that are actually a huge network of artificial intelligence. Is that to help our brains understand? It's a little bit of like a marketing strategy, but it's also a little bit of appealing to human nature. We tend to kind of gender and personify objects. So if you are

interacting with ChatGPT, for example, we tend to just naturally, you know, treat it as another person, another being, another entity. On the one hand, as you said, these chatbots, you know, there is no intention, there is no understanding, there is no soul in these machines. They are just pure machines. But also the developers and vendors of these systems, you know,

they tend to market them as kind of personified entities because it's much more appealing to think that you are interacting with another sentient thing. I always wonder if they made some of them female voices because we're more accepting, we're less threatened by females. We're socialized to have a mommy figure come and help us with something. It's not as threatening as

that it'll turn on us. And also it's like, oh, a lady will get it for you, you know?

Yeah, yeah. I mean, we naturally tend to think women are much more like nurturing and they have the role of helping you. So it is kind of related to the social norms that really dictate society. So it's really also to some extent leaning on that stereotype that if it's a woman, you know, it's approachable, it's there to help you.

And yes, I am far from the first person to notice that digital assistants tend to be ladies. And some historians and media scholars think that it starts early by hearing a female voice while you're cooking in the womb. And that's why people love a lady voice. Or that the first telephone operator in the late 1800s happened to be a woman with a great voice and then it just stuck.

But as these AI avatars start to have faces and personalities and pastimes and Instagrams, why do we see mostly younger, hotter avatars?

Well, there was a 2024 article published by Reuters Institute, and it reached out to this multimedia professor, April Newton, who noted that a gentle, well-modulated woman's voice is usually the default for AI assistants and avatars because, quote, we order those devices to do things for us, and we are very comfortable ordering women to do stuff for us. And also, just a side note for me, did you know that the word robot, it comes from a Slavic root for forced labor or slave? Yes.

It's creepy. And historically, humans have loved capable servants, but not too capable or else you're just begging for an uprising. So the future, it's also the past. Am I rooting for the robots now? I don't know.

When it comes to Siri or virtual assistants, how is that different than the AI that's been ramped up in the last couple of years? Is Siri and Alexa, are those AIs or are those just search engines? Is Google an AI? We call some things AI and then other things just computing. What's the difference?

Yeah, so what's really the difference between, say, traditional AI, whether it's implemented as a chatbot or as a predictive system, and generative AI that has really exploded over the past two years? Think of ChatGPT, Gemini, and Claude, and so on. The fundamental difference is that

Should I go into the technical details or remain clear of technical details? Yeah, no, you can give us some technical details. I'll try to keep it light. Okay.

There is no way of explaining the difference without getting into reinforcement learning. A typical classification system is an algorithm that you give it massive amounts of data. Think of like a face identification system. And these days, data sets have to be really big. You might even have, you know, trillions of tokens of images of faces and you train your algorithm like this.

this is a face, this is a face, this is not a face, this is a face, this is not a face. It's called machine learning. If you have succeeded in your training, then your AI should be able to tell you it's a face or it's not a face. So this is like a typical classification system.
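If it helps to see that face/not-face training loop as code, here is a minimal sketch written for this transcript rather than taken from Dr. Birhane's work. It uses made-up numeric feature vectors in place of real images, so every name and number in it is an illustrative assumption.

```python
# A minimal sketch (not from the episode) of "this is a face / this is not a face"
# supervised classification. The data is synthetic stand-in feature vectors, not images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each "image" has been reduced to 64 numeric features.
faces = rng.normal(loc=1.0, scale=1.0, size=(500, 64))       # label 1: face
not_faces = rng.normal(loc=-1.0, scale=1.0, size=(500, 64))  # label 0: not a face
X = np.vstack([faces, not_faces])
y = np.array([1] * 500 + [0] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # the "training" step
print("held-out accuracy:", clf.score(X_test, y_test))          # "is it a face or not?"
```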

What we consider AI is a very broad term that encompasses so many different subcategories. Just a side note, how alone am I in not knowing that Apple's virtual assistant Siri stands for something? Speech Interpretation and Recognition Interface? Did you know Siri was an acronym? I didn't. Also, Apple is reportedly freaking out behind the scenes about ChatGPT for the last few years, being generative and having chatbot

capabilities and longer conversations in Siri. So apparently a lot of their efforts in self-driving cars got shuttled over to their AI division. And a lot of people at Apple are like, I can't even speak publicly about this.

The other big subcategory under the broad umbrella of AI is NLP or natural language processing. So this is the area that deals with human language. So you have audio data that you feed into the AI system. And the idea there is that, you know, you are building an AI system that can even make predictions about human language.

NLP tools would learn, for example, predictive text. So what the algorithm is doing is kind of predicting what the next word is likely to be. So it's just predicting the next token, the next token. So that's kind of traditional classification or predictive systems.
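As a toy illustration of that next-token idea (mine, not the guest's): count which word follows which in a small corpus, then always guess the most frequent follower. Real language models learn far richer statistics, but this is the shape of "predicting the next token."

```python
# A toy sketch of "predicting the next token" with simple bigram counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1  # tally which word was seen after which

def predict_next(word: str) -> str:
    """Return the most frequently observed word after `word`."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))   # -> "cat" (seen twice after "the")
print(predict_next("cat"))   # -> "sat" (ties keep first-seen order)
```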

And that was machine learning or NLP, which is natural language processing. And those deal with visual data turned into tokens and they predict language like when your phone knows you better than you know yourself. And it's heartwarming and it's scary. It's like love. Now, ChatGPT, for example, I didn't know this, but the GPT stands for generative pre-trained

transformer. And a transformer is this type of deep learning system, and it converts info into tokens and can handle more complex processing from language to vision to games and audio generation. So it's definitely a step up from just simple prediction. Whereas over the past year with generative AI, these are

AI systems that do more than classification, more than prediction, more than aggregation, they are called generative systems. They produce, you know, something new. So image generators, for example, you can put a text description as a prompt and

And the AI system will produce an image based on your description. The same with language systems. So ChatGPT, for example, you put in a prompt and it's able to produce new answers. So this is what's new with generative systems. And the systems also need to learn which of the words in a language model are the important ones,

which is part of the self-attention mechanism. And then they generate based on statistics, like what is the most probable way, based on the data sets it's learned from, of completing a certain sentence or prompt.
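For the curious, here is a bare-bones sketch of the scaled dot-product self-attention idea, written only for illustration; real transformers add learned query, key, and value projections, multiple heads, and positional information, none of which appear here.

```python
# Bare-bones self-attention: every token's output is a weighted mix of all tokens,
# with weights from a softmax over dot-product similarity. Illustration only.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x: (sequence_length, model_dim). Each position attends to every position."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                            # token-to-token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ x                                       # context-mixed outputs

tokens = np.random.default_rng(1).normal(size=(4, 8))        # 4 made-up token embeddings
print(self_attention(tokens).shape)                          # (4, 8): same shape, mixed context
```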

So the training data really has a really significant impact in what outputs, whether it's image or text, what outputs these systems can generate. Let's get to the juicy part. So with any tokens, would that data be in tokens? Like you were saying, facial recognition might have a trillion tokens. Does AI kind of scrub that?

what we know of impressionist art and science fiction and anime, does it scrub it and grab a bunch of tokens so then it can have those as reference points? So the training data sets is one of the most contested issues in AI because data is constantly harvested. So like

your search history feeds into some kind of AI one way or another. I don't know if you are like me, if you signed up for some kind of bonus point, you go to a grocery shop and you tap that. So that is kind of like in the background, that's kind of collecting your behavioral data. So that data may breeze through infrastructure like Google and then just be on its merry way to a third party aggregator or a broker

or an AI company itself, and they just yum, yum, yum, gobble up the data.

So this kind of practice makes a lot of data gathering for the purpose of training AI systems either illegal or borderline illegal. Yes, because, you know, first of all, there is no consent. People are not even aware that, you know, their data is being used to train AI. So that's just number one on the general level. Let's get to art. But you will also have noticed over the past

year or two, the creative community, writers and artists are realizing that their work, their novels, their writing, their arts are being used to train AI systems again without any compensation or with very little compensation after the fact if they go and contest the use of their data.

Because large companies that are kind of developing AI, you know, think of Google, DeepMind, Meta, OpenAI, Anthropic,

They are businesses. They operate under the business model. They are commercial entities. Their objective is to maximize profit. My work is auditing, for example, large scale training data sets. People don't have access to what kind of data set these companies hold.

So we don't have any mechanism to kind of take a look to scrutinize what's in the data set. Where does it come from? How large is it? These are the kind of questions that simply there is just no mechanism for tech corporations to be transparent, to open up for us to have an objective understanding.

So big companies are like, no, you can't see it. But auditors like Dr. Birhane and colleagues can observe open source publicly available similar data sets as proxies or substitutes to kind of figure out what might be going on organizationally behind the locked Willy Wonka factory gates of the other big tech companies.

So we do get an idea, but through proxy data sets. So to answer your question, I mean, if you are an artist, a writer, if you have produced novels and so on, it's very, very likely that your work is being used to train AI, but there is very little legal mechanism to actually have a clear idea. Is it like they're stealing a bunch of ingredients and then making something, but they're like, you can't see our recipe. And you're like, you stole the ingredients. And they're like, you can't see our recipe. Yeah.

We made something with the... Is that sort of how it's going? Yeah, so the fact that they are protected by proprietary rights as a commercial entity means that there is virtually no mechanism to force these companies to open up their data sets. Of course, you can encourage them. You can kind of, you know, appeal to their good sides and so on. Good luck. Yeah.

But yeah, there is no legal mechanism. There is no law or regulation that says you have to open access or you have to, you know, share your data set. And why not, you ask? Of course, if that is to happen, then, you know, all these AI companies would go out of business. As it is, a lot of them are under a lot of lawsuits. Yeah.

Yeah, you know, from Meta to OpenAI, OpenAI itself is under a lot of lawsuits, including from the New York Times. So the minute they open up their data, it becomes clear that a lot of people's suspicions, especially, you know, from the creative and artist communities, are right, and that's why they are going to court.

Is there any recourse that artists or writers have? Like, is there anything they can do other than trying to open a really big, expensive lawsuit? Yeah. So writers and creatives are organizing

for class action suits. I know there are a bunch of class action lawsuits, both in the U.S. and in the U.K. Now, last August, there was this landmark case and it was decided in favor of artists. It found that generative AI systems like Midjourney and DeviantArt's DreamUp and Stability AI's Stable Diffusion were violating copyright law by using data sets of billions of artistic examples scraped from the web. And Getty Images sued Stability AI over

Stable Diffusion for copyright infringement. And the evidence was like almost funny if it weren't such a bummer. But apparently the generative AI so relied on Getty Images that it started adding a blurry gray box to some of its AI output, which was learned from the iconic Getty Images watermark.

It's embarrassing. Now, in March of this year, a judge ordered that this lawsuit between The New York Times and OpenAI can proceed despite OpenAI begging it not to. And the newspaper, New York Times, along with a few other journalism outlets, alleges that OpenAI scraped a lot of their work to train ChatGPT. So there are lawsuits, but there are also just huge glaring holes.

But also in the UK, for example, they are considering a regulation that leaves a massive loophole around copyright, which leaves artists and writers with absolutely no protection at all. So people are really organizing on the ground and they have massive amounts of petitions signed and so on. But also there are technical...

I don't want to say solutions, technical kind of remedies. So you can use data poisoning tools, for example. You have... Oh. Yeah, so you have... Like a booby trap? Like one of them is called Nightshade. So for images, for example, you would insert various adversarial attacks that will make...

the data unusable for machines, because it's maybe a tiny pixel that's been altered, so it's not visible to the human eye, but it kind of messes with the automated system or how machines use these data sets. So there are various tools like that. Another type of AI booby trap is called a tar pit, and it sends

AI crawlers in this infinite loop where they just get stuck and they can't escape. And I just love to think of like the AI system on the toilet, just scrolling, just not being able to exit and...

get back to work. Even if you find technical solutions now, the big companies are likely to, you know, come back with another solution that makes your own solution defunct. So you really have to be adapting to that constantly. So I think a viable solution has to come from the regulation, from the legal space.
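To make the data poisoning idea a bit more concrete, here is a purely illustrative sketch of nudging pixels by amounts too small for a person to notice. This is not Nightshade's actual algorithm, which uses much more sophisticated, model-aware optimization; it only shows the general shape of the trick, and every value in it is assumed for the example.

```python
# Illustrative only: perturb an image so it looks unchanged to people while the
# numbers a model ingests have shifted. Real tools like Nightshade are far more targeted.
import numpy as np

rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float32)  # stand-in RGB image

epsilon = 2.0                                     # max change per channel, out of 255
noise = rng.uniform(-epsilon, epsilon, size=image.shape)
poisoned = np.clip(image + noise, 0, 255)

# A +/- 2/255 change is invisible to a human viewer, but the model's inputs have moved.
print("max per-pixel change:", np.abs(poisoned - image).max())
print("fraction of pixels altered:", (poisoned != image).mean())
```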

And how do you feel about, is it Sam? Oh my gosh. Sam Altman. Thank you. I almost said Sam Edelman, which is a shoemaker. What is wrong with me? How do you feel about kind of some recent changes? Like last May, I believe he went before Congress in the U.S. saying like, hey, we better watch out. And now I,

I think he was like at the inauguration. So we chatted about this in 2023's Neurotechnology episode because just a few months prior, in May 2023, Sam Altman was in the news a lot as a cautionary voice. And as we said in that episode, if you're wondering why this was a big deal, Sam Altman is the head of OpenAI, which invented ChatGPT. And in spring of 2023, he spoke at the Senate Judiciary Committee's subcommittee on privacy, technology, and the law, at a hearing called

Oversight of AI: Rules for Artificial Intelligence. He also signed a statement about trying to mitigate the risk of extinction. And he told the committee that, quote, AI could cause significant harm to the world. My worst fears are that we cause significant, we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways. I think if this technology goes wrong, it can go quite wrong.

And we want to be vocal about that. We want to work with the government to prevent that from happening. And ultimately, Altman urged the committee to help establish a new framework for this new technology. And though in 2016, Altman declared that Donald Trump was terrible, he recently backpedaled on that. And Altman said that he's changed his mind and donated $1 million to Trump's inaugural fund in 2024.

So Altman's thoughts on AI regulations likely have pivoted in the last few years since that hearing. It doesn't seem like regulations are going to happen very fast. Yeah.

So this idea of, oh, we've got to watch out because our AIs might become sentient and might be out of control, might cause existential risk and lead to human extinction, that unfortunately is a very

popular and commonly disseminated worry emerging out of AI, but there is very little scientific evidence for it. People have done a thorough analysis of how such a possibility is just 0% likely. It's just... Really? But human beings can be so terrible and they're learning from us. Yeah. So there is no intention.

There is no wish or there is no desire to act on something, to do something. But at the end of the day, it really is...

A massive complex algorithm, of course, and that is to some extent unpredictable. But that doesn't mean, you know, AI systems as they develop further, you know, all of a sudden develop intentionality or wishes or interests or needs. I mean, you and I, you know, as human beings,

we do something and we get satisfaction out of it. I have a motivation for doing the research I do. And if that doesn't happen, I feel disappointed. I can feel sad. I also feel accountable when I put out, you know, a research paper. I know if there are errors in it. I know I'm the one responsible for it. So...

There is none of that. So when an AI system gives you an output, it's not because it might be worried that it's an incorrect answer or because it wants to please you. It's just a chatbot that is designed to provide, you know, given answers, again, based on a prompt. So this idea of AI systems causing existential risk leans on...

This huge leap of faith that requires you to believe that there is intention, there is emotion, there is motivation, all these human characteristics, all these things that make us human. But it's just not there. It can't emerge out of nowhere. That's part of what makes us different and unique as individuals, as biological organisms. These are things that are hardwired in us.

These are also things that make us human. This is why I fret still. Of course, we have to worry about powerful people using AI to do terrible things. And what worries me is over the past year, and especially since the rise of Trump and since the Trump administration came to power, you have a lot of large corporations really abandoning their voluntary pledges to protect people's

fundamental rights. So Meta has, for example, walked back their commitment to DEI, their commitment to fact check and monitor their social media platforms. Against hate speech too as well, which is...

Exactly. What really is worrying also now is you have superpowers in powerful governments like the US government, the UK government, even the European Union itself, using AI, moving into AI for surveillance, for military purposes, for warfare.

And a lot of AI companies, starting from OpenAI, Meta, Amazon, Google, they had voluntary principles not to use AI for military purposes. But over the past year, all of them have abandoned that. So even here across in the EU,

You have a French AI company called Mistral announcing that they are open to working with European governments to provide military AI. So, of course, we have to worry about governments using AI under the guise of national security, which really means, you know, monitoring and surveillance.

and squashing dissent. And really, this is against fundamental rights to freedom of expression, freedom of movement, and so on. So we have to worry about AI, but AI in the hands of powerful governments and people in positions of power rather than the AI itself, because the AI can't do anything by itself.

Is it sort of like the guns don't kill people, people kill people kind of a situation? Yeah, exactly. How is that working out for us in the U.S., though? Well, death by firearm is the leading cause of mortality for teens and children in the U.S., according to the Pew Research Center. And over half of our nearly 50,000 gun deaths a year in the U.S. are suicides. That's not going well. And that's because the NRA slogan, guns don't kill people, people do,

is what is known as bumper sticker logic or a misdirection, also called a false dichotomy, or plainly speaking, a fallacy, according to philosophers. So giving AI a sense of technological neutrality is a bit misguided. The regulations being walked back is terrifying, especially trying to put trust in a government to stop things when a lot of our

people in power don't know how to use their own printers. You know, so some of the questions in the congressional hearings are like, how does this even work? Does Google track my movement? Does Google, through this phone, know that I have moved here and moved over to the left? Which is terrifying. So maybe they don't have morals and guilt and things like that and ambitions, but...

I was looking at some research showing that AI is being trained to become more and more sexist, more and more xenophobic, more and more racist, use more and more hate speech. And is it learning from the worst of humanity? Is it amplifying it? Is that just exposing how much hate is in the world? Yeah. So let's maybe walk back 20 years. Okay.

Because that's when, you know, real progress in AI started to emerge. I mean, we've had a lot of the core principles for AI, you know, since the 1950s, 60s, 70s.

like some of the foundational papers about reinforcement learning, deep learning were written in the 1980s. So Geoff Hinton's famous paper on, I think it was convolutional learning was written, you know, late 1980s. We don't have to go too deep into it, but I do want to tell you that Geoffrey Hinton is apparently considered the godfather of AI and a leading figure in the deep learning world. And in 2024...

He won the Nobel Prize for his work. He's also worked for Google Brain, and then he quit Google because he wanted to, quote, freely speak about the risks of AI. Quit Google so he could talk about it. Now, in 2023, during a CBS Saturday Morning News segment, he warned about deliberate misuse by malicious actors, unemployment,

and existential risk involving AI. He is very much in favor of research on the risks of what could become a monster that he helped create. He's like, yo, we need some safety guidelines and regulations, buddies, and that is not really happening. But yes, he is among a few who over the last many decades drove these innovations. But what really made the AI revolution possible is

the World Wide Web. With the emergence of the World Wide Web, it became possible to kind of scrape, to kind of gather, harvest massive amounts of data from the World Wide Web, you know, through chat forums or domains like Wikipedia. They are really a core element of training AI, at least for text data.

So that means that a lot of our training material for AI comes from the World Wide Web, whether it's our digital traces, whether, you know, it's the pictures we put on social media, pictures of your kids, your dogs, yourselves, and so on.

Or the kind of infrastructure, digital infrastructure, like Google is everywhere and has dominated whether you want to email or, you know, prepare a presentation or write a document. Google has provided the infrastructure. That means they have the infrastructure to constantly harvest training data. This means that a lot of the data is

that we are using for training reflects, you know, humanity's beauty, but also our cruelty and the ugliness of humanity. And just last week, a tech report released by Google admitted that its Gemini 2.5 Flash model is more likely than its predecessor model 2.0 to generate results outside of its safety guidelines. And images are even worse than text at that.

And I mentioned this in a 2023 episode we did with Dr. Nita Farahany about neurotechnology. But around Juneteenth of that year, I saw this viral tweet about ChatGPT not acknowledging that the Texas and Oklahoma border, the panhandle, was in fact influenced by Texas desiring to stay a slave state, which is a fact that ChatGPT would not acknowledge. So

So Dr. Birhane notes that when an AI is built on racist, sexist, xenophobic, etc. data sets, the results, like history itself, are not kind to minoritized identities, she says. It reflects, you know, societal norms. It reflects, you know, historical injustices and so on. Unless you really delve into the data set,

and ensure that you do a thorough job of cleaning the dataset. We've audited numerous datasets and you find content that shouldn't be there. You find, you know, images of genocide, images of, you know, child rape. One of the early datasets we audited back in 2019 was a dataset called 80 Million Tiny Images. It was held by MIT and

we found several thousand images, really problematic images, images of black people labeled with the N-word, images of women labeled with the B-word, the C-word, and words I can't really say on air. So while the upside of AI is detecting cancer from scans earlier or predicting

tornado patterns. There's also so much concern. Now, Dr. Martin Luther King Jr. observed and proclaimed that the arc of the moral universe is long,

but it bends toward justice. But I think we might consider that the arc of the internet is short and it bends towards smut and hate. So you can assume any data you collect from the web is really horrible. And in one of the recent audits, actually, we found an overwhelming amount of

women and women-related concepts represented by images that come from the pornographic space. So massive amounts of the web are also really, you know, pornographic and really, you know, problematic content. So you have to do a lot of filtering. So as a result,

This is why, you know, DEI initiatives, this is why obligations to audit your data set to ensure that, you know, toxic contents have been removed and so on, this is why it's so critical. So an AI is

only as ethical as its data sets. And the internet is a weird, dark place where people say things they would never say in person. So the data sets are feeding that. But as we are seeing now, a lot of these companies are abandoning their pledges and we're really walking backward. But for any given AI system, whether it's a predictive system or a classification or generative one, you can assume that deeply held societal injustices and norms

will be reflected in how that AI performs in the kind of output the AI gives you. So that's the default. So we have to work backwards to ensure we are removing those biases. Let's say that some of the comments online, some of the hate online is AI generated comments, which...

Sometimes I'll look at, now, X, and I'll say, who are all these people? Like, why are comments getting meaner and meaner? With Facebook, with a lack of fact checking, more and more sort of hateful speech? Does that mean that the next tokens and data sets pick up on that and say, oh, this is how people think. And then the next one. So does it get amplified, like mercury toxicity in a tuna fish? That's one way of putting it. Yeah. Okay. Yeah.

Yes, yes. You are encoding those biases and you are exaggerating them. Yeah. The technical drawback is that we train a given AI for next word prediction, for example. It's based on

you know, this massive amount of data that kind of tells you how people text, how people use language, for English, for example, how people, you know, construct a coherent sentence. That data, that training data, comes from actual people's activities, people's interactions. That is your baseline, so to speak, when you are modeling how, you know, language operates.

But now, as you said, as the World Wide Web is filled more and more with, you know, synthetic text or synthetic data that comes from generative AI systems themselves, then your AI system has no frame of reference. It tends to forget. So the quality of the outputs starts to deteriorate. So this is called model collapse. Okay. Does this keep you up at night?

I mean, I know that it's like, don't be afraid, don't be afraid. But it's also like, this is very new territory for humanity, right? Yeah, but at the end of the day, I mean, people should be in control. If an AI system starts producing outputs that is rubbish, that is irrelevant, I don't think it should scare us. It should make people like, okay, that's not helpful to me anymore. So I'm not going to use it.

Maybe the more it unravels and crashes out, the less people will rely on it. But of course, that hinges on being able to tell that it's spitting nonsense at you. And in this day and age, the world is so profoundly absurd that truly anything is believable.
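If you want to see the feedback loop she's describing in miniature, here is a toy simulation of model collapse, written only for illustration; real model collapse in large language models is subtler, but the mechanism, each generation retraining on the previous generation's output, is the same shape. All of the numbers and the word-frequency setup are assumptions for the example.

```python
# Toy illustration of model collapse: each "generation" relearns word frequencies from
# the previous generation's output, so rare words disappear and diversity collapses.
import numpy as np

rng = np.random.default_rng(0)
vocab = np.arange(1000)                       # pretend vocabulary of 1,000 word ids
probs = np.full(1000, 1 / 1000)               # generation 0: humans use all words evenly

for generation in range(6):
    corpus = rng.choice(vocab, size=2000, p=probs)     # sample a training corpus
    counts = np.bincount(corpus, minlength=1000)
    probs = counts / counts.sum()                       # "retrain" on that corpus
    print(f"generation {generation}: distinct words remaining = {(probs > 0).sum()}")

# Words that miss one corpus get probability zero and can never come back, so the
# count of distinct words only goes down: the model forgets the original diversity.
```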

And Dr. Birhane says that public education is key and just getting the word out that a lot of what we think about AI's capabilities is just big corporations pumping out hype and PR. But the auditors on the inside, like her and her lab, know that, boy howdy, hot damn, it is a bunch of horse pucky flim flam and not to believe the hype. The actual performance is nowhere near

what the developers claim. So these are the facts that we really have to communicate. A lot of the AI systems that we are interacting with are actually subpar in terms of performance, in terms of what they are supposed to do, in terms of what people expect them to do. Because these big corporations have really mastered public communication and PR, a lot of like the failures or the drawbacks of AI systems

are new to people when you actually communicate it. But this should really be like common knowledge. And if people want to use AI, they should know both the strengths and what they can do with it, but also where the limitations are and what it can't do for them. Does abstention work? Does not going on meta and giving them more fodder, does not using chat GPT, does any sort of like boycott work?

Yes and no. On the one hand, so a lot of these AI systems have really cleverly been integrated into the social infrastructure. So, for example, I'm not on Facebook. I haven't been on Facebook for over 10 years. But the apartment complex I live in,

can only be communicated with via Facebook groups. I still refuse to create a Facebook account, but situations like this really give you very little option to abstain, to not use these platforms. And you can't avoid Google, for example. Google Search and Gmail and Google Docs. It makes it really difficult. If you want to apply for a job, almost all companies now use

some kind of AI to filter your CV before it reaches a human. So in some

senses, you don't even have the option to opt out. If you are, you know, someone looking for a job, you can't say, oh, I don't want you to use an AI system to sift through my CV. It's just like... It's going to happen. Yeah, it's going to happen. Dr. Birhane says that it's pretty unavoidable. And I have asked tech lawyers and even they don't read Apple's terms and conditions. They're like, I just checked the box.

So instead of using WhatsApp, which is owned by Meta, which, you know, really gathers all your text, all your information, we can move to other messaging apps like Signal. So Signal has, you know, end-to-end encryption. There is no backdoor. Nobody can access it, not even governments. This is one of the things Meredith Whittaker, the president of Signal, has been really strong about in standing up to large governments,

that nobody should have back-end access that gives them the opportunity to gather data. And yes, Signal is run by a nonprofit foundation, signalfoundation.org. And Meredith Whittaker is Signal's president, and she had worked at Google for 10 years, and she was raising concerns about their AI. And she was also a core organizer of the 2018 Google walkout in protest of sexual harassment there and pay inequities. She also advises government agencies on AI safety and privacy laws.

So Signal, good. Yay, Signal. And many recently laid off government staff that I know of will only communicate via Signal, which is kind of telling in terms of their own safety concerns. But yes, use Signal. So we can do some things. We can use less and less of these large corporations' infrastructure and we can use, you know, more open source tools,

But also sometimes, you know, it's just out of your control. But every little bit helps, and every bit of awareness, you know, kind of culminates, and it will eventually lead to, you know, this massive switch. I hope at least. That's encouraging. I hope. Yeah. And can I ask you some questions from listeners?

Is that okay? Yeah, for sure. But before we do, let's give away some money. And this week, Dr. Birhane selected the cause, the municipality of Gaza and UNRWA, which directly supports Palestine refugees and displaced families in Gaza. They say every donation, no matter the amount, helps them reach families with life-saving food assistance, shelter, health care, and more. And for more info, you can see

donate.unrwa.org, which is linked in the show notes. And for more on the ongoing humanitarian crisis in Gaza, please see our Genocidology episode with global expert in crimes of atrocity, Dr. Dirk Moses, which we will also link in the show notes. So a quick break now.

Oh, Mother's Day gifts. More slippers? How many feet does she have? Probably not more than two. What does your mom want? Love and appreciation. This means an Aura Frame. Trust me. So an Aura Frame, it's a digital photo frame. You upload via an easy app any pictures you want on there. And then this beautiful digital photo frame, which was named the best one by Wirecutter, means that your mom, instead of having to look on her tiny phone, her Aura

Frame just sits there on her dresser and it scrolls past all these wonderful photos that you probably forgot you took. You can even just keep uploading more photos. There's unlimited storage. I got one of these for my dad. No joke, we brought it with us to the hospital when he was sick.

Aura Frames are such a good gift. And if it's for your mom, not only will she be grateful it's not another cardigan, but she'll also love that an Aura Frame means that she gets to see more of you. And Aura has a great deal for Mother's Day. For a limited time, listeners can save on the perfect gift by visiting auraframes.com to get $35 off plus free shipping on their best-selling Carver Matte Frame. So that's auraframes, A-U-R-A, frames.com. Promo code is Ologies.

You can support the show by mentioning all the G's at checkout. Terms and conditions apply. Stripe is the go-to choice for AI companies, from early-stage startups to scaled enterprises. 78% of the leading AI companies use Stripe to go to market quickly and scale globally. That includes pioneers like NVIDIA, OpenAI, and Perplexity. Stripe has developed cutting-edge tools to improve everything from fraud detection to checkout optimization.

Whether you're aiming for incremental gains or planning for enterprise transformation, see how Stripe can help at stripe.com.

If you don't know about our flyer deals on Instacart, this message is for you. Flyer deals are like strolling through your favorite store looking for deals, but instead, you're scrolling on your phone. Because getting delivery doesn't mean you have to miss out on in-store deals, like the creamer that doesn't upset your stomach or the pasta sauce that you can't not buy when it's on sale. Download the Instacart app, shop flyers, and never miss a deal. Plus, get delivery in as fast as 30 minutes.

Okay, we are back. Let's run through some questions from your real squishy brains made of human beings out there. There's some great ones.

Job displacement. Carla DeAzevo, Alia Myers, Red Tongue, Jennifer Grogan, Ian, Jenna Congdon, Rosa, Rebecca Rome, Other Maya, Sam Nelson, Howard Nunes. All these people wanted to know, in Ian's words, will all jobs be obsolete soon? Did the people working on AI give any thought to compensating people for the lost income?

Jenna Congdon said, when will AI get so good that human writers are basically crowded out of a job? This goes for visual art as well. In a capitalist economy, when you've got to hustle to make money, as it is, what is going to happen job-wise, do you think? Or do AI experts such as yourself think?

So some of the worry about job displacement is genuine and grounded in, you know, real worry. You hear even the so-called godfathers saying things like,

You shouldn't bother learning to code, or, like, the job of software engineering, for example, will become obsolete and so on. So whether you're a software developer or a writer or an artist, Dr. Birhane says: I don't think AI will fully automate, will fully replace the human workforce, because at the end of the day,

What even the most advanced AI systems do is really kind of aggregate information and kind of output something very mediocre, whether it's image or text.

Some of them are so good, though. Some of the art is so good. And you're like... But some of the art, it's not just the pure, the raw output. People have tweaked probably like a thousand times. People have tweaked it. People have spent hours tweaking.

perfecting the right prompt and so on. So there are always people in the loop. There is always, whether it's data preparation, you know, data annotation, data curation, to building the AI system itself, to then kind of ensuring the output is something appealing. You really, you need people through and through. So for me, as a former newspaper journalist, and I was also a newspaper illustrator, I...

I'm not as optimistic. So, so many writers are copywriters who are making content and articles for websites to raise their profile. And now I'm hearing from those people that articles are just written by AI and they are full of shit. And just doing this aside is making me depressed.

And my chest hurts, but Dr. Birhane is an expert, so I'm going to try to find some bright spots. And before, she had mentioned that lawsuit with OpenAI and the New York Times, and I was looking for it, and I found a recent article. This was published literally yesterday, which had the headline, AI is getting more powerful, but its hallucinations are getting worse. A new wave of reasoning systems from companies like OpenAI is producing incorrect information more often.

Even the companies don't know why. That's the headline. And this New York Times article explained that AI systems do not and cannot decide what is true and what is false. And sometimes they just make stuff up, a phenomenon that some AI researchers call hallucinations. And in one test, the article says hallucination rates of newer AI systems were as high as 79%.

And I also want to note that my spellcheck tried to get me to change the its in the headline to one with an apostrophe, which would be incorrect. So computers, what's going on? But yes, Dr. Birhane says that a lot of journalism has been replaced by AI, even though we all know that the generative system is unreliable.

It hallucinates. A lot of the time it gives you information that sounds coherent, that seems factual, but it's just absolutely made up. It even sometimes gives you citations and so on of things that don't exist. So we always need people to babysit AI, so to speak. So as a writer, you know, your hours might be reduced and you might be getting paid less,

and your company might be bringing AI to kind of do the bulk of the job, but still you can't put out the raw outputs because most of the time it's not even legible. So the role of writers and artists and journalists and so on becomes more of kind of a babysitter for AI, verifying the information that's been put out, kind of ensuring it makes sense and so on. That's right, Kenny. The babysitter is dead.

To some extent, the answer is yes and no. Humans will always remain at the heart of AI. The minute human involvement ceases is the minute AI stops operating. Because AI is human through and through, as I said, you know, from the data that's gathered from humans and...

And so much work goes into data preparation, data annotation, cleaning up the data, detoxifying the data. And unfortunately, a lot of these tasks are outsourced to, you know, the developing world. So you have a lot of data workers in Kenya, in Nigeria, in Ethiopia, in India, for example,

that really do the dirty work of AI. There are even a bunch of stories where you have Amazon checkout, for example, AI checkout or self-checkout where Amazon was introducing this AI where you can just collect groceries and your items and just walk out and the AI is supposed to kind of identify what you have picked up and charge you from your credit cards for whatever you have used.

But then it turns out that it was actually data workers in India that were scanning every item you are picking up. So...

Oh, man. What a world. Yeah. And I mean, McDonald's also recently partnered with IBM or one of those companies to have like an AI drive-thru where AI systems take orders. And they had to close it within the next few weeks because people were getting orders of, like, you know, bacon on top of ice cream and things like that. You added bacon to my ice cream. I don't want bacon. What else can I get for you?

Why raise the national minimum wage for the first time since 2009 when you can just spend billions of dollars tweaking unpaid machines? Like, welcome to the future, maybe. So I guess the point I'm trying to make is like you always need humans for AI to function and operate as it's supposed to.

Because at the end of the day, these are really mere machines that don't have, you know, intention, understanding, motivation, and so on like we humans do. So maybe our jobs will look different, but there will be jobs. Yes. I know a lot of people, myself included, wanted to know the environmental impact of

Lily, a bunch of folks, and first-time question askers Eleanor Bundy and Megan M. And we also did a recent episode for Earth Day with this climate activist and humanitarian rights lawyer, Adam Met, who said that AI could be solving some environmental concerns, which is optimistic. But what is an AI expert's take on that? Megan Walker asked, environmentally, how bad is AI when compared to the current computing we do? Yeah, what's going on? Yeah, yeah, yeah. How much energy does it use?

Yeah. So again, like we have very little information about training data, the kind of energy consumption used by AI systems is very opaque. There is very little transparency. Okay. But what is the damage generally? So generative AI really consumes massive amount of power compared to traditional AI. For example, if you are using Google to...

put a prompt, say, you know, how many glasses of water should I drink per day? And if you do the exact same prompt and you ask the generative system, such as ChatGPT, people estimate you use about 10 times more energy to process that query and to generate answers. I wanted to go straight to the source. So I used Google AI.

and ChatGPT for the first time, asking them both, how much energy does ChatGPT use as opposed to Google? Now, Google AI said in what I hope is a snotty tone that ChatGPT consumes significantly more energy per query, five to 10 times more electricity than a standard Google search. Now,

It cited a 2024 Goldman Sachs report titled, AI is poised to drive 160% increase in data center power demand. Then I asked ChatGPT, and it said that its version 4 can use up to 30 times more energy than a basic Google search. And it also noted, I like to think defensively, that Google has had decades to optimize for a lower footprint.
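For a rough sense of scale, here is some back-of-the-envelope arithmetic. The 0.3 watt-hour baseline for a classic search is a commonly circulated estimate rather than a figure from this episode, and the 10x and 30x multipliers are just the ranges mentioned above, so treat the results as illustrative orders of magnitude only.

```python
# Rough, illustrative arithmetic only; none of these are precise measurements.
GOOGLE_SEARCH_WH = 0.3   # assumed energy per classic search query, in watt-hours

for label, multiplier in [("low estimate (10x)", 10), ("high estimate (30x)", 30)]:
    per_query_wh = GOOGLE_SEARCH_WH * multiplier
    daily_kwh = per_query_wh * 1_000_000 / 1000   # a million such queries
    print(f"{label}: ~{per_query_wh:.1f} Wh per query, "
          f"~{daily_kwh:,.0f} kWh per million queries")
```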

Now, Dr. Birhane says that the energy consumption of generative AI systems has indeed become a big issue and that in countries like Ireland, the data centers are power hogs. The compute resources required to run AI systems equal or exceed the total amount of energy required to run Irish households. But in places like Texas,

Sometimes that energy consumption is taken away from households to run data centers. Wow. So it kind of results in reduced energy for households. And this is before we even get into the massive tons of water that you need to cool down data centers.

Oh, yeah. I didn't even think about that. Yeah. And the water also has to be pure because you can't use, say, you know, ocean water or seawater, because the sea salt might damage the servers and so on. So again, there is competition. It tends to be water that would otherwise be used for households, and as a result,

You know, people tend to pay for the consequences of that. So yeah, water consumption is another massive area as well. And do you think that more companies will look toward some sort of nuclear power for their supercomputers? Or is that still too highly regulated? Yeah.

I think companies like Google are actually talking about using nuclear power. But yes, that option is being considered. Yeah. How about healthcare? Several people, Benjamin Breneschwert, Annalise de Jong, Emile, Nikki G asked, how can AI be ethically applied to healthcare like data analytics, treatment options, medical imaging interpretation, second opinions? Is there some hope there for it?

Yeah, I think there is some hope. There is some hope for sure. I think there is some hope in, you know, numerous domains for AI to be useful. Okay. However, that just remains a theory. It's possible in theory. Oh. But the problem, there are a bunch of problems. One of them is that...

Generative systems are fundamentally unreliable. So, for example, there is a new audit that came out, I think, towards the end of January, where they looked at this new AI tool where the system kind of records your conversations between, say, a healthcare provider and a patient.

And it summarizes the conversation and it kind of, it's supposed to reduce a lot of the work for nurses and so on. And what they found was that in some cases, eight out of the 10 summaries were hallucinations. So generative systems tend to be unreliable. And the other thing is because a lot of these

tools that are supposed to be used in healthcare tend to be built by businesses with the objective of maximizing profits. They tend to have a different kind of objective than, say, you know, what's good for the patient. So another famous case is UnitedHealth, that is in court at the moment, where they were using a suite of about 50 algorithms to look at mental health services and

And what they found was that they were using, you know, cost as a proxy rather than the need of the patient as a proxy. And they were cutting a lot of services, a lot of, like, you know, therapy services and medications and other necessary services. Again, because they are, you know, they are looking at the wrong motivation, the wrong proxy. They are looking to save the company money rather than to ground what they do in the needs of the patient. Let's go!

So if we correct, for example, hallucinations and biases in AI systems, and if we kind of, it's impossible to strip down all, you know, capitalist motivations, but if, you know, capitalist motivations come second to, you know, the needs of the patient, then it's possible to kind of develop AI systems in various areas of healthcare, right?

That prioritize patients, that prioritize people as opposed to just, you know, inserting technology for the sake of having technology. Yeah. And also for using technology to maximize profit as opposed to ensuring patient safety. Which is, once again, good luck. I mean, our healthcare is...

Above and beyond frustrating. But a lot of people wanted to know, Samwise, Emily Heard, Amalia, Magda, Kousaoka, wanted to know, Kira Hendrickson asked, first time question asker, how do we feel about AI and chatbots and using them in high schools? Schoolwork, Samwise, AI use in schools, thoughts, is there a way to flag it? Are we doing education an injustice? Mm-hmm.

Yeah, so on the one hand, I know some people that find using AI chatbots really helpful.

You give it a prompt, it gives you just a bunch of answers. Of course, these are people that know how to craft the perfect prompt, that know where AI can be useful and where it might fail you. So with all that in mind, it can be useful, but you need to be an expert.

Having said that, for young kids, studies are starting to emerge. For example, they did a controlled study, I think, with over 3,000 students, where some of the students were given chatbots to help them with, I think it's maths problems.

The others weren't. And they did a test. So what they found was that the kids that had chatbots did better than the kids that didn't. Then they performed another test a few weeks later and they found that the kids that used chatbots performed way worse than the kids that didn't.

So people are realizing that these systems inhibit learning. Of course, you know, education is not just information dissemination, the teacher going into class and just telling the students facts; rather, it's an interaction. It's a two-way street, both for the student and the teacher, developing skills,

especially critical skills to analyze and to decipher fact from fiction, you know, information from misinformation and so on. And when you use AI chatbots without knowing their limitations, you tend to kind of

trust the output, you tend to treat it as fact, but also it inhibits your learning, it inhibits your critical skills. And if you don't have the knowledge to begin with to verify the answer, you have no way of knowing whether what you are getting is correct or incorrect.

So in the long term, studies are coming out to show that they might seem helpful in the immediate term, but in the long term, these chatbots might be inhibiting the learning process.

Last listener question, and I know my husband has this question too. DVNC, Sherry Rempel, and Chelsea and her dog, Charlie, want to know, Chelsea asked, why does AI do better when you threaten it? Is that ethical? Because it doesn't feel like a good precedent to set in any part of life. DVNC asked, is it weird that I feel the need to say please and thank you when talking to chatbots? Will the AI overlords be nicer to me when they take over? No.

I assume all of my conversations will be logged for eternity. And Jarrett, my husband, also doesn't use ChatGPT very much. But when it came out, he was trying to teach it to be civil. And I was like...

Boy, I don't think that's going to work. Do manners matter? Yeah, yeah, yeah, yeah. So models like ChatGPT have what they call a knowledge cutoff date. So your interactions won't really feed into the learning system of the model. The training data set for ChatGPT, I think, ends around 2021 or 2022. So

ChatGPT, for example, can't give you a coherent answer for any event that has happened recently. So different models are using data from different timelines. They have to collect it and clean it and process it first. So it's not as real time as I thought it was or as some people might expect.

But when you speak to it, was it aggressively, threateningly? Yeah, threateningly. Why does it do better when you threaten it? So this is the first time I'm hearing this. I should try it out, I should check it out and see if that also happens to me. But yeah, it's the first time I'm hearing it.

Okay, let's hit the books for this. Specifically, a 2024 study about how to interact with large language models, and it's titled, Should We Respect LLMs?

A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance. So this abstract explains that they tested prompts in English, Japanese, and Chinese on language models, and that as the politeness level descends, the answers generated get shorter. However, on the far side of the rude scale, impolite prompts often result in poor performance.

But overly polite language does not guarantee better outcomes. And the best politeness level is different according to the language. And they say that this suggests that LLMs not only reflect human behavior, but are also influenced by language, particularly in different cultural contexts.
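And if you want to poke at this yourself, here is a tiny sketch of the kind of comparison the researchers ran. To be clear, this is not their code; it's just a minimal illustration that assumes the OpenAI Python SDK, an API key set in your environment, and a placeholder model name, and it asks the same question at three politeness levels and counts how many words come back.

```python
# Minimal sketch (assumptions: OpenAI Python SDK installed, OPENAI_API_KEY set,
# model name is a placeholder). Compares answer length across politeness levels.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "very polite": "Could you please explain why the sky is blue? Thank you so much!",
    "neutral": "Explain why the sky is blue.",
    "rude": "Explain why the sky is blue. Hurry up, or you're useless.",
}

for label, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    # The study scored answer quality too; word count is just an easy first look.
    print(f"{label:>12}: {len(answer.split())} words")
```

Again, the actual paper measured quality and accuracy, not just length, so treat this as a toy probe rather than a replication.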

So what about the future? The researchers say that it is conjectured that GPT-4, being a superior model, might prioritize the task itself and effectively control its tendency to, quote, argue at a low politeness level. So as it matures, it just won't engage. It's like this new generation of AI has been to therapy, if it were a person, which

which it's not. And as to the AI overlords, again, I mean, it's just models. It's just, you know, data sets and algorithms and, you know, connection of networks. Okay. There is no kind of all-knowing, God-like...

all-seeing AI. But of course, you know, the people that are running AI companies come close to that, because they have access to the data, because they have access to the algorithms. So you might worry about those people, you know, using your data. It has, like,

almost invisible but very nuanced downstream impacts. So in the US, for example, authorities are forcing companies like Meta to give up data so that they can hunt down women who had abortions in areas where abortion is prohibited. Law enforcement is working with Amazon, for example, with, was it the Amazon Ring doorbell? Oh, right. Yeah.

This is that camera-enabled and Amazon-owned Ring doorbell. So law enforcement like ICE uses that kind of data from those devices to, you know, even deport people. So what we should worry about is not really AI overlords, but these companies working with powerful entities to really kind of identify people that might be in trouble with the law or that might be doing something

You know, that violates the law, because that data gives them access, gives them knowledge about the whereabouts, the interactions, the activities of people. And previously, according to a 2020 Newsweek article titled Police Are Monitoring Black Lives Matter Protests with Ring Doorbell Data and Drones, Activists Say, it's reported that Amazon Ring has video-sharing partnerships with more than 1,300

law enforcement agencies across the U.S. However, in January 2024, Ring said that it would stop letting police departments request and receive users' footage on its app. Now, on the flip side, some Ring doorbell owners are posting on the Ring Neighbors app when ICE raids are going down locally and they're alerting their community. Now, Ring, of course, notes that those are user-generated posts. It has nothing to do with them. Whether or not they'll censor those user-generated posts is like

anyone's guess. Hey, let's take a welcome departure from reality for a sec, shall we? Do any movies get it right? Like, does The Matrix get it right? Does AI, that old Spielberg movie, does anyone actually get AI right? Or does that drive you absolutely insane to watch TV? So I love science fiction, actually. Like, The Matrix is one of my favorite movies. I knew it. I knew it. It's a good one.

But also, that's nowhere near reality. You have to treat science fiction as science fiction. Some really good science fiction really brings you into a world that you couldn't even envision. So I love that element about science fiction. But a lot of these, like, robot-uprising, Terminator-like...

movies are really just for entertainment. There is nothing that can be extrapolated and said, oh, this could happen with real AI. But you have...

kind of very nuanced sci-fi movies that nail it. So you have Continuum. It's not a movie, it's a series. It was on Netflix a while back, a few years back. So what happens here is that, you know, as AI companies become powerful,

they take over government and they become, you know, the bodies that really govern society. So that kind of sci-fi is much closer to reality than, you know, Terminator-like movies. Yeah. How about Black Mirror?

Oh, Black Mirror is so good. I mean, Black Mirror, there are some things where, when it came out, it was just like, wow, this could happen. And now it's like, oh, that has happened. Or it's like, oh, yeah, this is, you know, this is what's happening with this and that government. So, yeah. Wow. Yep.

And the last two questions I always ask are always worst and best. I guess your most loathed thing about AI and your favorite thing about it. I guess we've talked a lot about cautionary, but like in terms of what you do or in terms of your job, worst and best thing? Yeah. So the worst thing is really just the hype.

As a researcher, I have my own research agenda, but the hype is so destructive. You see something that's not true being disseminated, going viral. And, you know, as an expert, it's really troubling. So you have to stop what you are doing and do some work to kind of correct it, or at least attempt to. So, yeah, a lot of the hype really is what really...

gets on my nerves, and it also becomes a problem in terms of getting my own work done. But what excites me about AI is...

I'm still extremely optimistic about AI. But unfortunately, a lot of the AI I get excited about is not something that results in, you know, massive profits. So, you know, using AI for disaster mapping, using AI for soil health monitoring and so on. These are things that really excite me, but there is no monetary value in developing AI for these kinds of systems. So these are the things that really get me excited, that really...

make me feel like, wow, this is a powerful tool that we can use to actually do some good in the world. Yeah. We could make sure that everyone is fed and has healthcare and that resources are allocated in a way that's fair. Yeah.

And we just don't because of money. Yeah, because it doesn't make you money. Yeah, which is, I think, once again, money is the root of all evil. Yeah, yeah, yeah, yeah. Thank you so much for doing this. This has been so illuminating. And it's great to talk to someone who knows their shit about this. Thank you so much for having me. I really enjoyed our conversations. So ask real people real, not smart,

and important questions, because how else are we supposed to learn anything? So thank you so much to Trinity College's Dr. Birhane for sitting down with me and making the trip to Ireland so eventful. I loved this talk. And you can find links to her and her work in the show notes, as well as to the cause of the week. We are at Ologies on Meta-owned Instagram and on Bluesky. And I'm giving my data as Alie Ward with just one L on both.

And our website has links to all the studies we talked about, and that link is in the show notes. If you're looking to become a patron, you can go to patreon.com slash ologies, and you can join up there. If you need shorter, kid-friendly versions of Ologies episodes, we have them for free in their own feed. Just look for Smologies. That's also linked in the show notes.

Please spread the word on that. And we have Ologies merch at ologiesmerch.com. Thank you to Aaron Talbert for adminning the Ologies Podcast Facebook group. Aveline Malik does our professional human-made transcripts. Kelly R. Dwyer does the website. Noelle Dilworth is our flesh and blood scheduling producer. Human organism Susan Hale managing directs the whole show. Alive editor Jake Chafee helps put it together. And the connective tissue lead editor is Mercedes Maitland of Maitland Audio.

And Nick Thorburn made the theme music using his brain and ears and fingers. And if you stick around to the end of the show, I tell you a secret. And this week, it's two. One is that I think I'm going to be shooting something next week. And I will tell patrons about it first, but also I'll do some posting on social media if and when it happens. I'm really excited. I don't mean to be secretive, but just send good vibes next week. I'll tell you as a secret after it happens.

And the other secret is, before I went to Ireland, I got a couple of those disposable film cameras, because it's like, ooh, what is this? There's film in this. And I took all the pictures and I haven't gotten them developed yet. And I kind of feel like the longer you wait to get them developed, the more you'll like them. And so I don't know what the appropriate amount of time is to forget about this disposable camera and then get it developed. If it should be like

a couple more months or if I should get it developed in a year. And so now I just have this disposable camera in my backpack and I don't know how long I should... I also don't know where to get it developed, if I'm being honest. But anyway, if anyone has thoughts about that, feel free to advise me. That is a very analog update here for me. All right. Please do not use ChatGPT to write...

or illustrate anything important, hire an illustrator if you can. Illustrators, writers, artists, musicians, please let them live. They are alive. Okay, be good. Bye-bye. Hackadermatology. Homeology. Cryptozoology. Meteorology. Marveled at our own magnificence as we gave birth to AI.