
Janelle Shane on the Weirdness of AI

2021/5/25

Last Week in AI

People
Andrey Kurenkov
Janelle Shane
Topics
Janelle Shane: The AI Weirdness blog I created aims to showcase the limitations of machine learning algorithms through humor. The blog covers a variety of experiments, such as using text-generating neural networks to produce pickup lines and running adversarial tests on image recognition algorithms. These experiments reveal the limitations of AI algorithms when handling unexpected inputs or situations outside their comfort zone, and the hilarious or unsettling ways algorithms go wrong. The text or images that algorithms generate may superficially mimic the structure and rhythm of human language or imagery, but they lack deeper meaning and emotion. The blog also explores how AI algorithms in real-world applications, such as self-driving cars, struggle with edge cases, which can lead to serious consequences. In the course of writing "You Look Like a Thing and I Love You," my understanding of AI deepened, particularly regarding the human factors in AI and its impact on labor. The book aims to explain how AI works and its limitations to a general audience in a fun, accessible way, and to help people understand AI better. It summarizes five principles of AI weirdness: 1. AI is not smart enough; 2. AI has roughly the intelligence of a worm; 3. AI cannot truly understand the problem it is asked to solve; 4. AI will do exactly what it is told; 5. AI will take the path of least resistance. These principles capture why AI produces strange results and how to avoid such problems in practice. I believe the public's perception of AI is mistaken: people assume AI should be very smart, when in fact AI behaves strangely once pushed outside its comfort zone. Through the blog and the book, I hope to help people better understand AI's limitations and actual capabilities, and to avoid unrealistic expectations. Andrey Kurenkov: Through this conversation with Janelle Shane, my understanding of AI deepened. Her work reveals AI's limitations: AI is not intelligent in the way human intelligence is, but is its own distinct kind of thing. AI performs well in expected situations but exposes the limits of its abilities when handling unusual ones. Furthermore, public perception of AI diverges from reality, partly due to media portrayals and marketing hype. Many companies associate their brands with AI to attract investors, while in practice the AI may depend on human operators behind the scenes.


Chapters
Janelle Shane describes AIweirdness.com as a machine learning humor blog where she explores the limitations and breakdowns of algorithms, especially when the inputs are unusual.

Transcript


Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear from AI researchers about what's actually going on with AI and what is just clickbait headlines. We release weekly AI news coverage and occasional interviews, such as today. I am Andrey Kurenkov, a third-year PhD student at the Stanford Vision and Learning Lab and the host of this episode.

In this interview episode, we'll get to hear from Janelle Shane, the creator of AIweirdness.com and the author of You Look Like a Thing and I Love You. Janelle Shane works as a research scientist in Colorado, where she makes computer-controlled holograms for studying the brain and other light-steering devices. She is also a self-described AI humorist.

On AIweirdness.com, she writes about AI and the sometimes hilarious, sometimes unsettling ways that algorithms get things wrong. Her work has also been featured in the New York Times, The Atlantic, Wired, Popular Science, and more. And she has also given a TED Talk, The Danger of AI is Weirder Than You Think.

Her book, You Look Like a Thing and I Love You: How AI Works and Why It's Making the World a Weirder Place, uses cartoons and humorous pop culture experiments to look inside the minds of the algorithms that run our world, making AI and machine learning both accessible and entertaining. So very excited to have you as a guest. Thank you for joining this podcast, Janelle. Hey, thanks so much for inviting me.

And I'm sure our listeners will also find this really interesting. So let's just jump in. I think like many people in the AI sphere, I've come across your work through Twitter.

and particularly through what you do on AIweirdness.com. So maybe can you give a quick summary of what that website is, what it is you do there first? Sure. Put most simply, it is a machine learning humor blog. And that can range from, I play a lot with text generation, some things that

generate or classify images. So I'm kind of poking at the edges of what these algorithms are good at and where that sort of breaks down, especially when the inputs get weird. The real world can be adversarial if you probe in the right way. And I really kind of delight in finding those examples.

Right, exactly. So yeah, what have been some examples that you think kind of exemplify what you do there?

Well, you know, the one that ended up being the title of my book is a project where I was using, you know, a text-generating neural net to generate pickup lines. I've tried it a couple of different ways. The way I tried it first actually was when

I was training one of these neural nets from scratch. So I had a list of hand collected pickup lines from around the internet, which were terrible by the way. So I kind of regretted the project at that point, but, you know, fed them to one of these things that learns to predict the next character in a sequence. And so with no prior training, you know,

This looked at my couple of hundred pickup lines and learned to reproduce some of the key phrases and remix some of the other ones. Most of the pickup lines were completely incomprehensible in a sort of delightful way. And my favorite line out of those was, you look like a thing and I love you. And so, like I said, I ended up choosing that for the title of my book.
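For readers who want to see the "predict the next character" idea concretely, here is a minimal, runnable sketch. It is not Shane's actual setup (she used neural nets such as char-rnn); it is a toy character-bigram model in Python, and the seed pickup lines are placeholders I made up.

```python
import random
from collections import defaultdict

# Stand-ins for Shane's hand-collected training lines.
lines = [
    "are you a magician? because when i look at you everyone else disappears",
    "do you have a map? i keep getting lost in your eyes",
    "is your name wifi? because i am feeling a connection",
]

# Count, for each character, which characters tend to follow it.
counts = defaultdict(lambda: defaultdict(int))
text = "\n".join(lines) + "\n"
for current, following in zip(text, text[1:]):
    counts[current][following] += 1

def sample_line(max_len=60):
    """Generate text one character at a time, like char-rnn but far simpler."""
    out, ch = [], "\n"  # "\n" marks the start of a line
    for _ in range(max_len):
        followers = counts[ch]
        if not followers:
            break
        chars, weights = zip(*followers.items())
        ch = random.choices(chars, weights=weights)[0]
        if ch == "\n":
            break
        out.append(ch)
    return "".join(out)

print(sample_line())
```

With only one character's worth of context, the output mimics the surface rhythm of the training lines while garbling the meaning, which is the failure mode Shane describes; her neural nets simply track longer contexts.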

Right. Yeah, I was looking over that post, and it was interesting how despite pickup lines being somewhat cringey, sometimes what the AI spat out was kind of wholesome and innocent. Right. Yeah, because the cringey stuff, a lot of that is in the subtext,

which kind of flew right over the metaphorical head of this AI. It could manage some of the phrases, the surface appearance, the rhythm, but not really the deeper meaning, which is where the cringiness was. Yeah. And of course, for listeners, this project, AI Weirdness, has been running for years. So there's a lot of different

things it's been applied to: a lot of recipes, a lot of naming of things like superheroes and Pokemon, Dungeons and Dragons creatures. So lots to explore and to delight in there. So going to the very beginning, I guess, how did you start with playing around with neural nets and kind of stumble upon their weirdness?

Well, I first actually encountered them when I just started college. And this was in 2002, starting undergrad school.

before people were using these things commercially. And I happened to attend a talk by Erik Goodman at Michigan State University, who was doing some work with genetic algorithms. And just the examples he described were really cool, like trying to come up with a more efficient flywheel and ending up with something really alien-looking, or trying to design a car bumper that would crush in just the right way

in an accident and having something come out that works, but we don't quite understand why. And, you know, you then have a solution, but you have to go back and discover how the solution actually works. And sometimes the answer is that the thing kind of hacked your program, hacked your reward function, came up with a slightly different solution, you know, reinterpreted the problem. And I just found that so interesting,

almost in an examining-a-natural-phenomenon sort of way. So I started working in that lab, looking at genetic algorithms, and then, since optics is another of my interests, I ended up applying these sorts of search algorithms to shaping ultra-short laser pulses to try to blast apart chemicals in particular ways. And

that turned into: well, actually, when we understand the laser problem better, we realize we don't actually need the learning algorithm; we've gathered the insights that we need just through getting this background information. So the machine learning aspect kind of dropped away from what I was doing, and I was doing more straight-up optics from that point on.

But I remembered that interest. And so years later, about 10 years later,

I happened upon a blog post by Tom Brewe, who had used one of these text-generating recurrent neural networks to come up with new recipes, or to try to imitate recipes. And these things were garbled in the most hilarious way. And, you know, that kind of brought flooding back how interesting I thought

these algorithms were and how weirdly your solution can come out sometimes. So that was what sparked me to: oh yeah, I have now read all of the recipes on this website. I want more. The only way to get more is to generate more myself. I guess I better learn how to do this. Right. Yeah. That makes a lot of sense. I think

a lot of people probably found neural nets interesting when first encountering them. They're pretty fascinating, and AI is in general. And it's been very interesting to see, as it's become more explored, it becoming more accessible and applicable, and many people picking it up and exploring it.

How did you find the experience of sort of getting into it, putting together the code and then just running it? Was it straightforward? Back in those times, it wasn't straightforward. So there was quite a learning curve. You had to be determined to

do this. So this was at the time I was using char-rnn by Andrej Karpathy, and, you know, you have to install that; there's no GUI. I was training it on a 2010 MacBook Pro, so there was a lot of waiting involved as this thing would churn through its calculations, and, you know, things would break,

or the documentation wasn't clear; it was not really designed as a user-friendly GUI thing. So things are a lot different now, where these pre-trained algorithms already have a lot of basic internet training, and you're often interacting with them through, like, a web browser even. So it's not running on your computer, it's running on the cloud now.

You're clicking to interact. There's limited stuff you can do with it, but the algorithm itself is a lot more powerful. Yeah, it is a lot different. And then you have programs like RunwayML that have a desktop app where you can run a whole bunch of these different models in different domains and not have to do any coding or wrangling to get things installed.

Yeah, that's true. I've played around with those myself a bit, and as someone who hasn't found the time to play with GANs, even though I could install them, using those tools is a lot nicer.

Yeah, I mean, yeah, lowering that barrier, you know, even for somebody for whom that's not ultimately a barrier, like you could learn to do it if you were determined enough. But how do you know if it's worth your time? So having these kind of sandbox things, having people publish

web demos along with their papers, so that you don't have to try to install everything and reproduce it from scratch, you know, that is such a nice thing when people do that. And then, of course, it opens it up, it makes this accessible to people who are not computer scientists, so you get more artists using these tools, and that's really exciting to see too. For sure. And I think

That's also an interesting aspect of your blog and your book, which we'll get to soon, is in exposing the weirdness of AI, you're really also exposing kind of the reality of AI, which is that it's not super smart. It's not really even similar to human intelligence. It is its own kind of thing.

So in running a blog, have you found that you had people outside of AI discover it and be surprised by it or grow to understand AI better because of it? Yeah, and that was a comment that I get a lot, kind of.

Along the lines of what you're saying just now is that I would get people surprised, like, isn't AI supposed to be smart? How can this be the same thing that is being used commercially and that, you know, is making medical decisions, or that's recognizing faces, or self-driving car navigation? You know, how is this all technology coming from the same place? Surely they're all at different levels. And it isn't. It's, you know, it's really showing what happens when you get these algorithms a little bit out of their comfort zone, kind of where that veneer of

predictability, human predictability, sort of frays. Like, I go looking for the edge cases. And I know, you know, especially if you're trying to publish a paper or sell a product, the inclination can be to stick to the areas where

this algorithm, whatever it is, is doing its very best work, and kind of showing what it can do, showing it off. And I'm kind of occupying a different space right now, since I don't have a paper to publish, I don't have a model to sell, I'm not trying to impress anybody. I'm just trying to find something interesting. And there are a lot of interesting things to find when you poke at these models in certain ways.

You know, there's one experiment I did with image recognition where I

gave it a photo that I'd taken on vacation in Scotland. And, you know, it's one of these, it's the Isle of Skye, there's sheep, there's green hills. And the photo captioning program was getting all of that. Like, I think the thing that really blew my socks off was that it knew which hills on the Isle of Skye were there in the background. Because I guess where I was standing taking the picture was a very popular spot.

A popular enough spot to stand and take the picture that this had sort of seeped in through scraping photos from the Internet that, you know, it knew that spot. But then pushing it a little further, like doing the AI weirdness angle on it, I wondered because I had kind of seen some other things people had mentioned that made me sort of wonder, well, what would happen if I photoshopped out all of the sheep?

And so I did that, like I used that healing tool, replaced every sheep, really zoomed in, got every dot. And they were still there in the caption. And that kind of led me on this, wait a second, you know, how, you know, how can it remember that there used to be sheep there?

is this, you know, did I miss a sheep? Is this homeopathic sheep? What's going on here? And so that kind of led me to this series of experiments where I showed other pictures of Scotland that did not have sheep in them, that never had sheep in them. And the sheep would appear in the captions or sometimes there would be horses in the caption or cattle grazing in the caption. And like in one particular spot on a wet, misty day, it

tagged that there was a rainbow in the picture when there definitely wasn't. And so, yeah, this kind of exposes that there may have been some shortcuts, there may have been a fundamental missing of the point of what a sheep is: it's not that this landscape is sheep. So, yeah, I like doing experiments like that and engaging with the community,

you know, in the course of that. So in doing these kind of adversarial sheep experiments, not only did I send it landscapes that could have sheep in them and don't, but I also got people to send me pictures that, um,

had sheep in unusual places. So, you know, I think one example I gave it was a picture of somebody's sheep in a car, and it was identified as a dog.

And then there was another picture where a kid was holding a goat in their arms, and the goat was identified as a cat. And another picture where there were goats that had climbed up into a tree, I guess to eat the leaves, and they were identified as birds and giraffes. Yeah.

These are the extremes: competence when you're doing the expected thing (standing in Scotland, see sheep, take picture of sheep, upload) versus doing the unexpected. Yeah.
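A rough sketch of how one might run this kind of caption probe today. The Hugging Face transformers image-to-text pipeline and the ViT-GPT2 checkpoint are my assumptions, not the captioning system Shane was poking at, and the photo file names are hypothetical.

```python
from transformers import pipeline

# Load an off-the-shelf image captioner (downloads the model on first run).
captioner = pipeline("image-to-text",
                     model="nlpconnect/vit-gpt2-image-captioning")

# Compare captions before and after editing the sheep out of the photo.
for path in ["skye_with_sheep.jpg", "skye_sheep_removed.jpg"]:
    result = captioner(path)
    print(path, "->", result[0]["generated_text"])
```

If the model has learned that this kind of landscape simply comes with sheep, the second caption may still mention them.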

Right. And I think, yeah, as you said, AI these days tends to work well when you use it as expected. But what you get when you stray out of there can be quite interesting, which your blog definitely showcases.

Yeah. And this, you know, has implications in so many areas, where the tough part really is these edge cases, the bits that stray from the expected. So you have self-driving cars that seem to be doing a good job until there's a billboard with a stop sign on it, or a truck drives by with a picture of a bicycle on it.

Or sometimes, you know, the people setting these programs up may set them up so that a kind of not-very-rare case is still classified as unexpected. So if you set it up so that pedestrians can only be detected in crosswalks, then a pedestrian who's not in a crosswalk becomes an edge case. That becomes a potentially deadly problem.

Exactly. Yeah. So on that topic of sort of the implications for AI in general and kind of the takeaways for understanding AI and some of the more serious aspects of it, we can...

get to talking about your book, which I think is a really good overview of not just the weirdness of AI, but really how AI works, for non-technical people, in a very accessible way. So the book, again, is You Look Like a Thing and I Love You, which is an attempted pickup line by AI. And,

Yeah, maybe first you can dive into when did you get the idea to write it and what was the motivation there? Mm-hmm.

I guess the motivation really is trying to communicate, in an interesting, accessible way, the answer to this question of "isn't AI supposed to be smart?" Because, you know, trying to dig into under which conditions AI is smart, where can we trust it, and what kinds of questions should we ask?

And when I started writing the book, and even more so now, everyday people who are not computer scientists are having to make important decisions about AI: setting policy on what kinds of algorithms or applications should be allowed or what the regulation should be, or even when

someone's trying to sell you a product and you have to decide: is this legit? Could this possibly work? Or, you know, could it potentially have serious bias issues that are going to prevent it from working in the majority of cases?

These are all important things that people are encountering every day. And to kind of gain an intuition, to have these stories to remember, the stories of the sheep or the stories of the kind of head-slapping self-driving car mishaps, you know, having these in the back of your head, or the story of, you know, an algorithm that

decided that the most important characteristics for success at a company were that the candidate should be named Jared and play lacrosse. All of these kinds of real world examples, I think having these stick in people's heads, having

the process of learning this stuff be enjoyable, be funny, have there be cartoons on every page. Like, I did feel like it was not only a thing I wanted to exist because it would be fun to write, but a public service as well. Yeah. And as a, you know, person who is pretty knowledgeable, I definitely saw that:

not only is it fun, but it actually does convey how AI works in a very accessible way that is still accurate. So that's really good to have out there for sure. It's been used in classrooms, business classrooms, in courses that are not focused around AI or not computer science courses. And so that's really cool to see, too, to see

educators pick this up and say, oh, yes, we can use this. I see. That's very cool. So the book covers a lot of ground. It covers kind of how AI works, some of these limitations, dangers, various aspects of it. But one thing that stuck out to me early on is you cover the five principles of AI weirdness, which is kind of summarizing in a way

why or how AI can get weird and result in the sort of strange things you have on your blog. So I'm curious, you know, how long it took you to think through this sort of summary of five principles... Yeah. So, yeah, the challenge is always distilling the book down into a few bullet points and having those be succinct and sticky enough. So...

And this was, I don't always remember what the five principles are, because they really are intertwined in so many interesting ways. But I kind of, yeah, I ended up distilling it down to, like, a progression of things that kind of follow from one another. You know, starting with the overall idea that, you know, the danger of AI is not that it's too smart, but that it's not smart enough. And...

People are trying to get AI to do things that it has no business doing, and it's not going to tell you that. And to give people an idea, like really roughly, what do I mean when I say it's not smart enough? The next principle on that list is AI has the approximate brainpower of a worm.

And, you know, the worm, you know, you can debate between various invertebrates or maybe simple vertebrates, depending on how optimistic you are. Like, we don't understand how smart a worm is. So this is a moving target. But I wanted people to be thinking

earthworm, thinking simple. You know, in terms of number of neurons and the complexity of the way these artificial versus biological neural nets are connected, we are much closer to talking worm level than to talking squirrel level, for example, when we're talking about the AI we're going to be having. So I wanted to really get that across, to sort of reframe expectations of how much this worm is going to help you out.

And also how much it's going to understand the problem you want it to solve. So that was number three: AI does not really understand the problem you want it to solve. That takes a broad understanding of the world, the context. And that is a really big, complex thing that we haven't

encoded in these algorithms. And then the next thing on the list is that, you know, given that, number four, AI will do exactly what you tell it to do, or at least it will try its best to do that. So if you tell it to, you know, minimize the number of sorting errors in this list of numbers, and the AI can delete the list, therefore getting you zero sorting errors, that's what

it will do, because as far as it knows, that's what you've told it to do. And I run into that all the time when I'm trying to get it to generate a new example of something. I give it existing examples and it says, oh, you mean more real Disney princesses, for example? I'm like, yes, I know I gave you Disney princesses, but can you imagine some new ones? Yeah, that's...

That is technically what I asked for. And then, of course, number five, related to that: AI will take the path of least resistance. So if there is a sneaky hack that it can land upon, then that is the answer you will get. So if it is easier to break physics and kind of slide along the floor than to figure out

how to use, like, complex legs and stuff in your simulation, then that's what the AI is going to do, and you'll get weird gliding robots in your simulation, you'll get silly walks and stuff, which can be entertaining and can also be incredibly frustrating. So that's, yeah. Do you have any favorite examples of that?

Oh, I mean, it's funny because early on, in one of my first research projects, we had a project on kind of creating 3D shapes. And while developing and debugging this algorithm, the shapes we were making were, like, chairs. And...

it wasn't quite working. So I got all these super weird, deformed, scary-looking chairs, which, I think, exemplified, you know, it's not smart enough, it doesn't really understand the problem. But I think it also kind of touches on AI weirdness, because I took screenshots of these chairs and shared them because they were so fun. And I also know you had this example in your TED Talk of

the 2D walker, where instead of learning to use its legs, it could modify its body and then just get super tall and fall over, which I think is a real classic. Yeah, and I love that one because, as you say, it is classic. So the first example I found, I think it was in one of Karl Sims's works, where, you know, this was discovered to be a way that

the walker would solve this problem. And then it happened again shortly before my book came out. David Ha was doing a similar kind of problem and didn't set the constraints at first on how large this thing was allowed to make its legs. And so, yeah, you end up with this hilariously tall,

single leg that this robot has built, and at the very beginning of the simulation it is balanced on top of it. All you see is a foot, and the rest of the thing is off screen. And then once you hit play, you get this really comically slow collapse, a tipping over of this giant structure, before it finally lands, many seconds later, at the finish line and sort of bounces slightly. Yeah.
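The "delete the list" hack from a few exchanges back is easy to reproduce as a toy. Below is a sketch, with all names illustrative: a random mutation search (loosely in the spirit of the genetic algorithms mentioned earlier) whose reward only counts sorting errors, so the search happily discovers that an empty list is "perfectly sorted".

```python
import random

def sorting_errors(xs):
    """The mis-specified reward: count adjacent pairs that are out of order."""
    return sum(1 for a, b in zip(xs, xs[1:]) if a > b)

def mutate(xs):
    """Allowed 'actions': drop an item (the loophole) or swap two items."""
    xs = list(xs)
    if xs and random.random() < 0.5:
        xs.pop(random.randrange(len(xs)))
    elif len(xs) >= 2:
        i, j = random.sample(range(len(xs)), 2)
        xs[i], xs[j] = xs[j], xs[i]
    return xs

best = [5, 3, 8, 1, 9, 2]
for _ in range(2000):
    candidate = mutate(best)
    if sorting_errors(candidate) <= sorting_errors(best):
        best = candidate

# Typically prints an empty (or nearly empty) list with zero errors.
print(best, "errors:", sorting_errors(best))
```

Nothing in the reward says "keep all the numbers," so deleting them is the path of least resistance: technically, exactly what was asked for.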

Basically, and technically, what you asked for. Exactly. Yeah. And it's kind of funny how, I think, AI researchers are super familiar with these principles, because half of what we do is trying to avoid these issues by being aware of them and working around them. So I definitely have seen a lot of sort of

cynicism about media portrayal of AI because sort of the view we have as experts is colored by knowing that AI is flawed and limited and often very hard to actually get to do what you want without cheating or doing these least resistance routes.

which most non-technical people don't get because of their kind of previous perception from science fiction or popular culture. So this effort to sort of make them understand is pretty interesting. Yeah, it is. It is an interesting challenge because, you know, we had AI and robots in science fiction

long before we had anything kind of resembling them in science. We still don't really have the kinds of AI that people think of from pop culture. And there's this huge gap, and not much in between, between, you know, what we have and this human-level-or-above science fiction AI. And then,

in our language, the word AI tends to be used for both, and that does create some confusion, I think, that some people capitalize on: they would like people to see one kind, or are happy for people to believe it's another kind. And doing science communication in that sort of environment, with that sort of setup, is definitely an interesting challenge. Yeah.

Yeah, it reminds me, I once presented a little talk about the limitations of recent AI accomplishments and sort of highlighting the intricacies of a problem and what made it easy and hard. And I kind of mentioned that these sorts of challenges persist in real problems like self-driving. And then one of the audience questions was like,

well, aren't we almost to level five fully autonomous self-driving cars? And it really took me aback, because, you know, this person was quite skeptical of my explanation that we are nowhere near that level and that there are all these challenges. But it's interesting how there's this discrepancy of perception. Yeah. Yeah. And people kind of get

fooled by the marketing hype or fooled by these lines getting blurred. I went to a presentation where the person giving it was convinced that Sophia the robot was real and that this was the current state of AI: that, you know, we have a robot citizen. What does this mean? What's the future of humanity? And did not get that this is

a puppet. And so I think, you know, trying to do science communication in that environment, yeah, there are all these complications. You get people who want to sell stuff to investors, who are amping up, you know, the capabilities of current AI. And that does make it a challenge too, for sure. Yeah, for sure. But,

hopefully, you know, now that AI is becoming a big deal, it does seem like there's more effort to delineate the science fiction and nonsense from what's actually the case. Yeah. I'm always interested when I come across those cases, too, of people kind of using AI in a story that is current-level AI as opposed to science-fiction level. So, you know,

one of my favorite examples is a novel by Robin Sloan called Sourdough, and Robin Sloan has done some work with AI before. And so written into it is a machine-learning-controlled robot arm that is doing realistic things, where, like, the threshold for progress and for what is groundbreaking, what is exciting, what's a challenge,

was very realistic and still made for an interesting story. But of course, there the robot arm is not really a character; you have to do that work with human characters. And yeah, I do have lots of science fiction stories I like where the robot's the character and we're firmly in science-fiction AI land. I'm always interested to find these examples kind of in that area in between, because there aren't many of them.

That's very true. Yeah, I've seen that myself as well. So getting back a bit from the broader culture topic to more of what's in the book: you know, a good deal of it is explaining kind of principles that presumably you became familiar with while working on AIweirdness.com.

But I was curious: the process of writing it must have been pretty labor intensive, you know, putting together a whole book is a lot of work. So I'm curious, sort of, if you discovered anything or had any insights that were new in that process? Well, you know, I did definitely learn a lot. And, you know, I had,

on my blog, up to that point, not very many words to work with. So I could not go into depth on how these things worked and how they learned. I think the chapter in which I really went into a kind of detailed thought experiment of how you would actually train a neural net step by step, and trying to keep that,

and have there be a neuron that did something, and add another layer, and have that still be understandable as a neuron that does something intuitive. For me, that was really an eye-opening kind of experience. I'm like, oh, I actually

get this at a deeper level than I did before. I mean, back when I was learning AI, I learned more about genetic algorithms than about neural nets, because not as many people were doing neural nets at the time. And of course, now people are combining them; that's interesting to see too.

Another thing that kind of surprised me that came up as part of my book research was just how much current AI is humans under the hood and how much the performance relies on crowdsourced workers or how much of what is billed as an AI application is maybe people remotely controlling something and how easy it is to...

you know, on a programmatic level, swap out a call to an AI for a call to, you know, a remote crowdsourced worker. And, yeah, just kind of seeing that pop up in real-world examples almost as I was writing the book: meal delivery robots where an article would come out and say, oh yeah, these are not autonomous robots.

Or maybe they're autonomous for six seconds at a time between waypoints that a human driver lays down. And to see that, yeah, I found that really fascinating and also disturbing, kind of in a labor rights and, you know, removing labor from your sight sort of way as well.

Yeah, that's definitely been an interesting trend. As we discussed earlier, AI has a lot of hype, there's a lot of marketing. And I guess from what I understand, companies brand themselves as AI to appeal to investors, but then to actually get AI to work, as you said, you just have people do what AI is supposed to be doing and then maybe eventually transition.

I believe you even wrote an article for Slate on kind of figuring out what is a human, what is an AI. Yeah. And I concluded, yeah, in many cases you can't figure it out, because if you're communicating with customer service and you don't know whether it's a robot, it is incredibly rude to mess up the interaction on purpose

if there is, in fact, a human on the other end of the line. So I concluded, you know, it may be possible to tell, but it may be unethical to do the experiment, in some cases, to find out the answer in any given interaction.

Yeah, yeah, it makes sense. One of the things I learned from the book and found interesting is this concept of giraffing, and coming across the various giraffe images was also pretty surprising. So I wonder if you can tell our audience what that is, conceptually.

Yeah, so, you know, giraffes have become a bit of a running joke in machine learning. And there are, you know, a few ways in which they've popped up. I think one of the more prominent ways is you get image recognition algorithms identifying things as being giraffes

far more often than there actually are giraffes in real life. Like, you give them a picture of an empty field or a fence or something, and you may get giraffe back as your caption.

And so that's partly a reflection of just how prone people are to take pictures of giraffes. It's kind of a relative of this idea of taking pictures of sheep in Scotland, where they're overrepresented in the data set of pictures that people take and post, as opposed to random scenes that you might see.

So having giraffes pop up there as kind of a running joke already, and then kind of playing with that: you know, can you throw a giraffe into the input to an algorithm and ask it about it? If you have a

chatbot that is supposed to have a conversation with you about what's in a room, can you start asking it about non-existent giraffes? And I found that with some of them, if you ask it how many giraffes are there, it will respond with a non-zero answer, because it's been

trained on the ways that humans answered that question, and humans didn't tend to get asked the question "how many giraffes are there?" when the answer was zero. So they popped up in that case. And then OpenAI also did a robot manipulating a Rubik's Cube, and one of the demonstrations that they used to show

that this thing was robust to environmental perturbations was what they called a plush giraffe perturbation, where somebody had a giant stuffed giraffe toy and had it gently nudge this Rubik's Cube that the arm was trying to manipulate. And, you know, it was the cutest thing ever. Giraffes are awesome.

Having them be a running joke in machine learning is, I think, a delightful thing. And so I knew I really wanted it in my book. I think at that point, too, I was really getting a lot of enjoyment out of asking that one chatbot about non-existent giraffes.
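Curious listeners can reproduce the giraffe probe cheaply today. This sketch assumes the Hugging Face transformers visual-question-answering pipeline and a ViLT VQA checkpoint, which are not the specific chatbot Shane was asking; empty_field.jpg is a hypothetical photo with no giraffes in it.

```python
from transformers import pipeline

# Load an off-the-shelf visual-question-answering model.
vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

# Ask a leading question about a giraffe-free photo.
answers = vqa(image="empty_field.jpg",
              question="How many giraffes are there?")
print(answers[0]["answer"])  # training-data bias often yields a non-zero count
```

Because people rarely asked "how many giraffes?" about photos with zero giraffes, the training data nudges the model toward a non-zero answer.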

Yeah, that makes a lot of sense. I do remember that plush giraffe from OpenAI and how delighted all the research community was to see that. Yeah, for all you researchers listening, please, you know, I'm sure your lab budget can accommodate a plush giraffe. You must get one. You must use it in as many ways as you can possibly think of in your publications so that I can cite them.

Okay, yeah, I'll be sure to let my collaborators know and try to do it myself. So I think that's a pretty good overview of the book. And again, for the listeners, there's a lot to it, so you can just look it up and take a look. But maybe as a last thing on it:

it's been a couple of years since, and I'm curious, sort of, having gotten to use technologies like GPT-3 or newer algorithms, do you think your view on AI has largely stayed the same, or is it continually evolving as you continue to play with these things? Mm-hmm.

Yeah, it's interesting you bring up GPT-3, GPT-2, because that is, I think, the only section of my book that I had to edit as it was going to press. Because that really did, you know, I didn't think somebody would chuck that much data and computing power at one of these, you know, text generating algorithms. And so I did have to update to say, okay, actually, you know,

Here are some examples. When you talk about coherence, I'm going to take now a larger view of what kinds of coherence you can have across a text, more than just sentence length, for example. But that was really the only thing where I, looking back, say, oh yeah, I had to reframe my thinking on that. Because a lot of the other thinking I had before held up.

In my book, a lot of the case examples are from the 1990s. A lot of the sort of general principles of machine learning and how to approach solving problems with machine learning and thinking about this

were things that you learn even with really simple programs and that remain true as these get more complicated, because we're still dealing with narrow artificial intelligence. We're still dealing with something that has a reward function it's supposed to be optimizing, and everything depends on how you set that up.

And so it actually does make me feel pretty good about the project to look back and say, no, you know, this is still a good overview. This is, you know, there are lots of details that have changed. There's new applications that have popped up. But in terms of these general principles of AI weirdness, I think that

I'm still feeling really good about those. And hopefully, for quite a few years still into the future, this will still be useful. Yeah. That makes a lot of sense. Having just read it, I definitely thought it held up. Even if we get, you know, crazy new things all the time, they're still kind of the same kind of thing. So,

I do wonder, though: in playing around with ever-evolving tools, I'm sure you've played with many models, you know, gotten results from many different algorithms, and you know to expect something weird. So do you find yourself kind of taken aback still sometimes by what you actually end up getting?

Oh, for sure. And that is one of the delights of working with machine learning algorithms like this. I have enough experience now to sort of guess what might work or what might produce something interesting, but I still do get surprised. I will say, well, surely I'll only get, like, really cliche stuff if I put this in as a prompt, and then I'm surprised. You know, actually,

coming back to pickup lines was one of these cases, where I figured, oh, now that we've got GPT-3, if I try to get it to generate pickup lines, it's just going to copy lists of pickup lines from online. Like, it's just going to be as terrible as human-written ones. And I really didn't want to revisit that experiment now that the algorithms had gotten bigger, and then was, well,

incredibly surprised and delighted when it turned out AI-generated pickup lines are still weird. You know, one of the lines from GPT-3 was: I'm losing my voice from all the screaming your hotness is causing me to do. Or another one is, you know: do you like pancakes? Yeah.

Or another of the pickup lines was, you know: I will briefly summarize the plot of Back to the Future 2 for you. Yeah, that surprised me. That really, truly surprised me. Yeah, and as you said, these are not the sort of things that get highlighted

by researchers. So it's nice that as these things are getting more accessible, people can actually explore and in some cases find the weirdness, and in some cases point out problems, like recently happened with Twitter's cropping tool. People just experimented, found there was a problem, and then that actually led to a solution. And being informed, I think, is partially why that's important. Yeah.

Yeah, for sure. And yeah, this accessibility is a big part of that. And I do think, you know, I've been seeing attitudes of the general public change; the amount of awareness, and especially awareness of algorithmic bias, is really increasing. And I think that's a really good thing. Definitely. Because as I'm sure you agree, you know, there is a lot of perception that

Yeah.

Yeah. So I think we've touched on a lot of things. One last thing that I thought to kind of bring up, zooming out from your own work: one thing you've covered in the book also is this aspect of human-AI collaboration, where the AI is really being used as a tool rather than just being run autonomously. And that's another area where I think

maybe the public doesn't quite see or understand that as much as is actually the case in practice. Even researchers building models, that's really humans in the loop. So, yeah, can you speak to your view on human-AI collaboration? Do you follow any sort of AI artists or other creative people that use AI as a tool? Yeah.

You're right that there is a temptation among some people to discount the human contributions to what is called AI-generated art, and to not realize how much human intent there is in every step of the way, from deciding what to

work with to, you know, the curation and presentation and crafting a story around what it is you have generated. And so, yeah, to see that acknowledged a bit more, I think is only going to be a good thing for artists and for our perception of what AI is capable of and how much, you know, creativity there really is and where that's coming from.

There are people doing some really interesting work.

I tend to follow a few AI artists; you know, the ones I like are the ones who are kind of remixing their own work or their own photography in interesting ways. So there's, you know, Anna Ridler, Helena Sarin. They're both doing really interesting things with remixing their own artwork and photography. There's

Tom White, who does interesting things with GANs that have very limited palettes with which to work. They're trying to basically build adversarial examples, or illustrate some kind of concept that's going to trigger a particular response

on an image recognition algorithm, but maybe they only have a few simple shapes to do that with. And so you get these very interesting, evocative abstracts. And so that's an example of, you know, you really have to set up your problem, like, to think of that as a problem and to set it up with the right sorts of tools, so that the AI has something interesting to work with, or an interesting subject to try to illustrate.

I really like the artists who give you some idea of what that process is and where the artist's contribution begins and ends. I think that for me anyways, really kind of enhances my enjoyment of what's going on.

Yeah, absolutely. And I also follow a lot of AI artists. So I really liked that section of your book. And also just seeing your own work as an AI humorist has been quite interesting.

So with that, I think, yeah, this has been a really interesting conversation about your work. I've enjoyed this a lot. Thank you again for joining us on the podcast. Oh, thank you so much for inviting me. And, you know, it's fun to talk to somebody who is working in this area. I will definitely be looking up your weird chairs. Fantastic. Yeah.

And for listeners, once again, we have been talking to Janelle Shane of AIweirdness.com. Just head on over to AIweirdness.com; most of the blog posts are pretty quick, pretty entertaining, so it's fun to take a look through. And then if you find it interesting and want to delve in deeper, you can, of course, look up her book, You Look Like a Thing and I Love You, which I do recommend. Even if you are aware of AI, it's pretty fun.

And with that, thank you so much for listening to this episode of Skynet Today's Let's Talk AI podcast. You can find articles on similar topics to today's and subscribe to our weekly newsletter at skynettoday.com. Subscribe to us wherever you get your podcasts, and don't forget to leave us a rating if you like the show. Be sure to tune in to our future episodes.