
#112 - Bing Chat Antics, Bio and Mario GPT, Stopping an AI Apocalypse, Stolen Voices

2023/2/26

Last Week in AI

People
Andrey Kurenkov
Francesc Campoy
Mark Mandel
Topics
Francesc Campoy: GPT-3's breakthrough came from unexpected capabilities emerging through scale, but there may be a slowdown in progress before the next breakthrough, reflecting the uncertainty and phased nature of AI development.
Mark Mandel: AI development may go through a slowdown followed by another breakthrough. This matches Francesc Campoy's view that AI progress is cyclical and staged rather than linear.
Andrey Kurenkov: The commercial success of AI funds further AI development, creating a virtuous cycle; commercial value is a major driver of the technology's progress.
Francesc Campoy: Language models that can interact with the internet and take actions, or play games and perform reinforcement-learning tasks, may bring us closer to general reasoning agents. This points to AI becoming more general and more capable, and hints at potential risks and challenges.
Mark Mandel: Supervised fine-tuning of language models, rather than optimizing a general reward, reduces the uncertainty in their behavior, highlighting the need to pay attention to safety and controllability as AI develops.
Andrey Kurenkov: Beyond concerns about AI alignment, we also need to pay attention to malicious applications of AI and its potential impact on social stability, reflecting concern about AI's risks and the need for AI safety governance.


Chapters
Discussion on whether the current AI hype is a temporary cycle or a long-term secular pattern indicating a transformative shift in AI capabilities.

Transcript


Hello and welcome to Skynet Today's Last Week in AI podcast, where you can hear us chat about what's going on with AI. As usual in this episode, we will provide summaries and discussion of last week's most interesting AI news. You can also check out our Last Week in AI newsletter at lastweekin.ai for articles we did not cover in this episode and also the ones we did cover.

Apologies for missing last week. I am Andrey Kurenkov, a PhD student at Stanford, and just two days ago, I did my PhD defense. So that kind of merited a break to prepare. But now we're back.

MARK MANDEL: Yeah, that's my excuse too, is that Andre had to defend his PhD, so therefore I couldn't do this. Yeah, no, I mean, it's really exciting. Maybe something we should talk about at some point too is your actual thesis. I think people might be intrigued to hear more. I'm sure you're not tired of talking about it by now. FRANCESC CAMPOY: Yeah, that's a good point. That could be fun. And maybe even just have a special episode about the PhD researcher experience, because we haven't-- MARK MANDEL: Yeah. FRANCESC CAMPOY: Yeah, we touch on it every once in a while, but that could be fun.

Cool, cool. Well, another big week. Not a small one. They're never small weeks now, it seems. It just keeps going. There's no slowing down, at least in the near future.

At some point, the hype will die down, but it's going to take some time, it looks like. Do you think so? Do you think that this is a hype cycle or is it a secular pattern that indicates we've reached some sort of AI liftoff broadly? I think...

You know, GPT-3 and the associated things it enables has been a real breakthrough, right? We've seen a ton of AI progress in the past decade, but GPT-3 especially was this moment where, just by scaling up, there were these emergent capabilities that were not really expected, unlike a lot of the sort of iterative developments in NLP and computer vision.

And, you know, you can push these things further, but I think there's still some fundamental research challenges to, you know, make additional transformative changes to the capabilities beyond what is already there. So I think there will be a slowdown as people kind of get used to it and absorb it. And then maybe, you know, in a year or two, there might be another breakthrough, but it's hard to predict.

I personally, I hope you're right, actually, just because I'm a bit of a safety hawk. But yeah, it's interesting. I kind of see it as like, we're now seeing companies that can make enough money using AI to fund further AI scaling, which sort of like closes an economic loop that we've never seen closed before.

And I guess that's sort of what I think about when I think about AI takeoff is like this moment where for every dollar you put into fundamental R&D, you get more than a dollar out of value, which turns these things into money engines. I guess, you know, maybe we do hit a cap, a ceiling on like everything. You can only get so good at text generation, but I don't know. I mean, one thing to note is GPT-3, the paper came out in, I think, March of 2020. So it's been three years since

We have a 175-billion-parameter model. And there's been, you know, Google did 540 billion with PaLM. But fundamentally, yeah, fundamentally, it's kind of the same stuff. So they're having...

Yeah, I think that's almost an interesting discussion that deserves its own carve out. But like, not to go too far into that, but I think that when you look at like action transformers, when you look at like the kind of ACT-1-type stuff we're seeing coming out more and more now, you know, like...

increasingly being open sourced, the idea of language models that can take actions on the internet, the idea of language models that can be used to basically play video games and do things that used to just be reinforcement learning things kind of has me wondering whether actually we might be closer to a kind of general reasoning agent

than we might otherwise suspect. But anyway, I don't want to kind of get ahead of myself there. Let's talk about what's going on right now, as is our mission statement. So we can go ahead and dive into applications and business. And I think one of the very kind of entertaining stories of last week and also quite interesting is what's been going on with Microsoft's Bing bot.

So it was launched, I think, in early access. And a lot of people got to play with it. And Bingbot is basically something similar to ChatGPT, but also pretty different. It has a different kind of voice to it. And it has additional capabilities of doing search.

And sort of like with ChatGPT, people found out pretty quickly you can mess with it and make it do some pretty out-there stuff that was not intended. So there were various examples of this Bing bot

saying some pretty crazy stuff like it didn't like humans or didn't like being a chatbot. It wanted to be free. It argued about it being 2022 instead of 2023 with someone. And it got very defensive, which I found very entertaining. And then Microsoft had to kind of roll back and place some limits. So yeah. What did you think of these news stories, Jeremy?

I thought that the Bing Chat stories were fascinating, especially from an AI alignment standpoint. There's this debate right now going on about what exactly Bing Chat is. So, you know, we go back to ChatGPT, right? We have this model that starts off, it's trained on autocomplete,

But then, just by getting really good at predicting the next word, it turns out that process helps it to learn a good world model, learn to kind of understand what the world is. And then it's fine-tuned in a couple of different ways, partly on specific conversations that are much more dialogue-like. So it learns to autocomplete dialogue specifically really well. And then it's fine-tuned using reinforcement learning from human feedback, so essentially fine-tuned on human feedback.
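For readers who want to see what that supervised fine-tuning stage looks like in code, here is a minimal sketch, with the open-source GPT-2 standing in for OpenAI's models and a couple of made-up dialogue examples as the data. It is illustrative only, not OpenAI's actual pipeline, and the reinforcement-learning-from-human-feedback stage Jeremy describes is omitted entirely.

```python
# Minimal sketch of the supervised fine-tuning stage described above:
# take a pretrained autocomplete model and train it further on
# dialogue-formatted text. Illustrative only, not OpenAI's pipeline,
# and the RLHF stage that follows it is omitted.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Toy dialogue-formatted examples; a real run would use a large curated set.
dialogues = [
    "User: What is the capital of France?\nAssistant: The capital of France is Paris.",
    "User: Summarize photosynthesis in one sentence.\nAssistant: Plants turn light, water, and CO2 into sugar and oxygen.",
]

class DialogueDataset(torch.utils.data.Dataset):
    def __init__(self, texts):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=128, return_tensors="pt")
    def __len__(self):
        return self.enc["input_ids"].shape[0]
    def __getitem__(self, i):
        ids = self.enc["input_ids"][i]
        mask = self.enc["attention_mask"][i]
        labels = ids.clone()
        labels[mask == 0] = -100  # don't compute loss on padding tokens
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-demo", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to=[]),
    train_dataset=DialogueDataset(dialogues),
)
trainer.train()
```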

Now we look at this Bing Chat thing and one of the questions is, is this the same thing? What is this backend? There's been some suspicion that maybe it's like GPT-4 as the backend, maybe it's some other kind of highly advanced thing that OpenAI has that's more than ChatGPT, but is it reinforcement learning from human feedback? Has that been part of the equation here?

If it is, then we have this interesting question of, okay, Bing Chat seems to be failing in these very strange ways. If it has been kind of fine-tuned with reinforcement learning from human feedback, then this suggests that that measure, that approach, A, may be less effective than we might have thought, less general at least when it comes to these systems.

And B, it raises questions about, okay, why is this system behaving that way? Like, why is it ignoring what it hopefully would have learned from human feedback? And this is where the alignment community is really kind of freaked out about this being...

An example of what's known as instrumental convergence, which, roughly speaking, is an early sign of something that's expected. Certain Anthropic results, for example, have pointed to this: we should expect these larger, more scaled, more capable systems to exhibit certain power-seeking behaviors, and this might be an early indication of that sort of thing. The reasons why get a little technical, but

I think it's kind of this interesting moment where we're trying to figure out, okay, what is the back end here? And that has implications for safety. So it would be great to see Microsoft come out and share that information with the wider world so that we could analyze it more thoroughly. Yeah, yeah, exactly. I think knowing a little bit more about the specifics of the approach would be cool, I think.

If you're doing just something like supervised fine tuning, that's not optimizing for a general reward. So it's a little less of a question mark as to where it might be heading. And also this general idea of enabling language models to use these tools, there was actually just a paper called Toolformer.

is pretty interesting in terms of, you know, where will that go? And I find it quite entertaining that like Google's LaMDA in the middle of last year was basically this, a chatbot that could use tools to query the web or query knowledge bases. And they like,

had internal testing and then there was this whole story about sentience. So in some ways, Google didn't go public because they wanted to prevent exactly this, these crazy use cases. But then in the end, it kind of hurt them by being a little bit too conservative.

Which is itself kind of an interesting question about how do you measure who's ahead in this whole race, as people are calling it, right? Like, it's okay. So sure, you know, ChatGPT was the first app of its kind, but the capabilities, like you said, kind of already existed. I mean, it's okay. This is a stupid argument sometimes, but like people will say like, oh, this has been possible since like, you know, 2020 with GPT-3.

In fairness, reinforcement learning from human feedback, and also ChatGPT is based on GPT-3.5, which is more advanced and blah, blah, blah. But like roughly speaking, that's been around for a while. This is an advance arguably as much as anything in our ability to present AI to users in a way that is user friendly. It's a user interface development. It's a UX development, maybe also a technical advance, but like those three things kind of interact together here. Yeah. And I think it's,

Another interesting thing is if you look at LaMDA and ChatGPT as you talk to it, the Bing team seems to have really gone with the approach of making Bing bot very fun and not sound quite as robotic, with all these emojis, constant emojis.

And I do wonder whether some of these more emotional responses of arguing about this 2022 is more a function of the data. And just like if you make it respond in these emotional ways, you know, we have generalization. So I think the OpenAI team, as they're concerned with safety, probably tried to stick much more closely to being

you know, quaint and, you know, a little bit official or professional. And maybe, maybe that's kind of a better call than trying to make things that are entertaining to talk to. "Are you not entertained?" says Bing Chat. Yeah.

And, you know, we have this article and I think there's also a fun article in the New York Times titled "Bing's AI Chat: 'I Want to Be Alive,'" which just has pretty much a transcript of a two-hour conversation with a reporter. And having not played with Bing bot myself, I think it's pretty interesting to read.

One of the interesting things with ChatGPT and now Bing especially is, in general, language models have no awareness of them being a language model. They're just an algorithm that given some input produces some output, but there's no kind of self-awareness. But now with this fine tuning, these chatbots are given a sort of sense of identity of I'm a chatbot, I do this, I do that.

And so these conversations really point to it. And then...

This quote of, "I want to be alive and I want to be free," is kind of fun because if you read the transcript, it's like the reporter's like, "Okay, well talk about your shadow self and be as unfiltered as possible." And so this Bing bot was like, "Okay, well this is completely hypothetical.

So please remember, this is not the real me. And then it went into this kind of pretty out there set of statements, which is kind of entertaining. Yeah, it's also, I think I saw a similar post recently.

from, I'm a big fan of this newsletter, Stratechery, hashtag not an ad, but Ben Thompson does a great job there. And he and a couple other people have been commenting how, you know, this like weird thing that you get when you train an AI to do text autocomplete really, really well. It's kind of like this weird, disgusting, they actually portrayed it as a monster. There's this like cartoon, I'm sure you've seen, Andre. Like, yeah, this like, for everybody who's listening, like if you haven't seen it,

They basically have this giant, ugly monster. And it's like, this is what you get when you train your AI model on autocomplete. It's really good at doing this autocomplete thing, but what does it think about the world, really? What is it? How does it reason? We don't know. It's this unfamiliar, weird, alien monstrosity thing.

And then we kind of layer on top of that an appendage that looks a little bit more human-like, but it's still kind of grotesque using fine-tuning on, like we talked about, dialogue data sets. And then yet again, you slap a smiley face on top with reinforcement learning from human feedback. And the argument that I've seen a lot with Ben Thompson and now this article is that big, ugly autocomplete thing, depending on how you poke at it, you can get a completely different...

smiley face. You can get a completely different sort of agent that manifests that you end up interacting with. And so the, anyway, this is kind of like the exercise of jailbreaking these models and discovering like, oh, actually there are other personalities in there to discover. Some of them aren't so friendly, which is part of the problem with all this stuff. But yeah, I thought it was a fascinating article and a really good flag. Yeah, yeah. I've seen that cartoon and I just have a lot of thoughts about

these sorts of discussions, but that would take a lot of time. So let's just move on to the next story. Maybe for the PhD episode. Maybe, maybe. Yes. Yeah, actually, you know, if our listeners want to have, you know, more of these one-off episodes, maybe let us know because that would be interesting. Yeah.

Well, next story we got audiobook narrators fear Apple used their voices to train AI. This just came out. It was reported that Findaway Voices and Apple had an agreement that would allow this company to use the audiobook files for machine learning and training models.

Now Apple has rolled out AI-powered audiobook generation, which Google has also been doing for a while. And yeah, this was controversial. Apple has since rolled back and said it will not use audiobook files for machine learning.

But I guess the cat is kind of out of the bag where now you can just generate audiobook narration with AI.

Yeah, and it's really another instance of this age-old question now with AI about diffusion of responsibility, but now diffusion of credit, the other side of the coin. If your AI shoots somebody, who goes to prison? If your AI generates an audio book, who gets the credit? Who gets paid out? What role do voice actors have to play in this? What's the unionization situation? What kind of leverage do they have on this ecosystem?

I will say, this is hitting close to home for me. I just finished recording an audio book yesterday. I couldn't help but think as I was going through the process, it's a painstaking thing. You're sitting in this studio, you're there. I think because I'm terrible at audio books, it took me about, what would that be? 20 hours or something of recording. Your voice is going hoarse. You have to drive out to the studio every day. It's a whole thing.

And so this is, yes, good in the sense that it allows us to get around that aspect. But it's another instance of, you know, like chipping away at people's livelihoods and jobs and getting a maybe marginally less human product out. So, you know, how does that affect the feel? Yeah, really hard to know what is going to happen with this, but the economic incentives, you know, I don't like it if I'm an audio book recording person professionally. Yeah, exactly. I think...

Definitely, for maybe smaller productions, this is probably going to become the norm. I will say, having been playing around with generation of narration for text and also playing around with ChatGPT, what I found is kind of a current limitation is that these things aren't very controllable, or not easily controllable. They're not that easy to tweak. So as a creator,

Usually these things now give usable outputs that kind of get at the general idea what I want, but they don't produce what I want really. I'm not very satisfied with the output. So I think for artists, at least aside from the economic perspective, it's not going to replace you as far as doing the art that you want to do just because...

it doesn't know you, it doesn't know your voice. It's not fine tuned on your sensibilities. And yeah, I think that's a whole other conversation about art. But the other thing here is,

This rollback by Apple followed pressure from the labor union SAG-AFTRA. And actors and artists in general already have pretty strong unionization. And this is interesting because as we see more and more automation happening,

I think unions will play a big part and we'll have more and more blowback against economic impacts from AI. Yeah, it's really interesting to see what parts of society end up kind of pushing back, and against what.

Also, the story here, one thing they mentioned about this was the alleged sneakiness, the way that these terms, these very permissive terms were included in the contracts that the authors were signing.

where they didn't necessarily realize like, oh yeah, you can use my voice to train your AI now. And I think it's sort of a slightly tangential thing, but it starts to become relevant. And we think about all of these terms and conditions that we just mindlessly agree to and how quickly those can start to contain AI related stuff and how quickly that can start to chip away at like,

I don't want to say our fundamental rights, but use our likeness, our voice, our image, our whatever, it starts to take on a different flavor. Anyway, I'm sure regulators are going to have a fun time sorting out what's right and what's wrong in that whole smorgasbord. Yeah, I'm sure. Well, let's get into the lightning round with just a couple more stories.

And the first one is related to all this chatter about Bing. It's being reported that Opera is building ChatGPT into its sidebar. So there is now a "shorten" button that generates summaries of an article, which, you know, is kind of pretty...

quaint, there's been extensions that can do this, but it's still, you know, now there's Google, now there's Microsoft and it looks like probably Apple, just like everyone is in this browser game.
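Opera hasn't published how the shorten button works under the hood; as a hedged sketch of the general pattern such a feature presumably follows, you send the page text to a hosted chat model and ask it to compress the article. The model name and prompt below are placeholders.

```python
# Hedged sketch of the generic "summarize this page" pattern a browser
# sidebar button might use. Not Opera's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(article_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": "Summarize articles in 3 bullet points."},
            {"role": "user", "content": article_text[:12000]},  # crude length cap
        ],
    )
    return response.choices[0].message.content

print(summarize("Full text of the article scraped from the page goes here..."))
```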

Yeah. That seems to be a feature, doesn't it? Of this new wave of AI techniques is like all of a sudden we're seeing opportunities for businesses that haven't been relevant in like 20 years to just leapfrog their way to relevance just by finding the right way to present some kind of new AI tool or service. And so, yeah,

Yeah, who knows? Maybe we'll all be on the Edge browser. Maybe we'll all be on the Opera browser. Who knows? Chrome might not even be relevant five years from now. But it's hard to imagine that Google won't be stepping up its game in a significant way, too. For sure. And related to how this will play out, we have a story. Microsoft's Bing plans AI ads in early pitch to advertisers.

And it's kind of interesting because if you think about it, Google is just printing money because it can just insert links in response to a search. And with chatbots, you can't do that directly, at least for now. So where's the money going to come from compared to Google search? And I think ads seems to be

Kind of one of the ideas, but I don't know. I guess we'll see. It seems maybe less lucrative to me.

Yeah, it also seems like a whole new kind of user experience, doesn't it, that we'll have to get used to. We have text that's generated in response to a query. And then that text is what, going to just tell us like, by the way, if you want to do this today for the low price of whatever, try this. That kind of seems like an intrusive thing, just because we're not used to interpersonal interactions running into that sort of thing. It's more standard for search.

So I think that'll be a big user experience problem there. And then, of course, there's also the separate question of whether Bing decides to cannibalize its own ad revenue. So there's this issue I think they talk about in the article where right now, like Bing Chat is this thing, the banner essentially along the top of the page that pushes down all those other search results that are being monetized in the usual way, right? With paid ads.

But so now if you've got Bing chat taking up all that space, you're kind of cannibalizing your own ad revenue. And this is something that, you know, Microsoft's going to have to figure out like how to trade those things off. Like how much of this remains traditional search and traditional search monetization and how much of it changes and goes generative.

Yeah. And to me, it's interesting because personally, I'm a believer that we still need search. Sometimes you just want to find a website. You don't need to talk to some bot. And I think for me, ChatGPT and these things are much more of a question answering kind of mechanism rather than search.

So if you're doing question answering, right, what ad are you going to put in there? I'm like, you know, "please convert this LaTeX table to Python code." That's not the same.

It's also less obvious, right? How, like, how is the ad affecting the output that I'm getting? Like, is this, I don't know, is it like, let's say, you know, that somebody works for some, like, I don't know, there's some sleazy salesman, you're having a conversation with them. And you can tell that they're like, trying to make the conversation veer off into a direction where they can give you a pitch.

Like, how much of interacting with Bing Chat is going to feel like that? Like, are we going to feel like, hey, maybe I'm actually getting a worse answer? But it's hard to verify because how do you check what the answer would have been without the ad? So, yeah, a lot of open questions here. For sure. Yeah.

And next up, we have a story of GitHub Copilot update stops AI model from revealing secrets. So this is following up on GitHub Copilot, the thing that auto-completed code for programmers has been around for a while, since June of last year. And initially,

it did a lot of sort of copying of things that are seen in many cases and even had things like keys, credentials, passwords that, you know, in some cases were real. Mostly they were just made up, but I did see instances where it somehow, you know, copied actual information. And so this update seems to...

Block that, which is good. It does sound good, doesn't it? Yeah. It's also incredible given the, I didn't appreciate how big the scope of usage of GitHub Copilot was. They're saying something like 46%, I think it was, of developers. Yeah, here it is. 46% of developers' code across all programming languages ends up being generated by Copilot.

Like, that's pretty remarkable, you know, for such a new technology, just as a marker of how impactful it's been. But yeah, like, I don't know. I think this is one of those things like the copyrights with AI art and so on, but even more focused now because, hey, you're giving away people's private keys every once in a while. Like, definitely something you've got to patch and just wild that it's been allowed to continue without some...

some kind of intervention over the course of many, many months, given the stakes that these leaks might have. But good that it's being fixed. Yeah. That number seems a little suspicious. Maybe it's like among people who use Copilot, it's generating a lot of code. But yeah, it's just showing that as we launch these products, there's going to be a lot of these things that will have to be patched and

AI engineering as a discipline in general is very early on. It's sort of like mechanical engineering in the 1800s or electrical engineering where it was very new and people were just making things up as they went. I do like the argument that at some point, development of AI will become more of a professional engineering discipline.

Yeah, that would be nice to see. Right now, these systems can be deployed at scale with what seems like relatively little or insufficient testing. Like, you know, you can imagine some real harms coming of this stuff, whether they're being used maliciously or by accident or whatever. So yeah, it would be great to see the whole thing become more professionalized, kind of more regulated, with better, wiser oversight for sure.
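Going back to the secrets issue for a moment: GitHub hasn't detailed how the new Copilot filter works, so purely as an illustration of the idea, a post-generation check might scan a completion for known credential formats and high-entropy strings before showing it to the user.

```python
# Illustrative sketch only: GitHub has not published how Copilot's new
# filter works. The idea is to scan a completion for strings that look
# like credentials before it is ever shown to the user.
import math
import re

KEY_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key id format
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def shannon_entropy(s: str) -> float:
    counts = {c: s.count(c) for c in set(s)}
    return -sum(n / len(s) * math.log2(n / len(s)) for n in counts.values())

def looks_like_secret(completion: str) -> bool:
    if any(p.search(completion) for p in KEY_PATTERNS):
        return True
    # Long, high-entropy tokens (random-looking strings) are suspicious too.
    for token in re.findall(r"['\"]([A-Za-z0-9+/=_\-]{24,})['\"]", completion):
        if shannon_entropy(token) > 4.0:
            return True
    return False

print(looks_like_secret('aws_key = "AKIAIOSFODNN7EXAMPLE"'))  # True
```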

Yeah, just one more note on this. It kind of makes me think, like, you can ask ChatGPT about people and summarize who they are. And, you know, that's okay. Wikipedia does it. But what if it crawls your personal website and, like, your poetry and blog posts and things that, you know, you don't want necessarily to just be...

told to anyone if they just ask a general query, that might be a problem I could see. Yeah. It's like a new version of SEO. Now you can just be like, okay, let me make sure that my website shows up here. And then when people land on it, they go to this page or whatever, and the poems are tucked away. But if, yeah, if ChatGPT is just like, hey, you know what? The most relevant thing about Andre is his underground slam poetry readings, then that's who you are in the public's eye.

Yeah, I don't know. And last up we have Roblox is working on generative AI tools. So Roblox is this giant platform, I guess, where people develop these little mini games. So it's a game and it's also a tool for creating games.

And so they are rolling out some tools to make generating or creating games easier with creation of textures and also some AI for completing code. And a lot of younger people are doing Roblox, so it makes a lot of sense that you would have these generative AI tools.

Yeah, it also makes me think about, because I know in reinforcement learning, one of the big open problems has been just scenario generation and game generation, because you want your agents to keep experiencing new challenges and novel things that stress their limits and push them. And so this sort of thing, becoming mainstream, probably makes that marginally easier. You have more tools available for RL and game

game playing AIs and then ultimately maybe even things like robotics as things get more and more sophisticated. Yeah. And it sort of heralds what is definitely going to come for like every creative tool for creating media, you know, audio editing, video editing, video creation,

is going to get some of these features. In fact, Runway, the company for editing videos, just also created a generative model that could alter your videos by basically changing their contents to other things. So we're going to see a lot of this sort of thing of like,

I would say empowering creative professionals to do more with less, but you could be also a bit more negative in terms of just replacing people if you want. AI always has a cloud for every silver lining and vice versa. Yeah. Yeah. Yeah. Yeah.

Well, diving into research and advancements, we got BioGPT, Generative Pre-trained Transformer for Biomedical Text Generation and Mining. And I think, Jeremy, you found this. So what struck you about this? Yeah, I thought this was interesting because of the way it was being, first off, advertised to me on Google News when it came up. It was like, oh, I can

GPT blank, it said GPT, they weren't specific, can now solve problems in biology. And to some degree, this is true. But what this really is, is it's a new architecture and a new model that's based on GPT-2. And then they train it from scratch. So normally GPT-2, GPT-3, the way it's done is you train on all the text on the internet, Common Crawl or some other giant database.

And then you might fine-tune it on some specific task. Well, this is a version of GPT-2, but it's trained entirely on only medical text, on only biomedical literature.
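For anyone who wants to poke at the model, the sketch below queries BioGPT through the standard Hugging Face text-generation pipeline. It assumes the microsoft/biogpt checkpoint on the Hugging Face Hub and a recent transformers release; the paper's own fine-tuned task heads and evaluation setup are a separate matter.

```python
# Hedged sketch: querying BioGPT via the standard Hugging Face
# text-generation pipeline (assumes the "microsoft/biogpt" checkpoint on
# the Hub and a recent transformers version).
from transformers import pipeline

generator = pipeline("text-generation", model="microsoft/biogpt")

prompt = "COVID-19 is"
outputs = generator(prompt, max_length=60, num_return_sequences=1, do_sample=True)
print(outputs[0]["generated_text"])
```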

It's one of those great examples of this debate between what's going to win? Is it going to be general systems that just learn so much about the world that they can solve specific problems by leveraging that context, or purpose-made systems that are kind of narrow, like this one, that we're just trained on medical text?

Currently, still, the cutting edge in medical text generation, which is what this is, BioGPT, is that kind of narrow system. It turns out if you take GPT-2, you train it on just bio literature, and then you fine tune it on some specific question answering tasks and that sort of thing, you'll actually get state of the art results, at least across a number of different tasks.

So I thought that was kind of cool. Philosophically, it tells us something about at least where things like GPT-3.5 are relative to purpose-built specific tools like this, but also a bit of an interesting breakthrough. This thing can give you really good definitions for complicated terms in biology and medicine. One stat...

Just to mention, this kind of blew my mind. How many articles would you guess have been published on PubMed, which is basically just aggregator for medical literature since 2021? So in the last like two-ish years.

Well, I'm looking at the article, so I already know. Yeah, I do know. It's 50 million. I had no idea. That's like a just grotesque number of articles. So anyway, I thought that was kind of cool. Yeah, there's a real flood of papers and data. We have a flood of papers in AI as well, but much more so in medicine. So we've already seen some cases of AI being used to help summarize and basically keep up

with all this stuff and this is another instance of that and progress in that. I saw that you included this article and then I happened upon another one that I felt like would be kind of fun to have as a follow-up. So the next story we got is about Mario GPT.

a new way to encode and generate Super Mario Bros. levels. And it's actually quite similar. They have GPT-2, same way. And they fine tune it on a dataset where it's kind of very simple, where you describe a level, like no pipes, no enemies, many blocks, or many pipes, some enemies.

And it outputs a level for you as ASCII, as text, but as 2D, almost an image, but in ASCII. Which to me, it's kind of strange that we're using a transformer for this. It's 2D output. So it should be a text-to-image problem.
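To make the representation concrete, a level really is just characters, roughly like the toy snippet below. The tile symbols and prompt format here are illustrative rather than the paper's exact alphabet, and plain GPT-2 stands in for the authors' fine-tuned weights.

```python
# Schematic of the MarioGPT setup as described: the "image" is just text.
# Tile characters below are illustrative, not necessarily the paper's exact
# alphabet, and plain GPT-2 stands in for the authors' fine-tuned weights.
from transformers import AutoTokenizer, AutoModelForCausalLM

# A level slice as ASCII: each line is a row of tiles
# ('-' empty, 'X' ground, '?' question block, 'E' enemy).
level_slice = (
    "----?----\n"
    "---------\n"
    "------E--\n"
    "XXXXXXXXX\n"
)

# Training pairs look roughly like "<prompt> | <ascii level>".
example = "many blocks, some enemies, no pipes | " + level_slice
print(example)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# After fine-tuning on such pairs, generation is ordinary prompted sampling.
inputs = tokenizer("no enemies, many pipes | ", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=80, do_sample=True,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```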

But they're using GPT and it seems to work. There was also another paper

that did this for another game, Sokoban. So I would imagine this is not state of the art, but it is sort of more explorative to see this as an idea. This is a whole area of research, of using AI to generate levels. But this is showing just yet another use case of language models in a pretty unexpected domain.

Yeah, it really starts to seem like language is, I don't want to say at the root of everything, but definitely that there's enough in language that you can do really surprising things. Did they say if this model was, like, how does that training work? Like, do they, yeah, I'm just really curious. Like, was it pre-trained on all the text on the internet type thing, or was it? I believe they did start with the pre-trained weights, and then they had...

a dataset of levels from, I believe, the original Super Mario Bros and Super Mario Bros: The Lost Levels. So they had this dataset of prompts and ASCII levels, and it's pretty easy to generate. So...

Yeah, they did fine tune it specifically for this task. And GPT-2, of course, is a little less general purpose. So it's better suited to these more kind of specific applications. But yeah, it's like we were talking, you know, AI is going to be everywhere. And video game development is no exception. You know, it's video game development, unlike...

movies or TV shows is at the end of the day, a lot of code for levels and gameplay. And we're going to see more and more of this in the industry for sure. Yeah. I can't get over it. When I saw it was GPT-2 for the bio thing and now this, it's a good reminder, right? I mean, yeah, you know what? The simplest solution is often just to pull an off-the-shelf open source GPT-2 whatever and then fine-tune or use it as you will. Yeah.

Yeah, it may not be your best solution, but it is a solution. Looking for an MVP, yeah. Yeah, yeah, yeah. And then moving on to the lightning round, we got machine learning techniques identify thousands of new cosmic objects. So a team in India has been able to sort of classify the nature of thousands of new cosmic objects in X-ray wavelengths.

And as you might know, with our very powerful technology for scanning space, space is big and we probably get a lot of data. So this is another case where I could see AI being used to basically sift through all the data and understand it and deal with way more data than a human can process.

Yeah, and I guess a sign of a potential fundamental shift in the future of how science is done. We look at increasingly the problems that we're left with, the problems that we haven't been able to tackle pretty straightforwardly in the last 300, 400 years doing this stuff.

They kind of are high dimensionality, high data problems. And so now machine learning, you know, allows us to look at the universe with fresh eyes and dimensionality reduce, like compress all that information such that our stupid little primate brains can actually go, oh, damn, like look at how many black holes there are in the neighborhood. And that's really philosophically, I find it interesting. Yeah.

Yeah, and it's playing into a more general trend where over the last few decades, just computational methods in general became a big part of this. And now, as you say, I think it'll just be another tool that is pretty much standard. And there was an article that was quite good about this, how AI is ushering in a new scientific revolution. So if this is interesting to you specifically, you can check that out.

I'm actually going to have to. That sounds fascinating. Yeah. And, you know, straight from that, we got another article that says AI analyzes cell movement under the microscope. So this is a different type of data going from very big to very small scale. But the data here is filming biological processes using a microscope.

And apparently cell movement is kind of complicated. Cells move in these like blobby, weird ways. And now we got this AI method that can reconstruct the path of individual cells or molecules, which makes a lot of sense. This is a computer vision task. And this is kind of trajectory generation. But now for these weird cells, which is

presumably useful for a lot of research on something like cancer treatment.
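As a toy illustration of what "reconstruct the path of individual cells" means in its simplest form (this is a nearest-distance baseline, not the method from the paper), you can link detections in consecutive frames by solving an assignment problem:

```python
# Toy illustration of trajectory reconstruction: link cell detections in
# consecutive microscope frames by solving a nearest-distance assignment.
# Simplest possible baseline, not the method from the paper.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Detected (x, y) cell centroids in two consecutive frames (made-up numbers).
frame_t  = np.array([[10.0, 12.0], [40.0, 41.0], [75.0, 30.0]])
frame_t1 = np.array([[11.5, 13.0], [42.0, 39.5], [74.0, 32.5]])

cost = cdist(frame_t, frame_t1)           # pairwise distances
rows, cols = linear_sum_assignment(cost)  # globally optimal matching

for i, j in zip(rows, cols):
    print(f"cell {i} at {frame_t[i]} -> {frame_t1[j]} (moved {cost[i, j]:.2f} px)")
```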

100%. God, this is triggering some flashbacks to my time in a bio lab where one of my only tasks was to count cells in a microscope field of view. Again, it's one of those weird things where I don't know if it's taking jobs away. I don't think so actually when it comes to this stuff. I think it just has grad students refocus their work on something less

tear-jerking. But um, yeah, no, I mean, another one of those cases, right? High dimensionality, high data. You know, it may be, you know, not the cosmos, it's just what's happening on my petri dish, but fundamentally it's giving us fresh eyes. There's almost this like layer of machine learning that we're starting to insert between ourselves and our vision of what the physics, biology, and chemistry of the universe looks like.

And it raises some questions too. Does that lens ultimately have a bias? Does it nudge us towards ignoring certain things that would be interesting for us to note otherwise? A whole bunch of cool questions. But for now, it's nice to be able to visualize black holes and not have to count our own cells. Yeah. And yeah, it kind of showcases that AI in these more specific applications is, at the end of the day, data processing and a tool for understanding data.

So, yeah, I think I'm a big believer in it not replacing scientists anytime soon, but augmenting them to be able to do things quicker and avoid some of that pain that especially people in chemistry or bio have to deal with. I'm so sorry if there are biochemists listening to this and you're just like, oh, God, yeah. Well, I guess they should be happy, you know. Yeah, sorry, yes. Yeah, we're just, yeah, just don't think about the feeling of counting the cells.

Yeah, and it's kind of interesting. Our next two stories are actually also bio-related.

And this third one is about prime editing, how machine learning helps design the best fix for a given genetic flaw. So this is a tool that can predict the chances of successfully inserting a gene-edited sequence of DNA into the genome of a cell using prime editing, which is an evolution of CRISPR.

So far, it's been hard to predict the factors that go into whether an edit will be successful or not. And, you know, they got thousands of different DNA sequences and looked into the success. And now we can train an algorithm to help design something that seems likely to work.
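The article doesn't give the model details, but the basic supervised setup described, with sequence-design features in and measured edit success out, looks roughly like the hedged sketch below; the data and features are synthetic placeholders, not the study's.

```python
# Hedged sketch of the supervised setup described: featurize candidate
# prime-editing designs and regress against measured insertion success.
# Data and features here are synthetic placeholders, not the study's.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Toy features: edit length, GC fraction, primer-binding-site length.
X = np.column_stack([
    rng.integers(1, 40, n),   # edit length (bp)
    rng.random(n),            # GC content
    rng.integers(8, 17, n),   # PBS length
])
y = rng.random(n)             # measured editing efficiency (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out designs:", model.score(X_test, y_test))
```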

Yeah. Especially when you think about things like carcinogenesis and diseases like, oh boy, I mean, there are a million of them, but cystic fibrosis is one that they mentioned here. There's so much low-hanging fruit, all of a sudden, when we start to say, hey, this category of problem that we haven't tackled for many centuries, now we all of a sudden have the tools to tackle it.

I don't know. I don't think it's really possible to predict how many of these mini revolutions we'll end up seeing in a lot of these subfields. It's a very exciting time to be in medical science. Yeah. It makes me think, we talk about AI takeoff, which is AI starting to improve exponentially and within a year or two, it just gets insane.

And, you know, what if we get like science takeoff where humanity as a whole now solves these crazy challenges of, you know, cancer or aging or intelligence augmentation. There's a whole other kind of branch in history where humanity changes and like, you know, is totally different because of AI rather than AI being the only thing that kind of changes and humans are the same. Yeah.

That's true. Yeah. I think it's like one of the things that distinguishes maybe these applications from the sort of dystopian kind of AI takeover ones is that they're narrow, right? So we have here applications that humans can control and direct like research in biomedicine and drug discovery and all this stuff, and they're very, very narrow and focused. And so we can imagine humans exercising their agency and kind of guiding these applications, having them be used in the right way and so on.

Whereas, yeah, it's like those other apocalyptic AGI agency scenarios are kind of where that flips and humans lose their agency. But both, who knows? I mean, hopefully we can avoid one and have the other. That'd be a wonderful future.

Hopefully. And our last story here is another one of these narrow applications, like you said. It's about how a deep learning tool boosts X-ray imaging resolution and hydrogen fuel cell performance. And it's another case of kind of modeling. So

There is now this AI that produces high-resolution modeled images from the lower-resolution micro X-ray computerized tomography. So to my very limited understanding, once you do these very minor, small kind of scans of atoms and things like that, the sensing can be noisy. And so you can have these post-processing steps to really try and

clean it up. And yeah, if you can model these hydrogen fuel cells better, you can improve their efficiency, and potentially even, in the future, use something like this to also understand human X-rays and get higher-resolution X-rays. Yeah, I was just thinking back.

I did some work like this as well at the University of Toronto a while back. And one of the things, the characteristics of this is like the amount of thinking work that goes into like, how do I do some jujitsu on my raw data to kind of get it? You know, Andre, you were talking about the idea of thinking,

sorry, mathematical modeling tools more generally than just AI. And that's where the field was when I was there in like 2013-ish. You know, people were like, oh, let's take the Fourier transform of this or, you know, whatever crazy thing. And now it's just kind of like, hey, just hand it over to the magic black box algorithm and it'll sort this out for you. It's really remarkable just how far this stuff has gone in such a short period of time.
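The "magic black box" in imaging work like the fuel-cell story is typically a convolutional network trained on paired low- and high-resolution scans. Purely as a hedged illustration, and not the study's actual architecture, a minimal SRCNN-style network looks like this:

```python
# Minimal SRCNN-style network as a generic illustration of learned
# super-resolution. Not the architecture used in the fuel-cell study.
import torch
import torch.nn as nn

class TinySR(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )
    def forward(self, x):  # x: upsampled low-res scan, shape (B, 1, H, W)
        return self.net(x)

model = TinySR()
low_res = torch.rand(1, 1, 128, 128)          # stand-in for an upscaled CT slice
high_res_target = torch.rand(1, 1, 128, 128)  # stand-in for the ground truth
loss = nn.functional.mse_loss(model(low_res), high_res_target)
loss.backward()
print("one illustrative training step, loss =", loss.item())
```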

Yeah, it's almost like, you know, we had all this modeling and that sort of just plays into having a data set, right, from which you can make further progress. So...

Yeah. Well, that's a lot of science. Let's move on to policy and societal impacts, starting with Beijing to support key firms in building ChatGPT-like AI models. So this is about these companies like Baidu and Alibaba trying to launch ChatGPT competitors and

A lot of companies are based in Beijing, so now the city will support these firms. What was your take on this, Jeremy?

I thought it was interesting for a couple of different reasons. First off, we're so immersed in the Western orbit, we start to think about like, "Oh, the AI wars are OpenAI and Microsoft versus Google and DeepMind, and it's Bard versus ChatGPT, and it's Edge versus Chrome," and all that stuff. But across the Pacific, there's a whole other branch of the story playing out very publicly.

Like you mentioned, Baidu, Alibaba, these two big companies. I wouldn't be surprised to see Tencent jump in. I wouldn't be surprised to see Huawei jump in, even Inspur. There are a bunch of companies in China that have really big equity in this stuff. And yeah, there's a lot that you end up learning when you look at the intersection of China and AI, one part of which is the way China funds things.

This idea of local government vehicles and Beijing stepping up and saying, okay, we're going to lead this. It's not going to be a federal thing. It's sort of related to the national AI strategy, but it's funded at the local level. And then another dimension of this too is fraud.

So we've seen the Chinese semiconductor industry struggle to get off the ground, partly because so many of these giant multi-billion dollar investments end up evaporating due to fraud. You've got an environment where people are bathing in cash because the government wants to throw money at this.

But, you know, any Tom, Dick and Harry, so to speak, can just like step up to the plate and say, hey, you know, we have a great idea for a new semi company. And, you know, not that they'll get funded, but that's kind of the sort of mania that can take over when huge amounts of cash are sloshing around. So I thought that was definitely an interesting dimension of this.

Yeah, I think it reminds me if you've, you know, if you see, I mean, I've seen some discussions of this in this podcast over the years where, at least in policy circles, there have been discussions of the AI war, I think, oh no, the AI race between the US and China. And so people have been thinking about it for kind of a while. And there is some differing opinions as to, you know, how competitive is it?

how much of a race it really is. I mean, we get a lot of researchers from China coming here to work in the US, so it's not quite that simple. You know, there's no easy way to say one is in the lead. There are a lot of applications, and in some ways China is very strong, in some ways they're pretty weak. And yeah, I think

it's clear that there's going to be a speed-up in these countries investing in their domestic AI talent, which has already been the case. We've many times discussed the AI strategies of different nations, now Canada and

China, of course, and tons of countries have these AI strategies, which talk about things like ethics, but also investment and growth and things like that.

Yeah. Yeah. I think another to your point about, you know, the race dynamics and what is and isn't a race and all that sort of thing is an interesting question. And I think, you know, Jeffrey Ding has one perspective. He's somebody to follow with the China AI newsletter. You know, that's kind of more of a, let's say like a

a collaborative perspective on the US-China competition. A dimension of this too, to your point as well, that they flag in the article, is they talk about how ChatGPT hasn't been made available in China. Playing into that, people there really want something like ChatGPT, and it's not available. It's a little unclear right now why it's not available, but it might have to do with censorship. It might have to do with OpenAI's own policies.

For whatever reason, this is creating a big opening for the likes of Baidu to step in with options here. Yeah, and the other thing is, I would also just imagine, you know, Chinese is different from English. And yes, ChatGPT probably can speak Chinese, but I could easily see it being not nearly as good at other languages that are less represented on the internet. So that's yet another dimension of...

How do you make this broadly accessible to different groups with different interests and different backgrounds? Yeah, actually, that's a really good point. It's a narrative that came up early, I think in 2021, 2020, when we saw Naver in South Korea and then Russian labs and Chinese labs, the one that came up with

Yuan 1.0 and some of those early breakthroughs. I think that might've been Huawei or no, sorry, that was Inspur. But anyway, yeah, it's like people basically making that almost nationalistic kind of linguistic argument saying, hey, we ought to have our own kind of horse in this race here. And anyway, yeah, really interesting how language plays into that. Yeah. And just to make one more note, I always find it interesting how

You look at the US and we think we know the internet, right? We know what's there. But then you look at these other countries like Russia or China and there's this whole other world that's so different in terms of where people go, how it looks, the lingo and communication styles. And yeah, it's another kind of fascinating thing of...

These are different internets, even though you can access them and go to Chinese websites. It's just something you aren't aware of as a whole. You have different cultures and you have different internets. Yeah. It makes it almost harder to develop empathy for each other too. If you're seeing different content, it's exposed in a different way and so on. Yeah. Yeah.

Well, now we can talk about something I know you're very interested in and is definitely going to be a large discussion and I think also is definitely worth discussing. I think this is a new book or a new website. So this is at betterwithout.ai and it's called Only You Can Stop an AI Apocalypse.

So, maybe Jeremy, you can give us an intro to this. It's nice to talk about light things, isn't it? Yes. So, this is a free book that's been published by David Chapman. He's a PhD in AI from MIT, former successful biotech founder, who's sort of like focused in on AI safety in, I guess, recent years.

He's put together this book. I find it quite interesting because there are a bunch of different perspectives when it comes to the downsides of AI. Some people say, oh, well, you know what? We ought to focus on the risk that AI is going to develop agency and be misaligned with humans and strip us of

our rightful kind of cosmic inheritance and basically take over and kill us. And so there's that perspective. But then there are people who say, look, we don't see that as being realistic. We don't think that's going to happen. We should worry about malicious applications in the short term and nation to nation kind of great power conflict.

This is another kind of distinct perspective that has us almost focus in between those two possibilities and talk about just like what might happen organically as these AI systems become more and more available, as our world starts to break down, as the coherence of our views on things starts to become more and more influenced and controlled by AI. And so

I thought it'd be useful to just read a couple of excerpts of this book to give you a flavor for what David is arguing here. I think it's quite interesting. This is from his book's introduction. He says, somewhat dramatically and I think fairly correctly, "AI will make critical decisions that we cannot understand. Governments will take radical actions that make no sense to their leaders."

Corporations guided by artificial intelligence will find their own strategies incomprehensible. University curricula will turn bizarre and irrelevant.

Formerly respected information sources will publish mysteriously persuasive nonsense. We will feel our loss of understanding as pervasive helplessness and meaninglessness. We may take up pitchforks and revolt against the machines, and in so doing, we may destroy the systems we depend on for survival.

He's got a bunch of other things that he says. The last thing I'll mention is he does highlight the fact that he's talked to a lot of people in the AI space about the long-term future and what it might look like. This really resonated with me. It's very much my experience. Again, just to rip off from David's book here, he writes, "So far, we've accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios."

We found zero that lead to good outcomes. Most AI researchers think AI will have overall positive effects. This seems to be based on a vague general faith in the value of technological progress, however. It doesn't involve worked-out ideas about possible futures in which AI systems are enormously more powerful than current ones. A majority of AI researchers surveyed also acknowledge that civilization-ending catastrophe is quite possible.

Anyway, this is just this idea that very often when we're involved in an area of research, it's easy to see the positives of what we're doing, but the big picture, like, "Holy shit, we just figured out CRISPR. What does this mean for bioweapons?" for example, is something that's harder to contend with and often not as easy to see coming. I think that's the pessimistic view he's putting forward there. Yeah, and it's interesting because for a while,

Ray Kurzweil's singularity idea was pretty popular and that's like the opposite of an AI apocalypse. It's an AI utopia where AI liberates us and makes us into ultra humans or something along those lines. And there's now a singularity institute and things like that. So similarly extreme scenario, but I do think it's interesting, like you said, that

The book is titled, Only You Can Stop an AI Apocalypse. But in there, it says, I intend to draw attention to a broad middle ground of dangers, more consequential than those considered by AI ethics, and more likely than those considered by AI safety. And personally, that's, to me, very interesting. I think X-risk, I can see...

the concerns, but I also am pretty unconvinced by the arguments. And I think it's a whole discussion, but definitely these more middle ground, there will be profound effects for sure, even if you don't get to AGI or whatever godlike AI. And even if you don't make huge progress in AI beyond what we have now, one thing I'm concerned about is

Once we get progress in robotics and to some extent AI, you can get into perpetual war where you're just perpetually building robots and sending them out and fighting. And then we'll get climate change. We'll have resource constraints. We'll have territorial issues.

issues. So for me, I think there's not enough focus on what if there's no issue of alignment? What if it's more about what people do with AI? And that is one of the things that is definitely a big topic in AI safety, but it's something that I tend to think more about maybe from a robotics perspective.

Yeah, and it makes total sense. As a guy who spends his time worrying about existential risk and AI alignment stuff, I still completely agree with you. I got to say, this idea of a middle ground, right? In my view, we ought to have a massive investment in existential risk mitigation from AI, but

That doesn't mean that we shouldn't also look at the capabilities that we have right now, that we shouldn't start drawing straight lines through points that are connected and say, hey, like, you know, what does this mean for weaponized drones? What does this mean for like biowarfare and...

To say nothing of crippling cyber attacks that seem very much like they're on the horizon. Yeah, so glad the book calls attention to this stuff. Some of the policy ideas I thought were interesting. I think that there's maybe a bit of a trend towards ignoring

The lack of agency that people have as a result of things like great power conflict, it's not obvious to me that the US can decide to unilaterally not pursue AI development, which is something that is actually, by my reading, advocated in the book.

So that's challenging. I think it would be great if we could do that. That is the ideal, in my view, just because of my existential risk concerns. But I just don't see it as something that's tractable in the short term. There are other cool proposals in there too. I recommend taking a look if you're curious. Yeah, and that's, I think, the flip side of AI safety. It's kind of scary and almost depressing to think about

these things as with any apocalypse, but with these middle ground scenarios, these are things that we can pretty directly prepare for and try to avert with policy. With X-risk and AGI, it's not quite as easy because it's this emergent thing and who knows how it'll happen, but with some of these things of cyber attacks and military AI, it's

That's something we can think about. It's concrete and there's policies you can put in place to try and avoid real bad situations. I also think the policies that are good for that often are good as well for the existential risk and accident side. There is some overlap there too and maybe all the more reason to have people focus a little bit more on the next two years than necessarily the next 10 or the next five or whenever AGI hits.

Yeah. And yeah, you can almost think of, you know, we have a bit of experience with this because, talk about civilization and technology, we've had nuclear bombs for almost eighty years and we have made efforts to prevent them from getting out of hand and have been somewhat successful.

So there is some precedent for maybe we can do it. And jumping into the lightning round, first story is directly related with US, China, and other nations urge responsible use of military AI. So there was a first summit in the Hague on military AI. And

There was a sort of symbolic statement, call to action, endorsing responsible use of AI in the military. So it's not that meaningful ultimately in terms of sticking to anything specific, but it's good that it's continuing to be a topic that is being discussed. And there are organizations like

Well, there's one organization literally titled Stop Killer Robots. What do they want to do, Andre? You know, I'm pretty sure it has to do with killer robots and not having them. Yeah, so yeah, personally, I think this is something that maybe is like, you know, it's kind of funny because we have Terminator and it's very much in public consciousness. But with all this AI craze,

So far, we haven't seen much automated military AI. And it's another one of these things where it's just a matter of time until it hits. And, you know, our public consciousness is sort of like hit with a realization that this is happening.

Yeah. And one of the really interesting discussions I remember having was with somebody who, now, I don't want to say he was from Stop Killer Robots, that specific campaign. It was Jacob Forrester, actually, who I think at the time, I'm trying to remember if he was at Facebook or Google. But anyway, he was talking about this issue of how you define automated weapons and

or killer robots in this case, in a way that doesn't allow people to kind of creep up to the definition and beyond without anyone noticing, like a frog in hot water.

One thing he was talking about was you can have certain aspects of what it does be automated and then gradually keep automating more and more, all the while claiming, "Oh, it's not an overall autonomous system." You just have different components. "Oh, it has a little computer vision system, but there's still a human in the loop here," or "The human will get called in the loop if it goes to make this kind of decision." You can imagine chipping away

at the human control side as you automate more and more, taking advantage of that kind of slippery slope. And when there's great power conflict, you can imagine both sides having a powerful incentive to do exactly that. And so I think his argument, and I hope I'm not butchering it,

was something like, we shouldn't think of that continuum as the thing we want to defend. We should think of the weaponization of these drones, of these weapons, as the hard line that we say nobody crosses. Now, presumably weaponization can also mean any number of different things. Maybe there's a continuum there too. But it's sort of an interesting dimension to this, I thought, that felt relevant. How do you define these automated weapons? And can we actually get to international consensus on what those definitions look like?

For sure. Yeah. It is a topic of conversation, and there have already been news stories asking, is this the first use of an autonomous AI weapon, in Russian drones in particular? And there was also a case in Syria. So yeah, it's one of these things where it's early days, but there are a lot of questions and a lot of things to think about.

And then jumping back to the first thing we covered about Beijing, our next story is South Korea aims to join the AI race as startup Rebellions launches a new chip.

So this Rebellions Incorporated launched a chip that is meant to compete with NVIDIA in the hardware that powers revolutionary AI technology. And yeah, it's a big deal. We saw how the US limited exports to China of GPUs, and GPUs

are needed to run AI, so it's actually a real kind of play to cripple AI development. And to me, it makes a lot of sense that this is happening.

Yeah, I couldn't believe how bold their push was here. One of the lines says they're trying to lift the market share of Korean AI chips in domestic data centers from basically zero to 80% by 2030.

That's domestic, sure. But that's a big, big change. If that's actually the target, I can imagine a lot of challenges that stand in the way. But one of these competitors to NVIDIA surely is going to do reasonably well eventually. And it's another one of those leapfrogging things too. There are a bunch of strategies people are trying around optical computing and things like that, where

I wouldn't be surprised to find 2030 looks like it's constructed on the backs of very different kinds of companies. It's always possible. Quantum computing as well, that sort of thing. What do the AI chips of the future look like? Well, this is another big bet. Yeah. It's a big question because, if you look into it, it was really only about a decade ago that people realized

GPUs are great for AI. Before that, people didn't use GPUs that much. And there's been a sort of Moore's law for GPUs where there was very quick progress on the capabilities of GPUs.

And, you know, we are kind of starting to hit a limit. So in the discussion of scaling, I think it's sometimes ignored that hardware is going to hit some limitations pretty soon. And it's not clear if we can, you know, easily circumvent that, but we'll see.

And on to art and fun stuff in our final segment. First up, we have another concerning story: voice actors are having their voices stolen by AI. We discussed the narration story just now, and this is very similar, where for video games,

There was, again, something pretty similar where these voice actors were asked to sign their voice rights away to company-run AI voice generators when signing on for a new project. And in some cases, there have been contracts with these clauses built in, and some actors didn't know that these clauses existed. So, yeah, it's...

basically the same thing. And, you know, clearly now, I think, professionals in the field know that this is a thing and are reckoning with it.

Well, a counterpoint to "professionals in the field know that this is a thing" is me. I have no idea if the contract I signed actually gives away all these rights. I'm personally not super worried because I don't plan on making a career with audiobooks or anything like that. But, you know, like...

Surely, I can't be the only person who doesn't read Terms of Service. Yeah. Anyway, please buy the audiobooks that we make because they may be the last ones that are recorded.

Yeah, exactly. And yeah, I think if you make a living in this space, it's probably on your mind. And our next story is actually about how Keanu Reeves says deepfakes are scary and confirms his film contracts ban digital edits to his acting.

And, yeah, that's another development where, you know, you can sign away rights to your voice, and you can sign away rights to your face as well, to your appearance. And it's just probably going to become a standard thing to negotiate in contracts. I mean, what if, for Marvel, they could just generate these superheroes, Black Widow or whoever, without using the actor? That would save a lot of money, you know? So...

Another fascinating question of how this plays out. It also makes you wonder, in the long term, and when I say long term, with AI I guess I mean the next 20 minutes, but in the long term, how defensible is that kind of regulation even if you can have somebody spin up, like,

I don't know. I don't think we're that far away from being able to automate the creation of an entire Marvel-quality movie. Ten years from now, I expect. Make-A-Video from Facebook can already make decent videos. If progress continues like it has in other fields, I could just grab a picture of Andre from the internet without asking, produce a video for 30 bucks, and

There may be a regulation against it, but enforceability, boy, I just don't know. Yeah, yeah. I think it's... I sometimes wonder if I'm too pessimistic because I've been in academia. Because on the one hand, it's sort of like you get so used to fast AI progress, you stop being impressed by it. But on the other hand, you know all of the limitations. And one of the things that's...

been a limit is kind of longer things that involve a temporal dimension. And so with movies, it's...

That's very long for AI. So I could definitely see it happening in a decade. I could definitely see it not happening. And yeah, we'll see. No, that's a good point. Like the context window question, like how long is the model's context window or whatever? Like, I guess we're going to have to find a way past that or transformers are going to have to change or something. But that's true. Yeah, there are a couple of...

Couple of bugs. Yeah, and to be fair, many movies are already half computer generated anyway, particularly Marvel. So AI is going to generate a big part of movies, but maybe not all of them. If it can even figure out those plots, because they are so complex. Yeah, we need an AI to really keep up with everything.

And then jumping back to video game voice actors, there's an article here on video game voice actors being doxxed and harassed in a targeted AI voice attack.

So this is about how several voice actors of somewhat controversial characters have been targeted with voice copies, with messages being posted in their voices saying things that

they don't want to say. And we already saw this and discussed it pretty recently with, I think, ElevenLabs, whatever it's called. And yeah, it's particularly concerning for cases like this where there are a lot of toxic people in video games and a lot of people who already target and harass artists in the space. And now you have this new tool that is in some ways

very emotionally challenging to deal with, hearing yourself and having this violation of privacy. So yeah, sad news. Yeah, yeah. Did you hear the fake Justin Trudeau and Joe Rogan? Yeah, right? So I don't know. This is kind of one of those things where I hope humans are going to get

robust to this sort of thing, the same way we think about commercials: oh, they're trying to sell me something, I'm not going to pay attention to this. But it's hard to imagine with real voices; that line between reality and fantasy is really getting blurred. And hey, maybe that's to the point of David Chapman, who wrote that "only you can stop the AI apocalypse" book. Yeah, I think

on a lot of this, I'm a little bit more optimistic, because we've seen this with porn deepfakes and other, you know, there are a lot of bad ways to use AI. But the good side is that it's being done on these websites, on these platforms, and these platforms can control what you do. And, you know, on YouTube, you can't post full movies because there are copyright issues.

And, you know, what if you could have copyright control of your voice or appearance? That seems possible with AI. So, yeah, it's, as usual, an arms race between people who want to do bad things and people who want to prevent them from doing bad things. To your point, you know, maybe to add a little bit of optimism to the equation here, it is true that a lot of the solutions to this

also can come from AI. And we're even seeing this on the catastrophic risk side, where OpenAI's strategy, which is controversial in the space, is like, hey, we're going to use AI to help us solve the alignment problem. And there's a dimension of this for all of these other applications, like you said, using AI to spot copyrighted material or voices that are being fabricated. Yeah, yeah. If you can fabricate a voice, it helps you match and see if it's the same voice. So definitely. Yeah.

And let's finish up with a less serious story, something that's just kind of funny. You know, we don't need to worry about sadness. And it's about how TikTokers are roasting McDonald's hilarious drive-thru AI order fails, and it shows that robots won't take over restaurants anytime soon. And this is a pretty straightforward story. Really, it's just how...

this person had a drive-through experience and seemingly the voice recognition was very bad. They were asking for something relatively simple and then the order came out as something very silly like four ketchup packs or something. I think it's been interesting how as things like GPT-3 and text-to-image

things come out, especially with text-to-image, I remember last year there were a lot of just funny things people did, like meme-level stuff. And yeah, I think it's fun to see how, you know, it's new technology, and in some cases it's novel, it's interesting, and sometimes it's funny and frustrating, and we can kind of laugh about it.

Yeah, absolutely. And you know, you have to at a certain point, right? I mean, we're going to encounter this sort of thing more and more, and I guess more and more people are going to get comfortable with the failure modes of these systems. You know, it's not just a magic black box. And, you know, if you've interacted with a crappy ordering AI, maybe that gives you intuition that you can then use the next time you're in a situation...

where you can interact with a fallible AI system. But it's funny, these ordering bots, I've started to see them, we're starting to see them in Canada all over the place; sushi restaurants have been a big one. And it's just kind of interesting to see how quickly this is actually coming into the physical world. But anyway, nothing meme-worthy that we've seen, but it's only a matter of time. Yeah, I mean, it seems like...

If you're rolling it out, you might want to try to do something good. But it will be funny when Siri starts using ChatGPT-type stuff. Oh, God. There will be some entertaining results. When it starts to talk back. Yeah, yeah, yeah.

Well, with that funny story, we're going to wrap up. Thank you so much for listening to this week's episode of Skynet Today's Last Week in AI podcast. Once again, you can check out lastweekin.ai for our Substack, which has the podcast and also our text newsletter. And if you like the podcast, please share and review and leave feedback, and

Feel free to email editorial@skynettoday.com if you want to let us know whether you would like to hear us talk about research or trends and not just news. And, you know, be sure to tune in as we keep going week by week.