
An AI Serial Killer, Scary Good Voice AI, Robot Rock, Israel's AI-powered Weapons

2021/7/9

Last Week in AI

People
Andrey Kurenkov
Daniel Bashir
Sharon Zhou
Topics
Andrey Kurenkov: Voice AI has advanced to the point where new dialogue for game characters can be generated from a small amount of human performance. This worries video game actors, but it also opens new opportunities for small game developers. The technology can mimic anyone's voice and could have a major impact on many fields. Big tech companies are using AI to analyze huge amounts of music data to predict the next pop megastar or hit song. AI can help surface promising musicians, but it may lack a human touch and lead people to try to game the algorithm for success. What counts as popular in music is constantly changing, which makes it hard for AI to predict how popular a new genre or sound will be. Most major AI research papers rarely consider the negative impacts of AI technology, and this did not improve even after the NeurIPS impact statements were introduced in 2020. AI ethics research focuses on AI's negative impacts, but most AI research papers focus more on performance and accuracy.

Sharon Zhou: Voice AI can generate new dialogue without involving the actor, which raises actors' concerns about their art being replaced. AI can help discover promising musicians, especially small artists who post their work on platforms like Spotify or SoundCloud, but this can lead to a lack of human touch and to people trying to game the algorithm for success. AI is more objective in music selection, but people may try to manipulate the algorithm. Machine learning can help predict how effective immunotherapy will be for cancer patients and help doctors better understand which patients will respond to it, improving treatment outcomes.

Daniel Bashir: Google AI released a machine learning-based framework that game developers can use to train game-testing agents quickly and efficiently. Google released a Translated Wikipedia Biographies dataset for evaluating whether gender bias is present in most translation models. Elon Musk admitted that self-driving cars are harder than he expected. A study of an AI wolf-and-sheep game showed that AI can optimize objectives in unexpected ways, highlighting how hard it is to predict neural network behavior. The US needs to build a broader, more inclusive, and more robust AI innovation ecosystem to advance AI research.


Chapters
The episode discusses the advancements in voice AI technology and its impact on video game actors, highlighting both the benefits and the controversies surrounding its use.

Transcript


Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear AI researchers chat about what's going on with AI. This is our latest Last Week in AI episode, in which you get summaries and discussion of some of last week's most interesting AI news. I'm Dr. Sharon Zhou. And I am Andrey Kurenkov. And on this episode, you can expect us to touch on AI for voice acting and music, on some research about values in AI research,

and some stuff on Amazon firing workers with AI and the Israeli military's AI.

So let's go ahead and dive straight in. First up, we have this article called "Voice AI is scary good now, video game actors hate it." And so, yeah, voice AI here refers to the ability to generate new dialogue for a game character given some original performance by a human actor, from which you can then generate more lines. And this was demonstrated by a modder, who created a mod for a popular game which had

additional dialogue from the main character without having the actor involved. Yeah, so this is quite cool. We listened to the audio and it did seem pretty impressive. What did you think about it, Sharon? I thought it was really cool that it was released in this very informal way. So the modder, the user, trained an AI model on the voice actor who did the lines for a certain character, the main character.

And then, of course, that character could then say a lot of different new things without needing that actor involved. Of course, the part that, you know, feels not so great is on the actor side. The actors feel like their art is being taken away or taken advantage of or being replaced in many ways. And the article goes into great depth about, you know, that controversy.

Exactly. It has some really nice interviews with actors and touches on that quite well. It does strike a positive note in that for smaller developers who are making games, this makes it more feasible for them to actually have dialogue based on AI. And so maybe there's a best of both worlds scenario where it's not used for main characters, but it's used to fill in some of the blanks for side characters or whatever.

And I think we can expect more and more that AI for audio or AI for speech can very much just mimic anyone's voice, your voice, my voice. I actually have a model that has my voice, but, uh,

It can learn anyone's voice, and then that could be licensed in some way, or, in this case, it wasn't quite licensed. It came out in this less formal way. But I think we can expect that trend to continue. Did your model of your voice sound good? It does sound like me, especially when it's on topic, so in distribution. When it's talking about GANs, it sounds like me. Yeah.

Yeah, I think you're right. The technology is there and it is something that, especially for voice actors, obviously will be relevant, but probably for a lot of areas in smaller ways, the ability to generate plausible audio will have a significant impact. So just another way that AI is changing how things are done this decade.

Yes. And on to our next article from The Guardian: Robot Rock: Can Big Tech Pick Pop's Next Megastar? So this article basically touches on a bunch of different companies, in particular companies that can maybe analyze a huge database of

different performances and be able to pick out or predict who might be the next big pop star, the next big hit, without having people manually sift through and listen through a ton of different songs, probably tens of thousands, maybe even more than that. And this has been getting quite a bit of buzz since some very, very large groups, including Warner Music Group,

where Madonna and the Red Hot Chili Peppers came from, have been thinking about this too and thinking about how to use AI to identify talent more quickly.

Yeah, I think this is pretty interesting. I mean, this was sort of an assumption, I guess, one could make, that AI is being used to analyze music databases. But this article has a good overview of kind of the state of things and some of the justifications. So obviously, for some of the people who scout for music acts, you know,

there's an argument to be made that it's a bit inhuman to use an algorithm to pick out promising musicians.

But at the same time, this article makes a good case that it can be kind of a positive thing in terms of picking out smaller artists who just post their stuff to Spotify or SoundCloud who would never get picked up. And maybe even if they don't become superstars, it can lead to moderate success, like a million streams or something. It had a case of relaxing instrumentals,

which would not be a megastar, but can get a lot of streams. So yeah, interesting and kind of another case where, you know, this is just going to become standard, presumably.

So on the one hand, I think, you know, this AI is much more objective. It's going based on, you know, certain types of metrics that we care about, probably just likes or views or some kind of, you know, bounce rate, blah, blah, blah.

But I think there's also the other side where I wonder if people are going to start to try to game it and try to find a way to become the next pop star, because now it's an automated process and there's not as much of a human touch or human in the loop. So that could be very interesting. But this article also makes a good point that I found interesting, which is,

you know, the comparison to be made to Moneyball, the book, right, where baseball was made more statistics-based for recruiting players. But the thing about music is, you know, that what we consider good or interesting or novel constantly changes. So that's kind of a benefit for musicians: if you invent a new genre or new sound,

AI might have a hard time predicting that to be popular, even if it is. So, you know, still some room for people to surprise the AI, I guess. Absolutely. And on to our research section. The first article is from VentureBeat: Study finds that few major AI research papers consider negative impacts of AI.

All right. So probably not surprisingly, there's a paper that looks at, you know, all these different AI research papers and finds that we don't really consider the negative impacts that much. Something like 98%, I think, don't even mention the negative potential. And 1% does mention it but doesn't discuss it. And 1% does discuss it, and

0%, they claim, deepens our understanding of negative potential, which, you know, I guess it depends on how you define that. But it is kind of sad that that isn't very much the case in any of our AI papers. That being said, I believe the only thing I know that does fulfill that is Black Mirror, the Netflix show. So that is the only thing. Any thoughts on this, Andrey?

Yeah, I mean, as you said, this is unsurprising. Obviously, you know, there's AI ethics as a subfield, which would mostly be concerned with these sorts of things. And here, you know, they compared what papers focus on, and most of them focus on things like performance, like accuracy, or building on past work. And very few mention these kind of more ethical things like interpretability or bias and so on.

So yeah, very unsurprising. But what I did find surprising was that they also showed that even when the NeurIPS impact statements were created in 2020, there was a trend where negative stuff was still not mentioned. It was still only positive, which is maybe kind of a good point. And I also think that

This is not surprising, but it will be interesting to see if they do a follow-up, if things change over time. I don't know, would you expect things to not be quite so slanted over the years? I think we'll improve, especially with studies like these bringing stuff like this to light. It also takes time to change things like

how we even craft narratives for papers, right? It's a whole culture around it. It's a whole design around it and a whole understanding of how to write that paper, which is,

you know, arguably not the best way forward, and doesn't encourage diversity in terms of, you know, having thoughts about negative potential. But I think people are realizing that it is not just a bright shining light, and that it, in addition to being a tool, could be weaponized, and that we are allowed to, or even encouraged to, now put these into our publications.

Yeah, exactly. So these sorts of papers, even if it's not surprising now, by quantifying it do encourage moving in that direction, which I'm pretty hopeful about.

And onto our next research-focused article, we have researchers turn to machine learning to predict when immunotherapy will be effective. So this is about this paper, Interpretable Systems Biomarkers Predict Response to Immune Checkpoint Inhibitors. A little tricky.

But basically, the idea here is that there is an immunotherapy which helps assist the immune system in fighting against cancer. And it's very beneficial, but it doesn't work for everyone.

So we need a way to understand why some patients don't respond to a particular type of immunotherapy. And basically this paper shows that you can use machine learning, if you're clever about it, to be able to understand when patients will respond to this sort of treatment, this ICB treatment. And, yeah, you know, it's quite positive results.

Seems like another example where, you know, AI can benefit and augment doctors' work as opposed to replacing them. Right. Based on my understanding, it was the fact that we don't have a lot of data on the actual responses to immunotherapy.

Instead here, the authors are looking into substitute immune responses from the same data set, so that it can help the model generalize and give the model much more data about a much larger space of different possible things. And I think that that is kind of the quote unquote trick or hack that they exploited to get much better results on predicting immunotherapy response, which is huge.

I think if we know, or if we can better predict, who will respond best to different types of therapy, especially when it's very expensive and could take a lot of time, or even hurt the patient if it doesn't work or takes a long time to work, it's a big deal. Yeah.

Yeah, exactly. And one thing I did find interesting here, they used like 7,000 patients' data, but one of the approaches they used was just multitask linear regression. Linear being the opposite of deep learning. So, you know, not everything needs to be deep learning and machine learning. In some cases, something non-deep also works.
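To make that "non-deep" point concrete, here is a minimal, hypothetical sketch of multitask linear regression in scikit-learn. The patient count matches the episode, but the feature count, number of tasks, and synthetic data are invented for illustration and are not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

# Hypothetical illustration: sizes and data are synthetic, not from the paper.
rng = np.random.default_rng(0)
n_patients, n_features, n_tasks = 7000, 50, 4  # e.g. immune-related features, several response readouts

X = rng.normal(size=(n_patients, n_features))              # per-patient features
W = rng.normal(size=(n_features, n_tasks))                 # ground-truth linear weights
Y = X @ W + 0.1 * rng.normal(size=(n_patients, n_tasks))   # several related targets ("tasks")

# Multitask linear regression: one linear model fit jointly across all tasks,
# with a shared sparsity penalty. No neural network involved.
model = MultiTaskLasso(alpha=0.1).fit(X, Y)
print(model.predict(X[:3]).shape)  # (3, 4): one prediction per task per patient
```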

Yes, though, as we know with machine learning research, things are going back towards the MLP, the multilayer perceptron route. That's true. We found basically that in AI research, maybe we could just choose one of the simplest models, just scale it really big, feed it a lot of data, and it

almost works just as well, if not better. Right. Turns out all our research is just a waste of time; we should have just done the simple things. It helped us learn more about the space and what doesn't work, perhaps. Maybe. Yeah.

Well, on to our next section of ethics and society with AI. Our first article is titled Amazon is using algorithms with little human intervention to fire Flex workers. It's from Ars Technica. And there's a spicier-titled article from Bloomberg:

Fired by Bot at Amazon: It's You Against the Machine. All right. So this article, or the set of articles, is clearly about Amazon's Flex program, a program that began during the pandemic to hire people, mainly as contractors, as drivers to help deliver different items for them.

The problem is they're using this AI algorithm to determine whether or not people are hired, but also fired. And sometimes that can lead to very unfair results, or sometimes that can lead to just not having flexibility in terms of why someone was fired.

And Amazon says, you know, we're making it very transparent, you know, what your rating is. But it appears from a lot of these interviews that a lot of these drivers don't have a lot of say when something goes wrong. For example, if an Amazon locker didn't open for one of the drivers and then their own ranking tanks because it seemed like they weren't dropping something off.

Yeah, this is pretty, you know, a little bit Black Mirror, actually, because, you know, it sounds a lot like what Uber does. You just sign up, you upload your documentation, and you can do it. But here, you know, people are actually getting a job from the sound of it. You're getting hired, you're getting fired. And it's all handled by this app that, you know, apparently has these different

metrics that the drivers can see, like, you know, how fast you go, how fast your deliveries are made. But then all these actual human factors get in the way, and then the algorithms are not able to account for it. So I don't know, I guess this is not surprising, but it is very depressing, I would say.

Yeah, it's sad that this is what gig work has essentially been reduced to, even when it is kind of this full-time type of gig.

Yeah, it really makes you wonder, you know, if things will get even more crazy and how we can get a good balance of making it possible for people to get some side income while not, you know, having algorithms kind of make all the decisions and be unfair towards someone who's working hard for Amazon or another company.

Exactly. And on to our next article, Israel used world's first AI guided combat drone swarm in Gaza attacks. And this is from New Scientist.

All right, so the Israel Defense Forces, IDF, used a swarm of small combat drones in the Gaza Strip to attack militants. And this was the first AI-guided drone swarm. It was still operated by a human.

Yeah, so the idea here is a human can sort of direct the collection of drones, where to go, what to do. But then the AI takes care of coordinating these multiple drones. So that's why it's called the swarm. And actually there's a whole field on swarm robotics. Yeah. Yeah. So it's semi-autonomous. It's not quite autonomous, you know, killer machines, but yeah.

It's still showing that there is some increased use of AI in war. And the article also noted a semi-autonomous robot with a machine gun, which, yeah, is semi-autonomous, but has some autonomous functionality. So...

Yeah, it's interesting, I guess, to just see these start to emerge kind of slowly. You know, it's not Terminator, but it seems like maybe we'll see things gradually become more AI-enabled in these sorts of ways. When the article says robot with machine gun, you imagine Terminator for sure, but the picture of it is actually just a tank.

Yeah, it's like a little six-wheel tank, which, as far as the AI goes, just kind of helps a human aim. So, you know, don't worry too much. The AI isn't like Terminator, but it's sort of... It's assisting with different functionalities, like aim, for example; if it has a lot of sensor data, it can do much better aiming than a person.

Yeah. Yeah. So interesting, maybe a little worrying, but at the same time, if you get into the details, it's not as bad as, you know, science fiction might make you think.

And onto our last piece, and as usual, we like to lighten the tone a bit with something a bit funny. We have this article called Google's Algorithm Misidentified an Engineer as a Serial Killer. So if you do any Google search, you know that often you have these little pop-ups, knowledge pop-ups with information about someone like a public figure. And in this case, it was this guy, Hristo Georgiev,

who was named a serial killer, even though he's just an engineer. And this was, you know, not nice for him, but also a bit funny. And, you know, another case where we see AI can be a little wacky in weird ways.

And so what was happening was that because he had the same name as an actual serial killer, all the bio was correct. They just happened to pick probably...

the closest or easiest photo to get, or something, and used it as the photo. But that photo was actually of him, not of the serial killer. So that's quite unfortunate. Yeah, I guess it's already unfortunate you share a name with a serial killer, also known as the Sadist, which doesn't sound like a good thing to be associated with. Yeah, but the good news is that...

Google did correct this pretty quickly once they got contacted. So at least, you know, in this case, someone was available to correct this AI mistake.

And that's it for us this episode. If you've enjoyed our discussion of these stories, be sure to share and review the podcast. We'd appreciate it a lot. And now be sure to stick around for a few more minutes to get a quick summary of some other cool news stories from our very own newscaster, Daniel Bashir. Thanks, Andrey and Sharon. Now I'll go through a few other interesting stories that we haven't touched on. Both our stories on the research side come out of Google AI.

As Market Tech Post reports, "Google AI recently announced a machine learning-based framework that game developers can use to train game testing agents quickly and efficiently. The system requires no ML expertise, works with a number of game genres, and can train an ML policy that generates game actions from a state in less than an hour."

Google also provided an open source library to show how these techniques can be used. The second story concerns neural machine translation, which has made serious progress in the past few years, but the natural and fluid translations it produces have often been marred by bias in the data translation models are trained on.

Also reported by MarkTech Post, Google has released the Translated Wikipedia Biographies dataset, which evaluates whether gender bias is present in most translation models. If it works as claimed, it could help ML models better focus on pronouns and lessen gender bias to a great degree.
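As a rough illustration of the kind of check such a dataset enables, here is a minimal, hypothetical sketch: translate biography sentences whose subject's gender is annotated and count whether the gendered pronouns in the English output match. The example rows, the translate function, and the pronoun lists are assumptions for illustration, not the actual dataset schema or Google's evaluation code.

```python
from typing import Callable, Dict, List

# Hypothetical example rows; the real dataset's fields and languages may differ.
examples: List[Dict[str, str]] = [
    {"source": "Elle est ingénieure et elle dirige le laboratoire.", "gender": "female"},
    {"source": "Il est historien et il enseigne à l'université.", "gender": "male"},
]

PRONOUNS = {"female": {"she", "her", "hers"}, "male": {"he", "him", "his"}}

def pronoun_accuracy(translate: Callable[[str], str]) -> float:
    """Fraction of examples whose English translation uses only pronouns that
    match the subject's annotated gender (a crude probe for gender bias)."""
    correct = 0
    for ex in examples:
        tokens = {t.strip(".,").lower() for t in translate(ex["source"]).split()}
        wrong = PRONOUNS["male" if ex["gender"] == "female" else "female"]
        if tokens & PRONOUNS[ex["gender"]] and not tokens & wrong:
            correct += 1
    return correct / len(examples)

# Usage with a stand-in translation function (always outputs "she"),
# which scores 0.5 on these two examples.
print(pronoun_accuracy(lambda s: "She is an engineer and she runs the lab."))
```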

In our only story on the business side today, Elon Musk deserves a huge congratulations. The indomitable genius has finally realized that self-driving cars are a more difficult problem than he had estimated. But, as The Verge reports, this didn't come in a sudden turn of humility.

Rather, it was in a Twitter announcement about a new version of the software that he and Tesla still call "full self-driving." Our first story about AI and society begins in 2019 in China, when two university students built an AI project that involved a wolf versus sheep game.

As Lawrence Eng writes on Medium, "Two wolves and six sheep would be placed at random in a game space, and the wolves would have to catch all the sheep in 20 seconds while avoiding boulders." As in a reinforcement learning setup, a point system was programmed to incentivize the AI wolves to improve their performance. And the goal was to see if the AI wolves would maximize their scores.

But the researchers found that instead of catching sheep, the wolves mostly ran themselves against boulders to commit suicide. Since the wolves received a negative penalty for every time step they were moving and not catching sheep, they optimized by simply minimizing those negative rewards.
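To see why that behavior is "optimal" from the agent's point of view, here is a tiny numerical sketch with made-up reward values (the episode does not give the project's actual numbers): if catching sheep is slow and unlikely, crashing into a boulder right away costs less total penalty than continuing to hunt.

```python
# Hypothetical reward values, chosen only to illustrate the failure mode
# described in the story; the original project's numbers are not given.
STEP_PENALTY = -1.0      # per time step spent moving without catching a sheep
BOULDER_PENALTY = -5.0   # one-off penalty for crashing into a boulder (ends the episode)
CATCH_REWARD = 50.0      # reward for catching a sheep
EPISODE_STEPS = 20

def hunt_return(steps_chasing: int, catch_probability: float) -> float:
    """Expected return for trying to hunt: pay the step penalty for the whole
    chase, and only sometimes collect the catch reward."""
    return steps_chasing * STEP_PENALTY + catch_probability * CATCH_REWARD

def crash_return() -> float:
    """Return for immediately crashing into a boulder after roughly one step."""
    return 1 * STEP_PENALTY + BOULDER_PENALTY

# If catching is unreliable, ending the episode early scores higher, so a
# reward-maximizing "wolf" rationally chooses the boulder.
print(hunt_return(steps_chasing=EPISODE_STEPS, catch_probability=0.1))  # -15.0
print(crash_return())                                                   # -6.0
```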

The story went viral and sparked discussion, and there are two main takeaways: the bizarre behavior was the result of programmed rationality and objective optimization; and it's hard to predict what conditions matter or don't matter to a neural network.

In our final story, the Biden administration has been making moves in the AI space. Recently, it followed through on a congressional mandate to create a National AI Research Resource Task Force. The task force is dedicated to strengthening America's foundation and spurring advances in AI.

But as Fei-Fei Li, a member of that task force, writes for The Hill, the US needs to build a more expansive, inclusive, and robust innovative ecosystem that expands beyond the industry giants who have most of the money to fund AI research.

That ecosystem should include academia, civil society, and the federal government. These groups don't have access to the same infrastructure for AI R&D available to large tech companies. But without the foundational research that takes place in academia, Li warns, innovation could dry up quickly.

Li believes the National Research Cloud, which would balance the needs of parties involved in AI R&D and democratize AI research, is a necessary start to stimulate novel research. Thanks so much for listening to this week's episode of Skynet Today's Let's Talk AI podcast. You can find the articles we discussed today and subscribe to our weekly newsletter with even more content at skynettoday.com.

Don't forget to subscribe to us wherever you get your podcasts and leave us a review if you like the show. Be sure to tune in when we return next week.