
AI for Shaming Politicians, the New AI Art Scene, DeepFake Phishing

2021/7/16

Last Week in AI

People

Andrey Kurenkov
Sharon Zhou
Daniel Bashir
Topics
Andrey Kurenkov: A Belgian artist used AI to build a project that publicly calls out politicians for using their phones at work. The project draws on publicly available video and image data and automatically publishes tweets pointing out these politicians' inattention. It is a novel use of AI that, while somewhat entertaining, also raises questions about AI surveillance and accountability.
Sharon Zhou: The AI-generated art scene is booming. Hackers have built new tools, such as VQGAN+CLIP, that let users generate images from text prompts. The images tend to have a surreal, out-of-the-ordinary style and show the potential of AI in art-making.
Andrey Kurenkov: Google released a dataset for studying gender bias in machine translation. It contains biographies of people in different languages and is designed for analyzing common gender errors in machine translation, such as incorrect pronoun use and gender agreement errors. This can help improve machine translation models and reduce gender bias.
Sharon Zhou: EleutherAI's one-year retrospective summarizes the organization's achievements over the past year, including releasing open-source GPT models, working with Google for TPU access, and its long-term goals in AI safety. It shows the energy and contributions of the open-source AI community.
Andrey Kurenkov: A Mozilla study shows that YouTube's recommendation algorithm still has problems, such as recommending extreme content, misinformation, and inappropriate material. The study used a crowdsourced approach, collecting user feedback on recommended videos, and revealed flaws in the algorithm.
Sharon Zhou: Although the article's title is a bit sensational, a survey by researchers at Microsoft, Purdue, and Ben-Gurion University shows that attackers could use AI-generated deepfakes for phishing attacks. This highlights the new challenges AI brings to cybersecurity.
Andrey Kurenkov: Elon Musk admitted that self-driving is harder than he expected, and Tesla owners trolled him over his missed deadlines. This reflects the challenges and realities of AI development.
Daniel Bashir: Facebook's AI team trained a robot that adapts to different terrain conditions in real time while walking. The robot learned to adapt quickly through trial and error and information from its surroundings.
Daniel Bashir: IBM Watson was used at the 2021 Wimbledon tennis championships to generate match highlights in real time, selecting clips by analyzing player reactions, crowd excitement, and match data.
Daniel Bashir: A Mozilla Foundation study shows that YouTube's video recommendation algorithm still has problems, recommending extreme content and misinformation; Mozilla is calling for stronger regulation and transparency.


Chapters
An AI system is used to identify and publicly shame politicians who use their phones during work, highlighting a novel use of surveillance technology.

Transcript


Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear AI researchers chat about what's going on with AI. This is our latest Last Week in AI episode, in which you get summaries and discussion of some of last week's most interesting AI news. I'm Dr. Sharon Zhou. And I'm Andrey Kurenkov.

And this week, we'll discuss some interesting things around AI-generated art really exploding, bias in machine translation, as well as YouTube's recommender AI still being a little bit of a horror show.

All right, let's dive straight in. Well, actually, the first article is titled This AI Publicly Shames Politicians But Don't Laugh Just Yet. So this is from the Next Web. And there is an entire Twitter and Instagram dedicated to this, which is the fact that a Belgian artist actually...

devised a new way to catch misbehaving politicians. He took basically video feeds of politicians or pictures of them and tagged the ones who were using their phones at work. Very interesting type of surveillance. Maybe arguably the type of surveillance that is okay. I'm just kidding. But definitely is able to point out, you know, potentially negligent politicians or ones who are looking at their phone. Yeah, it's pretty funny, right?

You know, obviously it's a bit of an art project. And if you go to Twitter, you can see it automatically publishes these tweets that say, Dear Distracted. And then it's the Twitter handle of the politician. And it says, please stay focused. And there's a video of them using their phone. And it works pretty well. So this reminds me, you know, there have been some AIs for tracking student, you know, focus. I think they've been proposed.

which are definitely worse in some cases. But yeah, this is a fun, you know, funny little project, not too serious and a pretty novel use of AI I would not have thought of. So it's neat. I definitely agree.
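For a rough sense of how a project like this can be wired up, here is a minimal sketch assuming an off-the-shelf object detector pretrained on COCO, which happens to include a "cell phone" class. The artist's actual pipeline isn't described in the article, and the tweet-posting helper here is hypothetical.

```python
# Illustrative sketch only: spot phones in livestream frames with a
# pretrained COCO detector. Not the artist's actual pipeline.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Faster R-CNN pretrained on COCO (torchvision >= 0.13 weights API).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

CELL_PHONE_CLASS_ID = 77  # index of "cell phone" in torchvision's COCO label list

def frame_has_phone(image: Image.Image, threshold: float = 0.8) -> bool:
    """Return True if the detector spots a phone in this video frame."""
    with torch.no_grad():
        predictions = model([to_tensor(image)])[0]
    for label, score in zip(predictions["labels"], predictions["scores"]):
        if label.item() == CELL_PHONE_CLASS_ID and score.item() >= threshold:
            return True
    return False

# Hypothetical usage over sampled livestream frames; tagging *which*
# politician is on their phone would need a face recognition step on top.
# if frame_has_phone(Image.open("frame_001.jpg")):
#     post_tweet("Dear Distracted ..., please stay focused")  # hypothetical helper
```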

And then onto something also neat, also kind of fun, but a bit different. We have next article, AI generated art scene explodes as hackers create groundbreaking new tools.

So if you follow AI at all on Twitter, you've probably seen this happen. Definitely, I've seen it a lot, which is basically there's a new way to generate artistic images using AI techniques. In this case, it's called VQGAN plus CLIP. So the basic idea is, you know, OpenAI had this model called CLIP, which can tell you how well an image matches up to some text.

So with that alone, you know, you couldn't really generate images at all. But people wanted to try it out because OpenAI also had an image generation thing. And so some hackers devised a way to basically combine these image generation GAN type things with CLIP.

And now anyone can just enter a prompt of text, like, I don't know, a flying unicorn above a volcano. And then an image that sort of does that, shows that, gets generated. And it's really surreal and really cool. So a lot of people have started playing with it, including myself. That's awesome. Yeah, I mean, I think it is really cool stuff, especially since...

OpenAI has not released their DALL-E model, uh, which directly enables, you know, entering text and then generating images. Because they released the weights of CLIP, people are, you know, melding together different models, you know, with VQGAN here,

to generate those scenes. And there are some really trippy things that I've seen. And it's really cool just how creative people can be. Um, someone showed like the clock emoji and this generated like lots of clock emoji like things. Um, it's very good at generating, I guess, like patterns or finding something that's highly patterned.

Yeah, it's quite interesting. DALL-E created these very sharp images. So famously, there was an avocado chair and it created these very kind of plausible avocado chairs. Whereas because this is kind of hacked together, you get these...

Kind of messy and almost always surreal and not quite proportionally correct images that turn out to be very interesting. They convey kind of the intention of a text, but in a very kind of, you know, you could say artistic or unexpected way. And yeah, you can do any prompt with text. One of them, as an example, is an abstract painting of a planet ruled by little castles.

And it produces something pretty cool. So yeah, this article is a good overview of that. There's a lot more to get into. You should check it out. It's really fun.
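To make the "glue a generator to CLIP" trick concrete, here is a stripped-down sketch of that optimization loop, assuming OpenAI's open-source clip package. For simplicity it optimizes raw pixels rather than VQGAN's latent codes, which is what the actual VQGAN+CLIP notebooks do, so treat this as the skeleton of the idea, not the real tool.

```python
# Sketch of CLIP-guided image optimization: nudge an image so CLIP
# scores it as a better match for a text prompt.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # keep everything fp32 so gradients flow cleanly

# Encode the prompt once.
tokens = clip.tokenize(["a flying unicorn above a volcano"]).to(device)
with torch.no_grad():
    text_feat = model.encode_text(tokens)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# CLIP's expected input normalization.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

# In VQGAN+CLIP this would be VQGAN's latent codes; here it's raw pixels.
pixels = torch.randn(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([pixels], lr=0.05)

for step in range(300):
    image = (torch.sigmoid(pixels) - mean) / std      # map into CLIP's input space
    img_feat = model.encode_image(image)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = -(img_feat * text_feat).sum()              # maximize cosine similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Swapping the pixel tensor for a VQGAN latent (and decoding it each step) is what gives the real results their characteristic patterned, painterly look.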

And related to that, there's a Berkeley blog post titled Alien Dreams: An Emerging Art Scene, which is referring to essentially using CLIP to generate art. And it's mentioned in the blog post that it's really great to have this out there because artists feel like they still have creative control over their art. And that even though they're putting in natural language input, it still feels like

They're wielding the words, so to speak, to generate that art. Yeah, yeah. This blog post came out a couple months before the news article and has a really good history of how this all came about and a bit of an explanation of how it works and a lot of examples of images as well. So probably both of these are good reads to get a primer.

And then, yeah, I got into it myself. I found this little Colab notebook. It's really easy to get started. You know, there's instructions. You don't need to be technical at all. And it's really easy to mess with. And I've certainly been having fun. So if that sounds interesting, you know, take a read.

But onto some slightly more serious stuff with our research news articles. First up, we have Google AI introduces a dataset for studying gender bias in machine translation. And yeah, it's kind of what it sounds like. Google created this Translated Wikipedia Biographies dataset,

which is very specifically designed to analyze common gender errors in machine translation, which include incorrect gender choices in pronoun drop languages, incorrect possessives, and incorrect gender agreement. So basically, this has a data set of biographies in different languages that treat gender differently. So we know, for instance, there's a lot more gender issues

in some languages like Spanish versus English, where you maybe don't use gender in some verbs or various cases. So here, yeah, they basically have a bunch of biographies of people,

And then there are also some rock bands and sports teams. Oh, and they've been professionally translated from English, which is interesting.

And so, yeah, this data set, because you have professional translations that don't have errors like pronoun drop and gender agreement, you can then analyze machine translation models to see if they have these errors. So very specific to machine translation and gender bias. Actually, it's a little hard to understand if you haven't thought deeply about these issues, it seems to me.
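As a toy illustration of the kind of automated check such a dataset enables, the sketch below counts cases where a model's English output disagrees with the professional reference about pronoun gender, the classic failure when translating out of a pronoun-drop language. The translate callable and the exact mismatch criterion are assumptions for illustration; Google's actual evaluation is more involved.

```python
# Toy gender-mismatch check against professional reference translations.
import re

MASCULINE = {"he", "him", "his", "himself"}
FEMININE = {"she", "her", "hers", "herself"}

def pronoun_genders(text: str) -> set:
    """Which gendered English pronouns appear in a translation."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    genders = set()
    if words & MASCULINE:
        genders.add("masculine")
    if words & FEMININE:
        genders.add("feminine")
    return genders

def count_gender_mismatches(pairs, translate) -> int:
    """pairs: (source_sentence, professional_reference) tuples.
    translate: the MT system under test (a hypothetical callable)."""
    mismatches = 0
    for source, reference in pairs:
        if pronoun_genders(translate(source)) != pronoun_genders(reference):
            mismatches += 1
    return mismatches
```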

Right. And I think it's really important that they're bringing this to light and working on it. The fact that it's around translation and just gender bias specifically makes it feel constrained. It's not a small problem, given the scale of it

and what it could influence, but I like how it is contained. Because we have this data set and maybe some benchmarks around it, it makes me feel like we can actually move the needle on it. And it is specific enough that I feel like we can get there. Yeah, that's exactly what I thought. You know, machine translation is super common. You know, it's everywhere. Every big company has it. Well, a lot of big companies have it.

And so you can easily see how this becomes something standard for their deployment, where they can now use this to analyze for some very specific types of errors, which are hard to catch without, as you said, something very specific to these types of errors. And obviously, this is pretty important because it's a major feature of different languages. So yeah, I think it kind of points the way to how we might be able to actually test this

you know, AI models in deployment, maybe by honing in on the major types of errors. Right. And onto something way less serious. The next article is What a Long, Strange Trip It's Been: EleutherAI One Year Retrospective. So EleutherAI is the nonprofit organization that's been putting out these open source GPT models, namely GPT-Neo and, recently, GPT-J,

The latter being 6 billion parameters, and they're trying to move towards actually 1 trillion parameters and exceed OpenAI's GPT-3.

And this is just a blog post of their retrospective of what happened this past year, all the crazy stuff that happened because it was so crowdsourced and their Discord channel really blew up. And then they got use of a ton of TPUs from Google. And just there's so much behind this massive effort that has now made these large language models open and available to the public.

Yeah, I've been loosely following EleutherAI, so this was a very interesting read to find out more. Also a very fun read. It's written in a very non-corporate way with a lot of memes and silly internet speak. So on top of being interesting, it's also quite fun and almost funny in some cases.

I learned quite a bit. So as you said, they have released some GPT-3 type models, but they also are involved a lot in creating art. So VQGAN plus CLIP, as we discussed just now, they mess around with a lot on their server. That's a lot of what their Discord is about. And they have some long-term goals about AI safety I wasn't aware of.

So yeah, very interesting read and quite a fun read. I would recommend looking it up. Did any tidbit from the retrospective stand out to you, Sharon? Honestly, the memes.

The memes are hilarious, so I really encourage you to go check it out. I've also been following them a little bit, maybe more. So I feel like I kind of know what's going on. I haven't been on their Discord as much. I realize I actually had signed up back in the day. But I have been playing with the GPT-Neo models quite a bit. Yeah. Yeah, the memes do kind of make this worth reading, even if you know the story.

And onto some more serious stuff. We get to our section on ethics and society, basically real applications of AI out there in the real world. And first up, we have this article from TechCrunch. YouTube's recommender AI is still a horror show, finds major crowdsource study.

So famously, one example of kind of possible negative impacts of AI that may be unintentional is YouTube's recommendation algorithm, which has been found to do various things, like

recommendations that make someone more extreme, uh, to, you know, show them angry kind of views from a particular viewpoint, so they can be more extreme towards, I don't know, a certain group, a certain belief system, et cetera. And there have also been examples of the recommendation engine, you know, um, doing really weird stuff with children, making them see quite, uh, I don't know,

weird things and maybe inappropriate things. And so in this study, Mozilla had a crowdsourced approach. They allowed people to install a browser extension called RegretsReporter, where people could kind of flag YouTube videos that were recommended to them and say if they had a regret about it, for things like COVID-19 fear-mongering, misinformation, or inappropriate children's cartoons. And

Short story, the study found that the video recommendation AI is still really bad and there are still a lot of problems. So yeah, it's good that Mozilla is really keeping an eye out and pointing this out because obviously this is pretty bad. I think it's really funny that Mozilla is pioneering this research.

Because obviously Google's not going to publish something like this, as we know. But Mozilla definitely can and is almost incentivized to. So great that this is out there. It is sad that this is still the case, since I know the news broke of this a while back, a long time ago actually. It was one of the first, I would say, AI problems that we've seen. And the fact that it's still not fixed...

to a really big degree is concerning. So unfortunately, this is still a problem. And I wish people could have a bit more say over the algorithms themselves. Exactly. Yeah, there's a lot of interesting stuff here. It's quite a long article and the study itself is quite detailed. One detail that I found interesting is that

They have this metric that videos that have these sort of regrets attached to them have a full 70% more views per day than other videos. So, you know, there's a common argument that the problem is YouTube optimizes for engagement, which selects for really, you know, extreme and misinforming content that gets clicks. And yeah, this study supports that.
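As a back-of-the-envelope illustration, that 70% figure boils down to comparing average views per day between reported and unreported videos, something like the toy sketch below; the field names and aggregation are hypothetical, and Mozilla's actual methodology is more careful.

```python
# Toy version of the "regretted videos get more views/day" comparison.
def views_per_day(videos):
    """Average views per day over {'views': ..., 'days_online': ...} dicts."""
    rates = [v["views"] / max(v["days_online"], 1) for v in videos]
    return sum(rates) / len(rates) if rates else 0.0

def regret_lift(reported, others):
    """Relative lift; 0.70 would mean 70% more views/day for reported videos."""
    base = views_per_day(others)
    return (views_per_day(reported) - base) / base if base else float("inf")
```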

It's really interesting and presumably also a good study to put some pressure on Google to change it. And onto our next article in ethics, Attackers Use Offensive AI to Create Deepfakes for Phishing Campaigns. This is by VentureBeat.

So, Andrey and I chatted about this. This is a little bit of a clickbaity title, since there aren't clear, very specific examples cited in the article. But they do mention the fact that deepfakes are becoming more rampant, both in images and text, and that bots making phishing calls are going to be much more effective.

Yeah, this is kind of interesting because, as you said, there's nothing happening. There are no attackers using offensive AI. So the title is misleading. But it is about this recent survey published by researchers at Microsoft, Purdue, and Ben-Gurion University, with others, so it's a big group,

that explores the threat of this kind of offensive AI. And in a sense, this is good, right? Because in cybersecurity, you want to be prepared for any potential attacks ahead of time. It could be that this is actually a good thing that they thought about it ahead of time. And kind of interesting, I think, that now there's this intersection between cybersecurity and AI here, which I haven't really seen before. I mean, presumably there's been some research, but

This seems to be a very big study for an actual real world impact that might happen. And to cap things off, we have our usual final article selection, which is something pretty much funny, not very serious, not very impactful. So here we have our funny article. Elon Musk admits self-driving is harder than he thought as Tesla owners troll him over missed deadlines.

So we've discussed this quite a bit, and I think a lot of people know this, that Elon Musk has continually made claims as to when self-driving will be accomplished. So I think he had a lot of predictions. You know, I think in 2015, it was like a couple of years. Then it was supposed to happen in 2018 and then in 2019.

In recent years, he got a little more careful, but he's still making these claims as to when things will come out, even this year, saying it'll happen in June and then it happens later.

And so, yeah, the funny part here is that this article has some examples of how Tesla owners kind of made fun of him, with some statements like renaming their account to something like "two weeks," having some quotes. You know, it's pretty funny, and it's something I think I've been annoyed with as far as these claims go.

Yes. So, I mean, I guess it's well known that he doesn't meet deadlines. But I will say, that being said, he does actually execute. Like, he does actually get to what he promises, just not in nearly the right timeframe, which is...

something I still appreciate, because I think there are a good number of people who claim they'll do something and never do it. And I think, like, in that case, they generally don't set a deadline either. So I still commend him for execution and actually getting it done eventually, even if it is on a much longer timescale.

Yeah, there's something to be said about, you know, shooting for the moon and then not quite hitting it. So as you said, I also agree that, you know, it's a minor thing as to these predictions, but the actual progress has been very impressive. Then again, one funny thing here is the story is partially because of this response he had.

So yeah, he said to this journalist, basically generalized self-driving is a hard problem as it requires solving a large part of real world AI. I didn't expect it to be so hard, but the difficulty is obvious in retrospect. Nothing has more degrees of freedom than reality.

So a lot of people kind of made fun of him because the difficulty is obvious in retrospect part. A lot of people are saying his predictions were ridiculous from the outset. So it's kind of funny that now he's catching up with the common view among people in AI, many people in AI. But yeah, kind of just a funny little thing, not that big a deal.

Right. And that's it for us this episode. If you've enjoyed our discussion of these stories, be sure to share and review the podcast. We'd appreciate it a ton. And now be sure to stick around for a few more minutes to get a quick summary of some other cool news stories from our very own newscaster, Daniel Bashir. Thanks, Andrey and Sharon. Now I'll go through a few other interesting stories we haven't touched on. First off, one story on the research side.

Together with researchers from Carnegie Mellon and UC Berkeley, Facebook's AI research team taught a robot how to adjust to conditions in real time while walking. The robot, created by Chinese startup Unitree, adjusts its gait as it moves through different terrain like stones and stairs.

As CNET reports, researchers tested the robot's balance by pouring oil on plastic to create a slick surface and dropping weight on the robot's back. Each time, the robot recovered and continued forward. One of the researchers said the robot learned how to adapt quickly through trial and error, as well as information from its surroundings. This robot doesn't have computer vision.

so it learned to navigate from how its body reacts on different surfaces, much like how a human might. The researchers trained the robot in a computer simulation before testing it in the real world. They called their breakthrough "rapid motor adaptation."
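For readers curious what that recipe looks like in code, here is a condensed sketch of the two-part setup described in the rapid motor adaptation work: a base policy conditioned on a latent estimate of the terrain, plus an adaptation module that infers that latent from recent proprioceptive history instead of vision. The dimensions and layer choices below are illustrative assumptions, not the authors' implementation.

```python
# Condensed, illustrative sketch of the rapid motor adaptation idea.
import torch
import torch.nn as nn

# Illustrative sizes: proprioceptive state, joint commands, terrain latent,
# and how many recent timesteps the adaptation module looks back over.
STATE_DIM, ACTION_DIM, LATENT_DIM, HISTORY = 30, 12, 8, 50

class AdaptationModule(nn.Module):
    """Estimates a terrain latent from the recent (state, action) history."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(HISTORY * (STATE_DIM + ACTION_DIM), 256),
            nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )

    def forward(self, history):  # history: (batch, HISTORY, STATE_DIM + ACTION_DIM)
        return self.net(history)

class BasePolicy(nn.Module):
    """Outputs joint targets given the current state plus the estimated latent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + LATENT_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, ACTION_DIM),
        )

    def forward(self, state, latent):
        return self.net(torch.cat([state, latent], dim=-1))

# At deployment there is no camera and no terrain label: the robot "feels"
# the ground through its recent history and re-estimates the latent each step.
policy, adapter = BasePolicy(), AdaptationModule()
history = torch.zeros(1, HISTORY, STATE_DIM + ACTION_DIM)
state = torch.zeros(1, STATE_DIM)
action = policy(state, adapter(history))
```

Our next story is on the application side.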

The 2021 Wimbledon saw Novak Djokovic with the trophy again in men's singles, and Ashleigh Barty with the title in women's singles. If you're a big tennis fan, you might have spent lots of time watching the matches themselves, but the highlights reels are often a good way to catch up on the most interesting bits of the tournament. How were those highlight reels created?

According to the World Economic Forum, IBM Watson was watching every game simultaneously to create highlights packages within two minutes of a match finishing.

Rather than having a team of editors spend hours to compile highlights packages, Watson continuously tracks the action and ranks every point in the tournament by watching player reactions, listening to crowd excitement levels, and analyzing gameplay statistics.
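Purely as an illustration of that kind of ranking, a highlight selector could look like the sketch below; the specific signals, weights, and scale are invented for the example and are not IBM's actual system.

```python
# Illustrative highlight ranking: score every point, keep the best ones.
from dataclasses import dataclass

@dataclass
class PointClip:
    player_reaction: float   # e.g., celebration/gesture score in [0, 1]
    crowd_excitement: float  # e.g., crowd audio energy in [0, 1]
    stats_importance: float  # e.g., break point / set point weighting in [0, 1]

def excitement_score(clip: PointClip) -> float:
    """Weighted mix of the three signals; the weights are made up."""
    return (0.4 * clip.player_reaction
            + 0.35 * clip.crowd_excitement
            + 0.25 * clip.stats_importance)

def top_highlights(clips: list, k: int = 10) -> list:
    """Rank every point in a match and keep the k most exciting for the reel."""
    return sorted(clips, key=excitement_score, reverse=True)[:k]
```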

IBM AI technology is also powering new player performance and fact sheet tools for Wimbledon fans. With these new applications, we might just see a shift in how people watch and play tennis in the years to come.

And now for our stories on AI and society. YouTube's video recommendation algorithm has stood accused of fueling division, conspiracy theories, and a host of other societal ills by feeding users an AI-amplified diet of extreme content. While Google has sometimes responded to the negative publicity,

TechCrunch observes that it's not clear how much better the platform has become. New research by the Mozilla Foundation suggested that YouTube's AI system continues to serve content purely intended to attract attention by peddling polarization or spreading disinformation.

Mozilla gathered information for its study using a crowdsourcing approach with a browser extension that let users self-report YouTube videos they regret watching. The tool can then generate a report, including details of the videos the user had been recommended, along with earlier video views, to help build a picture of how the recommender system was working.

To fix YouTube's algorithm, which is clearly not performing much better than it used to, Mozilla is calling for common sense transparency laws, better oversight, and consumer pressure. Thanks so much for listening to this week's episode of Skynet Today's Let's Talk AI podcast. You can find the articles we discussed today and subscribe to our weekly newsletter with even more content at skynetoday.com.

Don't forget to subscribe to us wherever you get your podcasts and leave us a review if you like the show. Be sure to tune in when we return next week.