
Mini Episode: AI Fashion Models, AI for Job Hopping, Facebook Simulations, and Weird A.I. Yankovic

2020/7/26

Last Week in AI

People
Daniel Bashir
Topics
Daniel Bashir: This episode looks at how AI is affecting several fields, including the fashion industry, hiring, and social media, as well as copyright questions around AI-generated work. AI-generated fashion models threaten the jobs of traditional models and raise ethical concerns. AI hiring tools carry a risk of bias that could worsen existing unfairness in recruiting. Facebook is using AI to simulate harmful user behavior in order to improve platform safety. An AI-generated parody video has also sparked a copyright dispute, highlighting open questions about how AI and copyright law interact. Sinead Bovell: AI-generated fashion models pose a direct threat to traditional models' jobs, because AI can generate digital models capable of doing the same work, reducing demand for real models. Shudu Graham: As an AI-generated virtual model, Shudu raises ethical questions, since her views and identity are not real, and the intentions and motives of her creators deserve scrutiny. Nathan Newman: AI-driven hiring tools can be used to suppress wages and union activity; their use should require transparency, accountability, and rigorous peer review. Mark Harman: Facebook uses a simulator called WW (pronounced "dub dub") to identify and prevent harmful user behavior, running simulations at large scale to explore how the platform could be changed to suppress such behavior. Mark Riedl: Whether AI-generated parody videos qualify as fair use is an open question. As AI advances rapidly, similar copyright disputes are likely to become more common.


Transcript


Hello and welcome. This is Daniel Bashir here with Skynet Today's Week in AI. This week, we'll look at AI-generated fashion models, AI for predicting job hopping, Facebook simulations of bad user behavior, and whether AI can violate copyright. We're used to seeing CGI when we watch movies like The Hobbit. It creates terrifying orcs and other things we'd never see in real life.

But how would you feel about seeing CGI modeling clothes when you open a fashion magazine? Model Sinead Bovell writes for Vogue that companies like DataGrid, a Japanese tech outfit, are developing algorithms that directly threaten her job. So-called generative algorithms can create CGI models able to strike the same poses that earn real models most of their money.

One CGI model, Shudu Graham, hopes to champion diversity in the fashion world. But these beliefs and opinions don't actually belong to Shudu, because Shudu doesn't exist. Shudu's creators might not share her South African ethnicity, which also raises questions about their use of her as a platform to speak on these issues.

Ethical issues aside, Bovell acknowledges that digital models drastically reduce the environmental imprint associated with photoshoots and could become a symbol of individuality and inclusivity. Algorithms fed enough data could easily show you yourself or someone who looks like you wearing the clothes you want to buy.

It seems that models like Bovell, just like many others, will have to prepare for a digital transformation in their line of work. As with any technology, it doesn't do to be overly optimistic or overly pessimistic about what these changes will look like. But the changes in the fashion industry that digital technology portends will be interesting to watch.

As the pandemic has made companies and job seekers more averse to in-person interviewing, a growing number of firms have turned to AI to assist with their hiring. Screening tools include face-scanning algorithms and questionnaires to help determine which candidates to interview, but plenty of scholars warn that biased AI will only perpetuate existing bias in hiring.

The MIT Technology Review reports that companies like Australia-based Predictive Hire introduce problems that go beyond biased algorithms and misleading advertisements. Predictive Hire is focused on building a machine learning model that tries to determine a candidate's likelihood of job hopping, or changing jobs more frequently than an employer desires.

Nathan Newman, adjunct associate professor at the John Jay College of Criminal Justice, wrote in 2017 about how big data analysis like Predictive Hire's has been used to drive down workers' wages. Machine learning-based personality tests have been used in hiring to screen out potential employees who are more likely to support unionization or ask for higher wages.

But according to Newman, the algorithms are simply doing what employers have historically done to suppress wages and break union activity. Other researchers agree that such tools are troubling, but don't advocate tossing them out altogether. However, their use should require transparency, accountability, and rigorous peer-reviewed evaluation. Currently, it seems that none of these tools are meeting that standard.

In other corporate news, a team at Facebook's AI department in London has developed a new method to identify and prevent harmful user behavior, like spreading spam or buying and selling weapons. The Verge reports that the team simulates the behaviors of bad actors by letting AI-powered bots run loose on a parallel version of Facebook, a simulator whose name is spelled WW and pronounced dub dub.

By applying blocks to the harmful actions and observing what the bots do, the Facebook team wants to explore what changes they could make to the real platform to inhibit such behavior without affecting normal behavior.

DubDub only reports numerical data and can't model user intent or simulate complex behavior. But Mark Harman, the engineer in charge of the team, says the power of DubDub is its ability to operate at a huge scale, letting the team run thousands of simulations to observe how minor changes to the platform will play out.
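WW itself is proprietary and unpublished, but the mechanism the article describes — scripted bad-actor bots acting on a parallel platform, with candidate block rules applied and only aggregate numbers reported — can be sketched in miniature. Everything below (the `SpamBot` behavior, the block rules, the counts) is an invented toy illustration, not Facebook's actual system:

```python
import random

class Platform:
    """Stand-in for the simulated platform, parameterized by a block rule."""
    def __init__(self, block_rule):
        self.block_rule = block_rule

    def attempt(self, actor, action):
        # Return True if the action goes through, False if blocked.
        return not self.block_rule(actor, action)

class SpamBot:
    """A scripted bad actor that mostly tries to post spam."""
    def __init__(self, rng):
        self.rng = rng

    def act(self):
        return "spam" if self.rng.random() < 0.8 else "post"

def run_simulation(block_rule, n_bots=100, steps=50, seed=0):
    """Count how many harmful actions succeed under a given block rule."""
    rng = random.Random(seed)
    platform = Platform(block_rule)
    bots = [SpamBot(rng) for _ in range(n_bots)]
    harmful = 0
    for _ in range(steps):
        for bot in bots:
            action = bot.act()
            if platform.attempt(bot, action) and action == "spam":
                harmful += 1
    return harmful

# Compare candidate platform changes at scale, as the article describes:
baseline = run_simulation(lambda actor, action: False)          # no blocking
mitigated = run_simulation(lambda actor, action: action == "spam")
print(baseline, mitigated)
```

The point of the sketch is the comparison at the end: because the simulation is cheap, many candidate rules can be evaluated against the same bot population to see which ones suppress harmful actions.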

The team is focused on simulating harmful behaviors that Facebook has seen on its platform before. But Harman says that the bots have also produced unexpected behaviors, which might allow the team to get a leg up on preventing harmful behavior that hasn't occurred yet. It's a positive step that companies like Facebook are working to see how they can mitigate harmful behavior. We can only hope that they'll do all they can to prevent the spread of disinformation and inflammatory content as well.

Finally, should AI be subject to stringent fair use policies? Weird Al Yankovic's digital twin, Weird AI Yankovic, generated a lyric video set to the instrumental of Michael Jackson's Beat It that was taken down on July 14th.

Vice reports that Twitter received a Digital Millennium Copyright Act takedown notice for copyright infringement from the International Federation of the Phonographic Industry, which represents major and independent record companies. Weird AI's creator, Georgia Tech researcher Mark Riedl, thinks his videos fall under "fair use" and has contested the takedown with Twitter.

While using the instrumental to Beat It seems to have triggered the takedown, Riedl remains convinced of his case, arguing that he does not need permission from the copyright holder to publish parody content. AI is rapidly changing and becoming more powerful, and it seems likely that cases like Riedl's will only become more common. The case raises the question of how machine learning and copyright law interact, and there doesn't seem to be an easy answer.

That's all for this week. Thanks so much for listening. If you enjoyed the podcast, be sure to rate and share. If you'd like to hear more news like this, please check out SkynetToday.com, where you can find our weekly news digests with similar articles.