Hello and welcome. This is Daniel Bashir here with Skynet Today's Week in AI. This week, we'll look at AI-generated fashion models, AI for predicting job hopping, Facebook's simulations of bad user behavior, and whether AI can violate copyright. We're used to seeing CGI when we watch movies like The Hobbit, where it creates terrifying orcs and other things we'd never see in real life.
But how would you feel about seeing CGI models showing off clothes when you open a fashion magazine? Model Sinead Bovell writes for Vogue that companies like DataGrid, a Japanese tech outfit, are developing algorithms that directly threaten her job: generative algorithms that can create CGI models able to strike the same poses that earn real models most of their money.
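As a sketch of the underlying idea, here is a minimal generative adversarial network (GAN) generator of the kind such image-synthesis systems are often built on. DataGrid hasn't published its architecture, so the layer sizes and names below are purely illustrative.

```python
# A minimal sketch of a GAN generator of the general kind image-synthesis
# systems are often built on. DataGrid has not published its architecture;
# the layer sizes here are illustrative, not its actual design.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=128, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # Project a random latent vector up to a 4x4 feature map.
            nn.ConvTranspose2d(latent_dim, 256, kernel_size=4, stride=1, padding=0),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            # Upsample 4x4 -> 8x8 -> 16x16 -> 32x32.
            nn.ConvTranspose2d(256, 128, 4, 2, 1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, img_channels, 4, 2, 1),
            nn.Tanh(),  # Pixel values in [-1, 1].
        )

    def forward(self, z):
        return self.net(z)

# Sample a batch of synthetic 32x32 images from random noise.
g = Generator()
z = torch.randn(8, 128, 1, 1)
images = g(z)  # shape: (8, 3, 32, 32)
```

Trained against a discriminator on photos of real models, a generator like this learns to emit new, photorealistic images on demand.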
One CGI model, Shudu Gram, hopes to champion diversity in the fashion world. But these beliefs and opinions don't actually belong to Shudu, because Shudu doesn't exist. Shudu's creators don't share her South African heritage, which raises further questions about using her as a platform to speak on these issues.
Ethical issues aside, Bovell acknowledges that digital models drastically reduce the environmental footprint associated with photoshoots and could become a symbol of individuality and inclusivity: algorithms fed enough data could show you yourself, or someone who looks like you, wearing the clothes you want to buy.
It seems that models like Bovell, like workers in many other fields, will have to prepare for a digital transformation in their line of work. As with any technology, it doesn't do to be overly optimistic or overly pessimistic about what these changes will look like. But the changes that digital technology portends for the fashion industry will be interesting to watch.
As the pandemic has made companies and job seekers more averse to in-person interviewing, a growing number of firms have turned to AI to assist with their hiring. Screening tools include face-scanning algorithms and automated questions meant to help determine which candidates to interview, but plenty of scholars warn that biased AI will only perpetuate bias in hiring.
The MIT Technology Review reports that companies like Australia-based PredictiveHire introduce problems that go beyond biased algorithms and misleading advertisements. PredictiveHire is focused on building a machine learning model that tries to predict a candidate's likelihood of job hopping, that is, changing jobs more frequently than an employer would like.
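To make the general approach concrete, here is a minimal sketch of a text-based classifier of the kind such a tool might use. PredictiveHire's actual model, features, and training data are not public, so everything in this example, from the pipeline to the labels, is illustrative.

```python
# A minimal sketch of a text-based "job hopping" classifier. PredictiveHire's
# real model and features are not public; this pipeline and its tiny training
# set are entirely hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical free-text interview answers, labeled by whether the candidate
# later changed jobs within some window (1) or stayed (0).
answers = [
    "I enjoy taking on new challenges and learning quickly.",
    "I value stability and long-term growth within one team.",
    "I get bored easily and like variety in my work.",
    "I want to build deep expertise over many years.",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(answers, labels)

# The model outputs a probability, which a hiring tool might turn into a score.
print(model.predict_proba(["I like trying new roles often."])[0][1])
```

Even this toy version shows the worry critics raise: the model just learns whatever surface patterns in language correlate with the labels, with no guarantee those patterns are fair or even meaningful.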
Nathan Newman, adjunct associate professor at the John Jay College of Criminal Justice, wrote in 2017 about how big data analysis like PredictiveHire's has been used to drive down workers' wages. Machine-learning-based personality tests have been used in hiring to screen out potential employees who are deemed more likely to support unionization or ask for higher wages.
But according to Newman, the algorithms are simply doing what employers have historically done to suppress wages and break union activity. Other researchers agree that such tools are troubling but don't advocate tossing them out altogether; rather, their use should require transparency, accountability, and rigorous peer-reviewed evaluation. For now, none of these tools seem to meet that standard.
In other corporate news, a team at Facebook's AI department in London has developed a new method to identify and prevent harmful user behavior, like spreading spam or buying and selling weapons. The Verge reports that the team simulates the behaviors of bad actors by letting AI-powered bots run loose on a parallel version of Facebook, a simulator whose name is spelled WW and pronounced dub dub.
By applying blocks to the harmful actions and observing what the bots do, the Facebook team wants to explore what changes they could make to the real platform to inhibit such behavior without affecting normal behavior.
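As a toy illustration of that simulate-and-intervene loop, the sketch below lets a bot attempt actions on a make-believe platform, applies a candidate blocking rule, and compares how much harmful activity gets through with and without it. The real WW runs on Facebook's production infrastructure; the actions, probabilities, and rule here are all invented for illustration.

```python
# A toy version of WW's simulate-and-intervene loop: a scammer bot attempts
# actions, a candidate rule blocks some of them, and we measure how much harm
# gets through. Everything here is illustrative, not Facebook's actual system.
import random

def rate_limit(friends, action):
    # Candidate intervention: cap how many friend requests an account sends.
    return action == "send_friend_request" and friends >= 3

def run_simulation(rule=None, steps=1000, seed=0):
    rng = random.Random(seed)
    friends, sales = 0, 0
    for _ in range(steps):
        action = rng.choice(["post", "send_friend_request", "offer_weapon_sale"])
        if rule and rule(friends, action):
            continue  # Action blocked by the intervention.
        if action == "send_friend_request":
            friends += 1
        elif action == "offer_weapon_sale":
            # A sale is more likely to land the more contacts the bot has.
            if rng.random() < min(1.0, friends / 20):
                sales += 1
    return sales

print("harmful sales, no rule:  ", run_simulation())
print("harmful sales, with rule:", run_simulation(rule=rate_limit))
```

Running thousands of variations of a loop like this, with different rules and bot populations, is essentially how such a simulator lets engineers compare interventions before touching the real platform.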
WW only reports numerical data and can't model user intent or simulate complex behavior. But Mark Harman, the engineer in charge of the team, says WW's power is its ability to operate at a huge scale, letting the team run thousands of simulations to observe how minor changes to the platform will play out.
The team is focused on simulating harmful behaviors that Facebook has seen on its platform before. But Harman says the bots have also produced unexpected behaviors, which might allow the team to get a leg up on preventing harmful behavior that hasn't occurred yet. It's a positive step that companies like Facebook are working to see how they can mitigate harmful behavior; we can only hope they'll do all they can to prevent the spread of disinformation and inflammatory content as well.
Finally, should AI-generated content enjoy fair use protection? Weird Al Yankovic's digital twin, Weird A.I. Yancovic, generated a lyric video featuring the instrumental to Michael Jackson's "Beat It" that was taken down on July 14th.
Vice reports that Twitter received a Digital Millennium Copyright Act takedown notice for copyright infringement from the International Federation of the Phonographic Industry, which represents major and independent record companies. Weird A.I.'s creator, Georgia Tech researcher Mark Riedl, thinks his videos fall under fair use and has contested the takedown with Twitter.
While using the instrumental to "Beat It" seems to have triggered the takedown, Riedl remains convinced of his case, arguing that he does not need the copyright holder's permission to publish parody content. AI is rapidly becoming more powerful, and cases like Riedl's will likely only become more common. How machine learning and copyright law interact is a question with no easy answer.
That's all for this week. Thanks so much for listening. If you enjoyed the podcast, be sure to rate and share. If you'd like to hear more news like this, please check out SkynetToday.com, where you can find our weekly news digests with similar articles.