Hello and welcome, this is Daniel Bashir here with SkyNet Today's Week in AI. This week, we'll look at TikTok, deepfakes, AI's struggle to adjust to 2020, and how AI is learning to decide whether or not to defer to humans. Amid rising tensions between the US and China, President Trump has issued orders banning social media apps TikTok and WeChat if they are not sold by their Chinese parent companies.
Recently, Microsoft has been in talks to buy ByteDance-owned TikTok. Microsoft seems to be a promising but perhaps unexpected buyer, because TikTok's quirky landscape is a far cry from the mundane but widespread applications that Microsoft is known for. But as the Washington Post reports, TikTok provides Microsoft with something its competitors already have: large swaths of video data for training artificial intelligence systems.
A successful acquisition would likely allow TikTok to continue running independently while helping it make more money than it did before. In addition, it would give Microsoft the ability to use more video data to push the state-of-the-art in AI. Our next story begins with a question: Why would you want to put river water in your socks? Because it's quick, cheap, and easy.
And you know what else is becoming cheap and easy? Deepfakes. There are many photos of Tom Hanks online, but none like the ones showing him at the Black Hat computer security conference on Wednesday, August 5th. If you're wondering what in the world Tom Hanks would be doing at a computer security conference, you're not alone. The images were not real; they were generated by machine learning algorithms.
Wired reports that Philip Tully, a data scientist at the security company FireEye, generated the images to show how easily open-source software from AI labs could be adapted to misinformation campaigns.
While minor details might give them away as fakes, the AI-made images could easily pass as real if used as a thumbnail. Furthermore, Tully only needed to gather a few hundred images of Hanks online and spend less than $100 to tune open-source face generation software to Hanks' face.
While Tully's experiment shows how easy it might be for an individual to use deepfakes to spread disinformation, Georgetown research fellow Tim Hwang says the killer app for deepfake disinformation is yet to come. Hwang recently authored a report that concludes deepfakes don't present an acute and imminent threat, but that society should invest in defenses anyway.
Lee Foster, who leads a team at FireEye that tracks disinformation campaigns, says that Tully's results, along with his own experience with those who sow disinformation, make him think they may turn to deepfakes soon. The quality of Tully's fake Hanks images was not far from being a viable tool for tricksters.
If you're a person, or even not a person, your habits have most likely been greatly affected by the events of 2020 so far: COVID-19, civil rights movements, a US election, and other important changes. TechCrunch reports that with this sudden change in social and cultural norms, the truths that we've taught AI are no longer true.
In particular, computer vision models struggle to appropriately tag depictions of the new scenes and situations we find ourselves in today.
The TechCrunch article gives an example of a father working at home while his son is playing. An AI model would generally categorize such a photo as leisure or relaxation, rather than work or office, which would more accurately describe the new reality. As we discussed in a previous week, facial recognition researchers also want to adapt their algorithms to cope with the new norm of masked faces.
TechCrunch notes that updating the data fed to these algorithms is vital, but that in creating new content we may introduce new, unintentional biases. The article gives the example of seeing more images of white people with face masks than of other ethnicities. But while much of the data we have today already contains biases from how it was collected, perhaps a shift in norms will give us a chance to do better as we outfit our algorithms with new data that matches our new reality.
If we take a more careful approach this time around, we may have a chance at mitigating some of the negative impacts of AI bias. In a world where AI is becoming ubiquitous, we're beginning to encounter questions of how humans and AI will interact, since together, they can outperform either one acting alone.
In the medical field, for example, there's a question of how AI's decisions should be incorporated into a final medical decision. Even though AI systems are designed to make decisions on their own, they should defer to humans when humans can make better ones. The MIT Technology Review reports that researchers at MIT's Computer Science and AI Lab have developed an AI system that optimizes decisions about whether the AI should defer, based on the strengths and weaknesses of its human collaborator.
The system uses two separate machine learning models: one that makes the actual decision of diagnosing a patient or removing a social media post, and one that predicts whether the AI or human is the better decision maker. The researchers tested the hybrid approach in tasks such as image recognition and hate speech detection, and found that the AI system adapted to the expert's behavior and deferred when appropriate.
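To make that concrete, here's a minimal sketch of the two-model idea in Python with scikit-learn. It uses toy data and a simulated expert of our own invention, and it illustrates the general "learn to defer" setup rather than the MIT researchers' actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setup: when feature 2 is positive the label follows a simple linear rule
# (easy for the classifier); otherwise it follows an XOR pattern that a linear
# model can't learn but our simulated expert happens to know.
X = rng.normal(size=(2000, 5))
easy_region = X[:, 2] > 0
y = np.where(easy_region,
             (X[:, 0] > 0).astype(int),
             ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int))

# Simulated human expert: reliable on the hard region, guessing elsewhere.
expert_pred = np.where(easy_region, rng.integers(0, 2, size=len(y)), y)

# Model 1: the task classifier.
clf = LogisticRegression().fit(X, y)
clf_pred = clf.predict(X)

# Model 2: the deferral model. It is trained on the cases where exactly one of
# the two is correct, to predict when the expert is the better decision maker.
expert_right = expert_pred == y
clf_right = clf_pred == y
disagree = expert_right != clf_right
deferral = LogisticRegression().fit(X[disagree], expert_right[disagree].astype(int))

# At decision time, defer to the human whenever model 2 predicts they'll do better.
defer = deferral.predict(X).astype(bool)
hybrid_pred = np.where(defer, expert_pred, clf_pred)
print("classifier alone:", clf_right.mean())
print("expert alone:    ", expert_right.mean())
print("hybrid (defers): ", (hybrid_pred == y).mean())
```

On this toy data the hybrid should beat both the classifier and the expert acting alone, because the deferral model learns to hand over exactly the inputs the classifier handles poorly, which is the behavior the researchers describe.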
The experiments were simple, and the researchers believe such an approach could eventually be applied to complex decisions in healthcare and elsewhere. But the key word is "eventually". We should be wary of reading too much into these results and their applicability without lots of iteration and testing. Real-life decisions are far more complicated than lab scenarios, and it's hard to know how well the hybrid model will handle them.
That's all for this week. Thanks so much for listening. If you enjoyed the podcast, be sure to rate and share. If you'd like to hear more news like this, please check out skynetoday.com, where you can find our weekly news digests with similar articles.