
Mini Episode: TikTok, Cheap Deepfakes, AI in 2020, and Deference

2020/8/9

Last Week in AI

People
Daniel Bashir
Topics
Daniel Bashir: This week's AI news roundup covers the reasons behind Microsoft's potential acquisition of TikTok, the falling cost of deepfakes, AI's struggle to adapt to the events of 2020, and AI learning when to defer to humans. Microsoft's motivation for acquiring TikTok lies in gaining access to TikTok's vast trove of video data for training its AI systems. This would help Microsoft advance in AI while allowing TikTok to keep operating independently and earn more revenue than before.

Daniel Bashir: Deepfake technology is becoming cheap and easy to obtain. Philip Tully, a data scientist at the security company FireEye, used open-source AI software to create deepfake images of Tom Hanks, showing how easily it can be adapted for disinformation campaigns. Minor details may give the images away, but they could easily pass as real when used as thumbnails. Tully's experiment shows that individuals can readily use deepfakes to spread false information. Georgetown research fellow Tim Hwang argues that deepfakes are not yet an imminent threat, but that society should still invest in defenses. Lee Foster, who leads FireEye's team tracking disinformation campaigns, believes purveyors of disinformation may soon turn to deepfakes.

Daniel Bashir: The events of 2020 (COVID-19, social unrest, and more) have caused abrupt shifts in social and cultural norms that AI struggles to keep up with. Computer vision models have trouble correctly tagging images of today's new scenes and situations. For example, an AI model might classify a photo of a father working from home while his son plays as leisure rather than work. Updating the algorithms' training data is essential, but doing so risks unintentionally increasing bias.

Daniel Bashir: Researchers at MIT's Computer Science and Artificial Intelligence Laboratory have developed an AI system that optimizes whether the AI should defer, based on the strengths and weaknesses of its human collaborator. The system uses two separate machine learning models: one makes the actual decision, such as diagnosing a patient or removing a social media post, and the other predicts whether the AI or the human is the better decision maker. The researchers tested this hybrid approach on tasks such as image recognition and hate speech detection and found that the AI system adapted to the expert's behavior and deferred when appropriate. However, real-life decisions are far more complicated than lab scenarios, so it remains to be seen how well the hybrid model handles them.

Philip Tully: Open-source AI software can easily be adapted for disinformation campaigns; even on a limited budget, an individual can create convincing deepfake images.

Tim Hwang: Deepfakes do not pose an immediate threat today, but society should invest in defenses ahead of time.

Lee Foster: Given how easy and cheap deepfakes are becoming, purveyors of disinformation are likely to adopt the technology soon.


Transcript


Hello and welcome, this is Daniel Bashir here with SkyNet Today's Week in AI. This week, we'll look at TikTok, deepfakes, AI's struggle to adjust to 2020, and how AI is learning to decide whether or not to defer to humans. Amid rising tensions between the US and China, President Trump has issued orders banning social media apps TikTok and WeChat if they are not sold by their Chinese parent companies.

Recently, Microsoft has been in talks to buy ByteDance-owned TikTok. Microsoft seems to be a promising but perhaps unexpected buyer, because TikTok's quirky landscape is a far cry from the mundane but widespread applications that Microsoft is known for. But as the Washington Post reports, TikTok provides Microsoft with something its competitors already have: large swaths of video data for training artificial intelligence systems.

A successful acquisition would likely allow TikTok to continue running independently while helping it make more money than it did before. In addition, it would give Microsoft the ability to use more video data to push the state-of-the-art in AI. Our next story begins with a question: Why would you want to put river water in your socks? Because it's quick, cheap, and easy.

And you know what else is becoming cheap and easy? Deepfakes. There are many photos of Tom Hanks online, but none like the one showing him at the Black Hat computer security conference on Wednesday, August 5th. If you're wondering what in the world Tom Hanks would be doing at a computer security conference, you're not alone. The images were actually not real, but made by machine learning algorithms.

Wired reports that Philip Tully, data scientist at security company FireEye, generated the images to show how easily open-source software from AI labs could be adapted to misinformation campaigns.

While minor details might betray their authenticity, the AI-made images could easily pass as real if used as a thumbnail. Furthermore, Tully only needed to gather a few hundred images of Hanks online and spend less than $100 to tune open-source face generation software to Hanks' face.
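To give a sense of what "tuning open-source face generation software" involves, here is a deliberately tiny, hypothetical sketch of a GAN fine-tuning loop in PyTorch. It is not Tully's actual pipeline: the miniature networks, the random tensors standing in for face crops, and every hyperparameter are placeholders, and a real effort would start from a large pretrained generator, which is exactly what keeps the cost in the low hundreds of dollars.

```python
# Toy sketch of adapting a face generator to a specific person's photos.
# NOT the pipeline from the article: a minimal, hypothetical GAN loop.
# A real run would load a large pretrained generator and a few hundred
# real face crops rather than the stand-ins used here.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

# Stand-in for "a few hundred images of Hanks": random 64x64 RGB tensors.
fake_dataset = torch.rand(256, 3, 64, 64)
loader = DataLoader(fake_dataset, batch_size=32, shuffle=True)

latent_dim = 128

# Tiny generator and discriminator; a real fine-tune starts from weights
# released with open-source face-generation models.
G = nn.Sequential(
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, 3 * 64 * 64), nn.Tanh(),
)
D = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for epoch in range(3):                      # a real fine-tune runs far longer
    for real in loader:
        real = real * 2 - 1                 # match the generator's tanh range
        b = real.size(0)
        z = torch.randn(b, latent_dim)
        fake = G(z).view(b, 3, 64, 64)

        # Discriminator step: push real images toward 1, generated toward 0.
        d_loss = bce(D(real), torch.ones(b, 1)) + \
                 bce(D(fake.detach()), torch.zeros(b, 1))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator step: try to make D label generated images as real.
        g_loss = bce(D(fake), torch.ones(b, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
    print(f"epoch {epoch}: d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```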

While Tully's experiment shows how easy it might be for an individual to use deepfakes to spread disinformation, Georgetown research fellow Tim Hwang says the killer app for deepfake disinformation is yet to come. Hwang recently authored a report that concludes deepfakes don't present an acute and imminent threat, but that society should invest in defenses anyway.

Lee Foster, who leads a team at FireEye that tracks disinformation campaigns, says that Tully's results, and his own experience with those who sow disinformation, make him think they may turn to deepfakes soon. The quality of Tully's fake Hanks images was not far from being a viable tool for tricksters.

If you're a person, or even not a person, your habits have most likely been greatly affected by the events of 2020 so far, in light of COVID-19, civil rights movements, a US election, and other important changes. TechCrunch reports that with this sudden change in social and cultural norms, the truths that we've taught AI are no longer true.

In particular, computer vision models struggle to appropriately tag depictions of the new scenes and situations we find ourselves in today.

The TechCrunch article gives an example of a father working at home while his son is playing. An AI model would generally categorize such a photo as leisure or relaxation, rather than work or office, which would more accurately describe the new reality. As we discussed in a previous week, facial recognition researchers also want to adapt their algorithms to cope with the new norm of masked faces.

TechCrunch notes that updating the data fed to these algorithms is vital, but in collecting that new content, we may introduce more unintentional bias. The article gives the example of seeing more images of white people with face masks than of other ethnicities. But while much of the data we have today already contains biases from its collection, perhaps a shift in norms will give us a chance to do better as we outfit our algorithms with new data that matches our new reality.
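One simplified way to catch that kind of skew before retraining is to audit the newly collected data for how groups are represented. The sketch below is illustrative only: the in-memory records, the group labels, and the 50% flagging threshold are all assumptions, standing in for whatever metadata a real labeling pipeline would produce.

```python
# Minimal sketch of auditing newly collected training data for skew before
# it is used to update a model. The records below are fabricated stand-ins
# for per-image metadata; in practice they would come from a labeling
# pipeline (e.g. a CSV with one row per image).
from collections import Counter

new_images = [
    {"group": "white", "has_mask": True},
    {"group": "white", "has_mask": True},
    {"group": "black", "has_mask": True},
    {"group": "asian", "has_mask": False},
    {"group": "white", "has_mask": True},
    # ... a real audit would run over thousands of rows
]

masked = [r for r in new_images if r["has_mask"]]
counts = Counter(r["group"] for r in masked)
total = sum(counts.values())

print("share of masked-face images by group:")
for group, n in counts.most_common():
    share = n / total
    flag = "  <-- over-represented?" if share > 0.5 else ""
    print(f"  {group}: {share:.0%}{flag}")
```

An audit like this would then feed back into collection targets, so under-represented groups are sampled more heavily before the model is retrained.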

If we take a more careful approach this time around, we may have a chance at mitigating some of the negative impacts of AI bias.

In a world where AI is becoming ubiquitous, we're beginning to encounter questions of how humans and AI will interact, since together they can outperform either one acting alone.

In the medical field, for example, there's a question of how an AI's decisions should be incorporated into a final medical decision. While AI systems are designed to make decisions on their own, they should defer to humans when a human can make the better call. The MIT Technology Review reports that researchers at MIT's Computer Science and AI Lab have developed an AI system to optimize decisions about whether an AI should defer based on the strengths and weaknesses of a human collaborator.

The system uses two separate machine learning models: one that makes the actual decision of diagnosing a patient or removing a social media post, and one that predicts whether the AI or human is the better decision maker. The researchers tested the hybrid approach in tasks such as image recognition and hate speech detection, and found that the AI system adapted to the expert's behavior and deferred when appropriate.
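As a rough illustration of that two-model setup, here is a hypothetical sketch using scikit-learn: a task classifier makes the prediction, a second "rejector" model is trained to predict when a (simulated) human expert is more likely to be right than the AI, and the system defers accordingly. The synthetic data, the simulated expert, and the separately trained models are all assumptions for illustration; the MIT system is trained differently and handles much richer settings.

```python
# Toy sketch of "learning to defer": a task model plus a second model that
# predicts whether the AI or the human expert should handle each example.
# Everything here (synthetic data, simulated expert, model choices) is a
# hypothetical stand-in, not the MIT CSAIL system itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1) Task model: makes the actual prediction (e.g. "remove this post?").
task_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 2) Simulated expert: reliable on one slice of the data, noisy elsewhere.
def expert_predict(X, y_true):
    good_slice = X[:, 0] > 0
    noise = rng.random(len(y_true)) < 0.35
    preds = y_true.copy()
    flip = ~good_slice & noise
    preds[flip] = 1 - preds[flip]
    return preds

# 3) Rejector: trained to predict where the expert beats the task model.
ai_correct = task_model.predict(X_train) == y_train
expert_correct = expert_predict(X_train, y_train) == y_train
defer_label = (expert_correct & ~ai_correct).astype(int)
rejector = LogisticRegression(max_iter=1000).fit(X_train, defer_label)

# At test time, route each example to whichever decision maker is predicted
# to do better.
defer = rejector.predict(X_test).astype(bool)
final = np.where(defer, expert_predict(X_test, y_test), task_model.predict(X_test))

print("AI alone accuracy:", (task_model.predict(X_test) == y_test).mean())
print("hybrid accuracy:  ", (final == y_test).mean())
print("fraction deferred:", defer.mean())
```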

The experiments were simple, and the researchers believe such an approach could eventually be applied to complex decisions in healthcare and elsewhere. But the key word is "eventually": we should be wary of reading too much into these results and their applicability without lots of iteration and testing. Real-life decisions are indeed far more complicated than lab scenarios, and it's hard to know how well the hybrid model will handle those situations.

That's all for this week. Thanks so much for listening. If you enjoyed the podcast, be sure to rate and share. If you'd like to hear more news like this, please check out skynetoday.com, where you can find our weekly news digests with similar articles.