
Mini Episode: Two Facial Recognition Stories, A Reckoning for NLP, and "Self-Programming Computers"

2020/8/2

Last Week in AI

People
Daniel Bashir
Topics
Daniel Bashir: This week's AI news covers three main stories. First, a Reuters investigation revealed that American drugstore chain Rite Aid used facial recognition systems in more than 200 of its stores, with deployments concentrated in lower-income, non-white neighborhoods, raising concerns about privacy and civil rights. For a time the system used technology from a company with ties to the Chinese government; although there is no evidence the data was sent to China, its use still carried potential risks. Facial recognition has also produced misidentifications, such as wrongly flagging an innocent customer as a shoplifter. Second, natural language processing may be facing a reckoning: although systems like GPT-3 perform well on benchmarks, that does not mean they truly understand natural language, nor that the benchmarks themselves are meaningful. Researchers have focused too heavily on benchmark scores and neglected the goal of building more complete world models. Finally, automated code generation has made progress, with systems like MISIM able to suggest improvements to code, but computers being able to truly "program themselves" is still a long way off. While this technology has the potential to help people write software, it remains at an early stage, and even optimism is premature at this point.


Transcript


Hello and welcome, this is Daniel Bashir here with Skynet Today's Week in AI. This week, we'll look at two recent facial recognition stories, a reckoning for natural language processing, and what apparently counts as computers programming themselves.

First off, a Reuters investigation found that American drugstore chain Rite Aid Corp. added facial recognition systems to 200 stores across the United States over about eight years. While other retailers, including Walmart and Home Depot, have also used facial recognition, Rite Aid's usage represents the largest rollout so far.

Rite Aid deployed the technology in largely lower-income, non-white neighborhoods in New York and Metro Los Angeles. Reuters also found that for over a year, Rite Aid used facial recognition technology from a company with links to China and its authoritarian government, but found no evidence that Rite Aid's data was sent to China. After Reuters sent its findings to Rite Aid, the retailer said it had quit using its facial recognition software and that all cameras had been turned off.

But some damage has already been done. Tristan Jackson-Stankinas says Rite Aid wrongly identified him as a shoplifter in a Los Angeles store based on someone else's photo. He told Reuters that the only similarity between him and the man in the photo was that they were both African American. Rite Aid's rollout is just another case that will add to the already growing conversation about facial recognition and its interaction with American citizens' constitutional rights.

But if you're worried about stores like Rite Aid accusing you of shoplifting, then the mask that I hope you're already wearing outside might just be your friend. The Verge reports that a study by the US National Institute of Standards and Technology found that wearing face masks that cover the nose and mouth causes the error rates of widely used facial recognition algorithms to spike to between 5 and 50 percent.

But don't get your hopes up too high. The study focused on algorithms developed before the pandemic. The researchers will later explore algorithms developed with masked faces in mind. Since facial recognition algorithms use many facial features to identify people, it makes sense that a mask would make things more difficult for them. Unfortunately, the people who create facial recognition algorithms know this too and intend to keep up.

If you've been following the hype around GPT-3, OpenAI's new text generation system, you've probably seen the amazing feats it's performed, such as writing web pages in React. You may have also heard that GPT-3 did very well on a number of performance benchmarks for natural language processing. But does that mean that natural language processing systems are poised to take people's jobs? And do those benchmarks even mean anything? Perhaps not.

Jesse Dunietz of Elemental Cognition writes for the MIT Technology Review that natural language processing, the field that broadly seeks to help computers make sense of natural language, could be facing a reckoning. "Even state-of-the-art reading comprehension systems, which achieve solid performance on established benchmarks, can easily be fooled by a human within a few tries."

Dunietz argues that relentless leaderboard chasing, trying to achieve the best performance on a benchmark, has distracted researchers from natural language processing's ultimate goal: to help their systems develop a coherent model of the world. It does seem that researchers are finally coming to grips with this issue and reflecting on what's missing from current technologies. Dunietz hopes that this reflection will lead to not just new algorithms, but new and more rigorous ways of measuring machines' comprehension.

While such work may not be as headline-worthy, it will be very impactful. And finally, in this week's clickbait headline, the MIT Technology Review says, "A new neural network could help computers program themselves." But what does that title actually mean? Well, it's not entirely false. Automated code generation, which roughly translates to generating code based on a description of what you want to achieve, has been around for some time.

The Technology Review article cites Microsoft's move to build basic code generation into its tools, Facebook's system that auto-completes small programs, and DeepMind's neural network that can devise more efficient versions of simple algorithms than humans can. Justin Gottschlich, director of the Machine Programming Research Group at Intel, along with a team from Intel, MIT, and the Georgia Institute of Technology, developed a system called Machine Inferred Code Similarity, or MISIM.

MISIM can extract what a piece of code is telling the computer to do, then suggest corrections or improvements to the code. It compares snippets of code with other programs it has already seen from a large number of online repositories, then uses a neural network to find other code with similar meaning.
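To make that idea a bit more concrete, here is a minimal, hypothetical sketch of similarity-based code retrieval: embed each snippet as a vector, then rank previously seen code by its similarity to a query snippet. This is not Intel's actual MISIM pipeline; the hashed-trigram embedding, the toy corpus, and the snippet names are stand-ins for a trained neural encoder and a real collection of repositories.

```python
# Minimal, hypothetical sketch of similarity-based code retrieval.
# NOT the actual MISIM pipeline: the hashed-trigram "embed" stands in
# for a trained neural encoder, and the corpus is a toy stand-in for
# a large collection of online repositories.
import hashlib
import math

def embed(snippet, dim=64):
    """Map a code snippet to a fixed-size vector using hashed character trigrams."""
    vec = [0.0] * dim
    text = " ".join(snippet.split())  # normalize whitespace
    for i in range(len(text) - 2):
        trigram = text[i:i + 3]
        idx = int(hashlib.md5(trigram.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]  # unit-normalize so a dot product is cosine similarity

def similarity(a, b):
    """Cosine similarity of two unit-normalized vectors."""
    return sum(x * y for x, y in zip(a, b))

# Toy "repository" of previously seen snippets (hypothetical names).
corpus = {
    "sum_loop": "total = 0\nfor x in xs:\n    total += x",
    "sum_builtin": "total = sum(xs)",
    "reverse_string": "s = s[::-1]",
}

# Query: a snippet whose intent we want to match against known code.
query = "acc = 0\nfor item in items:\n    acc += item"
q_vec = embed(query)
ranked = sorted(corpus.items(),
                key=lambda kv: similarity(q_vec, embed(kv[1])),
                reverse=True)
for name, _ in ranked:
    print(name)  # most similar snippets first
```

In a system like the one the article describes, the embedding step would be learned rather than hand-built, so snippets that accomplish the same thing can score as similar even when their surface text differs.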

Indeed, MISIM is an improvement in this area and could help people write better code. The researchers are hopeful that combined with natural language processing techniques, this idea could help people write software by describing what they want to do in words. While such prospects are exciting, they are very optimistic. We're likely far away from that goal, and even farther from a world in which it might be accurate to say computers are programming themselves.

So don't worry or get too excited just yet. That's all for this week. Thanks so much for listening. If you enjoyed the podcast, be sure to rate and share. If you'd like to hear more news like this, please check out SkynetToday.com, where you can find our weekly news digests with similar articles.