Hello and welcome, this is Daniel Bashir here with Skynet Today's Week in AI. This week, we'll look at two recent facial recognition stories, a reckoning for natural language processing, and what apparently counts as computers programming themselves.
First off, a Reuters investigation found that American drugstore chain Rite Aid Corp. added facial recognition systems to 200 stores across the United States over about eight years. While other retailers, including Walmart and Home Depot, have also used facial recognition, Rite Aid's deployment appears to be the largest such rollout by an American retailer so far.
Rite Aid deployed the technology in largely lower-income, non-white neighborhoods in New York and Metro Los Angeles. Reuters also found that for over a year, Rite Aid used facial recognition technology from a company with links to China and its authoritarian government, but found no evidence that Rite Aid's data was sent to China. After Reuters sent its findings to Rite Aid, the retailer said it had quit using its facial recognition software and that all cameras had been turned off.
But some damage has already been done. Tristan Jackson-Stankinas says Rite Aid wrongly identified him as a shoplifter in a Los Angeles store based on someone else's photo. He told Reuters that the only similarity between him and the man in the photo was that they were both African American. Rite Aid's rollout adds yet another case to the growing conversation about facial recognition and its interaction with American citizens' constitutional rights.
But if you're worried about stores like Rite Aid accusing you of shoplifting, then the mask that I hope you're already wearing outside might just be your friend. The Verge reports that a study by the US National Institute of Standards and Technology found that wearing face masks that cover the nose and mouth causes the error rates of widely used facial recognition algorithms to spike to between 5 and 50 percent.
But don't get your hopes up too high. The study focused on algorithms developed before the pandemic. The researchers will later explore algorithms developed with masked faces in mind. Since facial recognition algorithms use many facial features to identify people, it makes sense that a mask would make things more difficult for them. Unfortunately, the people who create facial recognition algorithms know this too and intend to keep up.
If you've been following the hype around GPT-3, OpenAI's new text generation system, you've probably seen the amazing feats it's performed, such as generating webpages and React code from plain-English descriptions. You may have also heard that GPT-3 did very well on a number of performance benchmarks for natural language processing. But does that mean natural language processing systems are poised to take people's jobs? And do those benchmarks even mean anything? Perhaps not.
Jesse Dunietz of Elemental Cognition writes for the MIT Technology Review that natural language processing, the field that broadly seeks to help computers make sense of natural language, could be facing a reckoning. "Even state-of-the-art reading comprehension systems, which achieve solid performance on established benchmarks, can easily be fooled by a human within a few tries."
Dunietz argues that relentless leaderboard chasing, trying to achieve the best performance on a benchmark, has distracted researchers from natural language processing's ultimate goal: building systems that develop a coherent model of the world. It does seem that researchers are finally coming to grips with this issue and reflecting on what's missing from current technologies. Dunietz hopes that this reflection will lead to not just new algorithms, but new and more rigorous ways of measuring machines' comprehension.
While such work may not be as headline-worthy, it will be far more impactful. And finally, in this week's clickbait headline, the MIT Technology Review says, "A new neural network could help computers program themselves." But what does that title actually mean? Well, it's not entirely false. Automated code generation, which means generating code from a description of what you want it to do, has been around for some time.
The Technology Review article cites Microsoft's move to build basic code generation into its tools, Facebook's system that auto-completes small programs, and DeepMind's neural network that can devise more efficient versions of simple algorithms than humans can. Justin Gottschlich, director of the Machine Programming Research Group at Intel, along with a team from Intel, MIT, and the Georgia Institute of Technology, developed a system called Machine Inferred Code Similarity, or MISIM.
MISIM extracts a representation of what a piece of code is telling the computer to do, independent of how the code is written. It then compares that snippet against programs it has already seen, drawn from a large number of online repositories, using a neural network to score how similar in meaning two pieces of code are, so that it can surface similar code and suggest corrections or improvements.
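To give a rough flavor of what similarity-based code search involves, here is a minimal toy sketch. To be clear, this is not Intel's MISIM: the hand-rolled structural profile below stands in for MISIM's learned neural similarity metric, and the tiny snippet catalog is hypothetical. The core idea is to represent each snippet by its structure rather than its exact text, then rank known snippets by how similar they are to a query.

```python
# Toy sketch of similarity-based code search in the spirit of MISIM.
# NOT Intel's implementation: a simple structural profile stands in
# for MISIM's learned neural similarity metric, and the catalog is
# a hypothetical stand-in for a repository-scale corpus.
import ast
import math
from collections import Counter

def embed(source: str) -> Counter:
    # Profile a snippet by the multiset of AST node types it contains,
    # capturing rough structure while ignoring names and formatting.
    return Counter(type(node).__name__ for node in ast.walk(ast.parse(source)))

def similarity(a: Counter, b: Counter) -> float:
    # Cosine similarity between two structural profiles.
    dot = sum(a[k] * b[k] for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical catalog of previously seen snippets.
catalog = {
    "sum_loop": "total = 0\nfor x in xs:\n    total += x",
    "sum_builtin": "total = sum(xs)",
    "greeting": "print('hello')",
}

query = "s = 0\nfor item in items:\n    s += item"
q = embed(query)
ranked = sorted(catalog, key=lambda name: similarity(q, embed(catalog[name])),
                reverse=True)
print(ranked)  # the loop-based summation should rank first
```

In a real system like MISIM, the similarity function is learned by a neural network rather than hand-coded, which is what lets it match code by meaning even when the structure differs.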
Indeed, MISIM is an improvement in this area and could help people write better code. The researchers are hopeful that, combined with natural language processing techniques, this approach could one day help people write software by describing what they want in words. While such prospects are exciting, they remain very optimistic: we're likely far from that goal, and even farther from a world in which it would be accurate to say computers are programming themselves.
So don't worry or get too excited just yet. That's all for this week. Thanks so much for listening. If you enjoyed the podcast, be sure to rate and share. If you'd like to hear more news like this, please check out SkynetToday.com, where you can find our weekly news digests with similar articles.