Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear from AI researchers about what's actually going on with AI and what's just clickbait headlines. This is our latest Last Week in AI episode, in which you get a quick digest of last week's AI news, as well as a bit of discussion between two AI researchers as to what we think about this news. To start things off, we'll hand it off to Daniel Bashir to summarize what happened in AI last week.
We'll be back in just a few minutes to dive deeper into these stories and give our takes. Hello, this is Daniel Bashir here with our weekly news summary. This week, we'll look at two stories of positive AI applications, one negative application, and a story on ethical AI. When developers first tried teaching games like chess to AI systems, they fed them expert games to show them how to play well.
But as Wired reports, a new AI chess program named Maia focuses on simply predicting human moves, including mistakes. Jon Kleinberg, the Cornell professor who led Maia's development, says this is a step towards developing AI that understands human fallibility. This knowledge could help AI become better at interacting with humans.
Kleinberg identifies healthcare as a possible case where a system that anticipates errors might be able to train doctors. But he focuses on chess because it's one of the first battlefields where AI defeated humans. As we develop better AI systems, those that understand human behavior might be able to help the humans who can't keep up with them. We'll see if Kleinberg's program helps create such systems.
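To make the idea concrete, here is a minimal sketch of how "predict the move a human would play, mistakes included" can be framed as a plain classification problem. This is not Maia's actual implementation; the board encoding, the move vocabulary size, and the randomly generated training batch are hypothetical stand-ins.

```python
# Minimal sketch: frame "predict the human's next move" as classification.
# NOT Maia's implementation; data and dimensions below are hypothetical stand-ins.
import torch
import torch.nn as nn

NUM_PIECE_PLANES = 12   # one plane per piece type and color
NUM_SQUARES = 64        # board squares
NUM_MOVES = 2000        # hypothetical size of a from-square/to-square move vocabulary

class HumanMovePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(NUM_PIECE_PLANES * NUM_SQUARES, 512),
            nn.ReLU(),
            nn.Linear(512, NUM_MOVES),  # logits over all candidate moves
        )

    def forward(self, board_planes):
        return self.net(board_planes)

model = HumanMovePredictor()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical batch: positions from human games paired with the move the
# human actually played (including blunders), not the engine-best move.
boards = torch.randn(32, NUM_PIECE_PLANES, NUM_SQUARES)
human_moves = torch.randint(0, NUM_MOVES, (32,))

optimizer.zero_grad()
loss = loss_fn(model(boards), human_moves)  # reward matching the human, mistakes and all
loss.backward()
optimizer.step()
```

The key design choice is the training target: the model is scored on matching what a person actually did rather than on playing the strongest move, which is what lets it capture human fallibility.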
If you've ever been confused during a lecture, your teacher might have noticed the look on your face and encouraged you to ask a question, or stopped to re-explain something. But with lessons online, this can be difficult for teachers to do. According to CNN Business, Hong Kong-based startup Find Solution AI has created software called 4 Little Trees to help teachers read the room.
As students work on tests and homework on the platform, the AI system measures muscle points on their faces to identify emotions. The system also measures multiple indicators of student performance and generates reports on their strengths and weaknesses. It can use that information to adapt to each student, providing targeted ways to help them learn. Founder Viola Lam says students who use the software perform 10% better on exams.
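As a rough illustration of the general approach described here, facial landmark features mapped to emotion labels by a classifier, the sketch below uses placeholder data. It is not 4 Little Trees' actual system; the extract_landmarks helper, the emotion labels, and the random training data are all hypothetical.

```python
# Rough sketch of emotion classification from facial landmark features.
# Generic illustration only, not 4 Little Trees' system; extract_landmarks()
# and all data below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["happy", "sad", "angry", "surprised", "confused", "neutral"]

def extract_landmarks(image: np.ndarray) -> np.ndarray:
    """Hypothetical helper: return a flat vector of (x, y) facial landmark
    coordinates from whichever face/landmark detector you plug in."""
    raise NotImplementedError

# Pretend we already have landmark vectors and per-frame emotion labels
# (random placeholders here) gathered from labeled training footage.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 136))              # e.g. 68 landmarks x (x, y)
y_train = rng.integers(0, len(EMOTIONS), size=500)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# At inference time, each new frame's landmarks get mapped to an emotion,
# which a tutoring platform could aggregate into a per-student report.
new_frame_landmarks = rng.normal(size=(1, 136))
print(EMOTIONS[clf.predict(new_frame_landmarks)[0]])
```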
As the technology improves, Lam hopes to expand its use to business. In a less positive use case, Clearview AI doesn't seem to be going away. Their latest patent application describes applications of facial recognition ranging from governmental to social, like dating and professional networking. As Input Mag reports, Clearview's patent claims that the company's face-matching system can be used to identify individuals who are unhoused or are drug users.
Clearview has already stated that its application could be used as a background check in business or dating contexts. As with its previous work, Clearview remains reticent about the privacy risks its new application poses. Facial recognition technology has already been involved in two false arrests, stoking calls for federal legislation governing the use of facial recognition to make arrests.
Clearview has said it will have its software tested by the National Institute of Standards and Technology. It seems unlikely that Clearview will stop without a great deal of pushback, and we may just end up discovering what dangers its new applications pose before they are regulated.
Finally, there has been a fair amount of AI research targeted towards applications like generating faces and predicting criminality. While the researchers may think these ideas are useful, many others think such lines of research are incredibly dangerous for a multitude of reasons and therefore shouldn't exist.
As The New Yorker reports, "The domain of AI has not been questioned very much about the ethics of its research, merely about the potential business applications. But AI research organizations have begun to develop systems for addressing ethical impact." But at present, AI research is largely self-regulated with norms and not rules. Certain papers, such as ones that predict whether a crime might be gang-related, create pushback and Twitter storms.
But we are starting to see some changes, with top conferences calling for explicit consideration of a work's impact on society. At the same time, there is difficulty in creating a more organized ethics process as researchers are diverse in motives, backgrounds, funders, and contexts. Hopefully, if the field at least becomes more transparent about the impacts of its work, we might continue to see improvement on the ethics front.
That's all for this week's News Roundup. Stay tuned for a more in-depth discussion of recent events.
Thanks, Daniel, and welcome back, listeners. Now that you've had the summary of last week's AI news, feel free to stick around for a more laid-back discussion of this news by two AI researchers. One of them is myself, Sharon, a fourth-year PhD student in the machine learning group working with Andrew Ng. I do research on generative models, improving generalization of neural networks, and applying machine learning to tackle the climate crisis, as well as to medicine. And with me is my co-host...
Hi there, I'm Andrey Kurenkov, a third-year PhD student at the Stanford Vision and Learning Lab. I focus mostly on learning algorithms for robotic manipulation in my research. And speaking of research, we've got our first story, which is kind of amusing, maybe a little unnerving, titled, The AI Research Paper Was Real, the "Co-Author" Wasn't. So the summary here is that...
There was a paper where a prestigious academic, David Cox, who is the co-director of the MIT-IBM Watson AI Lab in Cambridge, Massachusetts, found that his name was listed as a co-author alongside researchers in China whom he didn't know. And this was on two different papers.
So kind of unusual. Usually when you're writing a paper, you know that you are a co-author. At least that should be the case. And this article notes that it wasn't until he threatened legal action that the publisher removed the name and issued a retraction. So, yeah, kind of curious, and a bit humorous maybe, that this whole story happened at all.
What does it make you think of, Sharon? Yeah, it's kind of crazy. I'm actually surprised this hasn't happened sooner, to be honest. I know there are a lot of senior researchers who don't know a lot of the papers they might be on.
because they have so many. And yeah, I was really surprised that this happened, but I'm also surprised it didn't happen earlier, to be honest, because I can see people just putting, you know, famous people on their papers to try to get accepted or something like that, which is really horrible and really makes me want to push for making conference submissions more blind. And by that, I mean, when things are put up on arXiv, they're also not completely blind, in a sense. So what are your thoughts here, Andrey? Yeah, as you say, I guess I don't find myself being too surprised given just the sheer quantity of papers that we have and the size of the field now. It isn't that surprising. And yeah,
I think we've seen, I guess, with size, there's different issues that have arisen in the field. So we know that, for instance, paper reviewing is overloaded. And the process by which papers get published is barely keeping up with this flood of research.
And so that's probably another reason why this got through the system is, you know, reviewing is quite flawed and really incapable of keeping up with research. We've also seen cases where
You have papers that are very questionable, like trying to guess gender from someone's voice, for instance, being published in prestigious conferences like NeurIPS. So, yeah, as you say, it's maybe not surprising, but it's yet another sign of what happens when you have a field that just blows up in size.
and isn't quite able to keep up and adjust properly. So yeah, I guess it's another signal that we do need to figure out how to do reviewing and how to scale up while still keeping academic integrity at the forefront.
Yes, absolutely. You're right that there are other areas of integrity, and this is an interesting one with co-authors. And certainly there have been a lot of rules created around co-authorship since, you know, people have bought co-authorship in the past. I remember there was a time when, once a paper got accepted to NeurIPS, startups would literally buy co-authorship for like $20,000 a pop or something like that. And that's why NeurIPS had to say, you know, we need to lock down, we have to freeze the author list at submission time, you know? And so that was definitely interesting. And I just can't believe that happened. I mean, there are so many weird, sorry, unethical things that are happening around all of this. Exactly. Yeah.
And on to our next story, we've got a topic that we often address. Maybe one we are a bit tired of, but one that deserves discussion. And this is...
The article, Clearview AI's plan for invasive facial recognition is worse than you think. And this article is actually based on a report from BuzzFeed News, which has a more descriptive title, which is: A Clearview AI patent application describes facial recognition for dating and identifying drug users and homeless people.
So we've been hearing a lot about Clearview and their efforts to basically be a search engine where you can enter someone's face and get their name. And this article is based on a patent application that was made public last week, which describes basically what Clearview is pitching its application for. And it's interesting because
After a controversy, Clearview stated that they will avoid transactions with non-governmental customers anywhere. But here in the patent, they say in many instances, it may be desirable for an individual to know more about a person that they meet, such as for business, dating or other relationship. It says that a strong need exists for an improved method and system to obtain information about a person.
There are also other applications, like using their technology to grant or deny access for a person to a facility, a venue, or a device. And lastly, they could also use it to identify a sex offender or homeless people, or to determine whether someone has a mental issue or handicap, which would influence how police respond to a situation.
So yeah, kind of a glimpse into what Clearview imagines. And maybe unsurprisingly for a tech company, they have some grand plans and the idea to be used for a ton of stuff. But as we've discussed before, Clearview is...
really racing ahead of regulations on these topics. And it's pretty concerning what they are hoping it to be used for in all these cases. What are your thoughts, Sharon? Yeah, I mean, it's not surprising that they know they need to lie low in the public eye, but they're still going to continue doing these things. So, I mean, that's the TLDR. That's Clearview for you. Yeah. Yeah.
I mean, their whole product is designed for this. I don't see how they're going to try to make it better or something. And the fact that their patent is about, like, oh, trying to figure out who are drug users or who's, you know, homeless and stuff, yeah, it just makes clear what it's being used for. Exactly. Yeah. And, um,
Here, the patent itself also describes what technology they're hoping to stake out. And likewise, it's quite broad. So it lists two generic concepts: a web crawler, which searches for images with faces, and facial recognition technology. So both things that are pretty well established.
And nevertheless, of course, Clearview, the company, wants to stake out as much IP as possible. So they have written this patent to be quite broad and, you know, to cover a ton of applications and technology. So again, another case where, you know,
The company is clearly ambitious and they're racing ahead to build facial recognition to be used by police and, seemingly, other businesses. And it's something to be aware of and to be concerned about, given that there is not really regulation on this front, or good safeguards to make sure there's no misuse, which of course we've heard about. It was misused originally, you know, different businesses and individuals had access before, and that led to controversy. And then that access was revoked. Right.
Well, on to a more positive note. Our next article is titled, This AI-Powered Gadget Could Completely Disrupt the Ridiculous Hearing Aid Market. And this is about Whisper, which is essentially a hearing aid using technology to be able to pick up sounds better for people who are hard of hearing. And this includes many elderly people, since hearing loss is actually one of the most prevalent service-related disabilities among military veterans, and something like over 5% of the world's population suffers from hearing loss. So a huge, huge problem, a huge market, and an underserved, not very crowded one. And I'm very glad to see that there's a company here going for it. Yeah, I agree. I think it's a great example of how AI can actually be beneficial
you know, in a very concrete way. So the idea here is that instead of having the sort of technology that exists now for hearing aids, which kind of just makes things louder and doesn't necessarily work that well at distinguishing different noises and improving the clarity of the sounds around you, here, the idea is that
Instead of just doing that, they use different algorithms to home in on sounds that you want to amplify and basically have a smart sort of hearing aid that is using natural language processing, audio detection, segmentation, and isolation to really make the augmentation work better.
Also, Whisper, this company, is thinking to do a subscription plan instead of charging thousands for the device. So yeah, really nice example of AI being used for something positive and in a way where without AI, this would not be possible. But with AI, it is. And I think in general...
There's a lot of applications for assistance for different needs. Of course, we've also seen applications for visually impaired people. We've seen even applications for people who have trouble moving, where we have assistive robotics. So yeah, great to be reminded that that's a whole area of AI where we don't have a reason to be suspicious or worry about misapplications of it.
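To give a flavor of the kind of audio processing described above, here is a minimal spectral-gating sketch: estimate a noise floor, then attenuate time-frequency bins that don't stand out from it before amplification. This is a generic, classical illustration, not Whisper's actual algorithm, which presumably relies on learned models; the synthetic test signal at the end is just for demonstration.

```python
# Rough sketch of one ingredient of "smart" hearing aids: separating speech-like
# content from background noise in the time-frequency domain before amplifying.
# Generic spectral-gating illustration only, not Whisper's algorithm.
import numpy as np
from scipy.signal import stft, istft

def enhance(audio: np.ndarray, sample_rate: int, noise_seconds: float = 0.5) -> np.ndarray:
    nperseg = 512
    hop = nperseg // 2  # default STFT hop for scipy (50% overlap)

    # Short-time Fourier transform: view the signal as time-frequency bins.
    _, _, spec = stft(audio, fs=sample_rate, nperseg=nperseg)
    magnitude = np.abs(spec)

    # Estimate the noise floor from an assumed speech-free lead-in segment.
    noise_frames = max(1, int(noise_seconds * sample_rate / hop))
    noise_floor = magnitude[:, :noise_frames].mean(axis=1, keepdims=True)

    # Keep bins that stand out above the noise floor, attenuate the rest.
    gain = np.clip((magnitude - 2.0 * noise_floor) / (magnitude + 1e-8), 0.0, 1.0)
    _, enhanced = istft(gain * spec, fs=sample_rate, nperseg=nperseg)
    return enhanced

# Usage with a synthetic signal: half a second of noise, then a 440 Hz tone in noise.
sr = 16000
t = np.arange(sr) / sr
noise = 0.1 * np.random.randn(sr)
signal = noise + np.where(t > 0.5, np.sin(2 * np.pi * 440 * t), 0.0)
cleaned = enhance(signal, sr)
```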
Right. I mean, I guess the worst thing that could happen is it detects something wrong, you know, like it amplifies something that's not correct. And yeah, I could imagine that could be bad in certain cases, though I don't really see that happening automatically unless they have some bias around, you know, oh, they can't enhance certain people's voices, like women's voices that have a higher pitch, for example. So, I mean, things like that could still happen.
But I'm guessing it would improve drastically beyond the current methods, and I hope they're thinking about things like that. Yeah, I think, luckily, the technology itself, using audio segmentation, is a topic that's been researched for decades, and then, of course, it got a lot better as it found a lot of applications in the past decade.
So, yeah, I think there is reason to hope that this works well and works much better than the existing solutions that don't use such technology. Right. Definitely. And on to our last story, back to something a bit less positive. I guess we have a good mix this week. The story is Google reshuffles AI team leadership after researcher's controversial departure.
And really, there's a whole bunch of stories aside from this one that we can get into. But to start with, I guess, the sort of first event was that there was a promotion of Marian Croak,
who has been a vice president at the company for six years, working on different projects, including bringing public Wi-Fi to railroad stations in India. So this person will run a new center focused on responsible AI within Google Research. Now, alongside this story, there was other news that came soon after,
that Margaret Mitchell, one of the co-leads of the ethical AI team at Google, has been fired. And this was two months after Timnit Gebru, who was also a co-lead, was also fired, or resigned, as Google characterized it at the time.
And it seems pretty clear that Margaret Mitchell protested the treatment of her colleague. We know that she was locked out of her account a few weeks ago when trying to sort through emails.
And while not necessarily surprising, it is sad to see another prestigious researcher, and Margaret Mitchell has a lot of research that has been very influential in this area, be fired and dismissed by Google. So, yeah, it's kind of sad to see that a team like this, which had such great leads, has gone through this
fundamental transformation in such a short time. What do you think, Sharon? Right. It is very sad. And unfortunately, the way I think Google had approached this promotion was also in this very...
very surreptitious, like very secretive way, because apparently employees in the group only found out from the, I think, Wired article or some kind of article online, as opposed to internally, though it has since been verified internally. And so I think Google is trying to go around this very gently. But at the same time, the more they try to
go around it gingerly, the more press it produces. And so I think they've done kind of a not-so-great job of keeping themselves quiet in this situation. Exactly. Yeah, it's great to see, I guess, other people being promoted who are probably quite qualified to make good contributions here. But at the same time,
it's pretty shocking, honestly. There have not been many cases where, you know, prominent researchers have had such developments with large companies. With Timnit Gebru, we know that it was over research, over a research paper, that ultimately led to her being fired. And now Margaret Mitchell stood up for her co-lead, and that also led to her being fired. So this is certainly maybe the biggest development for prominent researchers collaborating with a company, and also within responsible AI as a topic, which is really evolving. And these two people were really instrumental in developing a lot of the initial results there.
Yeah, it's kind of hard to overstate, I guess, if you don't have much context, how big a deal this is. But it's really quite enormous, I would say. Oh, absolutely. People are very much...
I suppose, judging Google for a lot of their actions right now and really rethinking it. I think I was actually talking to a VC recently and he told me that, you know, a ton of people are starting companies now and I can't help but think it's because...
People are, like, leaving these companies, you know, and thinking about, you know, what else could I be doing instead? I mean, obviously, that's not the only reason, but I could see a lot of people leaving. And I know a lot of people have been thinking about leaving, at least. So we'll see where this group goes. I wonder how they're going to mend themselves, or be what I think other people thought they were going to be, which is just to be, you know, a shell and kind of
a fake way for Google to say, hey, we work on ethics when they don't. I don't know. We'll see. Yeah, we'll see. But it's hard to imagine that Google will bounce back and be seen as a serious group in this area, given how things played out. Yeah, absolutely. I think they've definitely lost that standing with the community. Yeah.
Which is sad, which is very, very sad because they were probably the only ones who had a shot. Yeah. Prior to this, it was kind of notable how much research in this area they produced and tools as well. They really did do quite a bit of work, this ethical AI group. So this sort of sudden dissolution or at least loss of leadership
again, is pretty huge. And the only sort of silver lining, as we've said in the past, is hopefully it leads to these two very talented and influential researchers doing something on their own or at another organization that is more supportive. And then we get even better or even more output from them, even if right now I'm sure they're not very happy with how this played out.
Yeah, definitely. I mean, I definitely want to see, you know, where Timnit and others go, and see what they do next and show the world, you know, how to actually do ethics. Yep. And with that, we're going to wrap up. Thank you so much for listening to this week's episode of Skynet Today's Let's Talk AI podcast.
You can find the articles we discussed here today and subscribe to our weekly newsletter with similar ones at skynettoday.com. Subscribe to us wherever you get your podcasts, and don't forget to leave us a rating and review if you like the show. Be sure to tune in next week.