Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear from AI researchers about what's actually going on with AI and what is just clickbait headlines. I am Andrey Kurenkov, a third-year PhD student at the Stanford Vision and Learning Lab. I focus mostly on learning algorithms for robotic manipulation in my research. And with me is my co-host,
I'm Sharon, a third-year PhD student in the machine learning group working with Andrew Ng. I do research on generative models, improving generalization of neural networks, and applying machine learning to tackle the climate crisis. And Sharon, I believe you actually just submitted a paper to NeurIPS this past week, which you mentioned you were working on before. So I hope that went well and that you've been recuperating.
Yes, it's been a roller coaster to say the least. And it's hard to slow down the momentum once you've been going for so long; for probably the past month, very little sleep. But yes, I did get my papers in, and I had, I think, three this time.
Wow. I usually try for one per deadline, so that is commendable. Well, it was mainly one. But speaking of NeurIPS, our first article has to do with the Black Lives Matter movement and how NeurIPS responded to it.
So the article is AI Conference NeurIPS Extends Paper Submission Deadline Amid BLM Protests.
And so at a high level, the deadline for submitting papers to this AI mega-conference called NeurIPS was extended by 48 hours by the committee. This was to give people affected by the ongoing protests in the U.S. more time to finish their work. And of course, there have been many protests outside the U.S. as well.
And so in its announcement of the extension, the NeurIPS board said: Today, NeurIPS grieves for its Black community members devastated by the cycle of police and vigilante violence. Today, NeurIPS mourns for George Floyd, Breonna Taylor, Ahmaud Arbery, Regis Korchinski-Paquet, and thousands of Black people who have lost their lives to this violence. Today, NeurIPS stands with its Black community to affirm that today and every day, Black lives matter.
And so in a further demonstration of support, researchers Erin Grant and Nicolas Le Roux created a list of AI mentors that Black people working on papers can contact for advice before the NeurIPS deadline. And for every person giving help, both Le Roux and Google AI chief Jeff Dean have pledged to donate $1,000 to the Black in AI group.
Yes. So both the extension and the statement, and some of these additional initiatives from AI researchers, have, I think, been really good to see. In some ways the ongoing protests and issues in the US are a bit separate from AI, and for people who are not in the US it may be hard to understand, but if you are a minority, or know people who are, or just live in an American city, it's impossible not to be impacted by what's happening right now. And it's taking a heavy psychological toll on many people. So the extension was a welcome reflection of that, and the statement, I think, was a welcome reflection of that.
And yeah, this offer of support was also quite welcome. And I would imagine, Sharon, that you also found this to be a welcome surprise or a welcome gesture. Yes, definitely. I think this is mainly to alleviate some of the disproportionate inequity that occurs when an event like this is going on; it would disproportionately impact Black researchers' ability to submit, right? So I think this was a good move on NeurIPS' part. And it also enabled people
like me, who are not Black but support Black Lives Matter, to spend some time protesting. Yes, and again, to make it clear, it's just so hard to focus. As you said, you have to really be heads-down working on your paper to finish it.
And it seems very hard to focus on that while protests and everything else are going on. So it was good to see this extension. In fact, another conference, EMNLP, which is a huge conference as well, also extended its deadline by a few days. Yeah, so this is, I think, a welcome sign that AI as a community is becoming able to handle these issues and address them head-on. And it follows a trend of trying to diversify the community, trying to be more inclusive, trying to help minorities be more seen and have their work better represented.
There are organizations like Black in AI that are specifically tackling how difficult it is for minorities and Black people to do well in this community. So hopefully this is just another sign that the efforts needed for Black representation and opportunity, within AI and within society more broadly, are being made.
And on that note, we can move on to our next article, which is on the same topic. It's titled The AI Community Says Black Lives Matter, But More Work Needs to Be Done, and it was published on VentureBeat.
So this article also covered the NeurIPS deadline extension and the statement that was made. But it also goes into a little more detail on the state of the field of AI and its diversity issues. So to quote from it, it says that for the AI community,
acknowledgement of a movement is a start, but research shows that it, much like the rest of the tech industry, continues to suffer from a lack of diversity. According to a report published by New York University's AI Now Institute in April 2019, only 2.5% of Google's workforce was Black, while Facebook and Microsoft were each at 4%.
In addition to that, AI has some pretty specific issues to deal with when it comes to representation.
For instance, a National Institute of Standards and Technology study last September found that facial recognition systems misidentify Black people more often than white people. So as we build out new technology based on AI, we risk perpetuating historical biases against Black people and essentially serving white people better than Black people and other minorities. To tackle that, the AI Now Institute has proposed that the AI community needs greater transparency with respect to salaries and compensation, needs to publish harassment and discrimination reports, and needs, again, more transparency around hiring practices.
And others are also calling for targeted recruitment to increase employee diversity, and for commitments to bolster the number of people of color, women, and other underrepresented groups in leadership at AI companies. I think this is all fine and good, and I would love to see it actually happen and be implemented. Having been in the Bay Area for about four years now, I haven't seen that many Black people in tech, to be honest. And it sucks. I'd rather see much more diversity, diversity in many different ways. And tech, I sense, has a little bit of a monoculture going on. So...
Yeah.
Saying things is not as effective as actually trying to do something. Actually, it struck me when you were talking about this in our lab meeting: our lab is fairly big, but there is underrepresentation even in that small group; there are not that many Black people among us.
So I guess the silver lining is this is reminding us to pay attention and actually try to do something. And I've also seen a lot of other prominent researchers express support and caring in the last week or two via Twitter. So.
Of course, it's a stressful and rather emotionally fraught time, but hopefully through it we can actually get to some improvement. I've seen a lot of solidarity in the community, and I've also seen several AI leaders come forth and state their opinions publicly. And that's been really compelling. One is definitely from Andrew Ng, my advisor, and another is from Yoshua Bengio, and I'm sure there are others too. I've not been on social media much due to the NeurIPS deadline, but yeah,
I've admired that. And I hope people realize this is not something they necessarily have to be all professional about; I saw something about how being neighborly and being kind to fellow human beings should come first. So I found that pretty compelling. As did I. Yeah, actually,
I was not submitting, but just reading the statement from NeurIPS: for a rather professional, large conference, it seemed to me a fairly direct, not careful, not PR-like, very compassionate response. In one way it isn't doing much, but in another way it is very directly addressing the situation, which was commendable. And so now, switching gears a little bit, we are going to talk about an article called Deepfakes Are Going to Wreak Havoc on Society. We Are Not Prepared. And this was published in Forbes.
So deepfakes are already pretty scary, and they're enabled by recent advances in AI. Deepfakes essentially allow synthesized videos of all sorts of people, including politicians, as well as fake audio recordings and other potentially harmful media.
And the amount of this content is growing; we're already beginning to see how it could be used for ill purposes. An Indian politician actually used deepfakes as part of his campaign, and Donald Trump recently retweeted a deepfake of Joe Biden.
Given that deepfakes are getting better, and that due to the accessibility of the technology they can be created by essentially anyone, or at least many more people, the stakes are very high. The Brookings Institution summed up the social and political dangers they pose, among them manipulating elections, undermining public safety, and inflicting damage on the reputation of prominent individuals. So as a result, U.S. lawmakers are beginning to pay attention to this,
while experts warn that the examples we've already seen of their use are canaries in a coal mine. So how do we actually combat the rise of deepfakes? Well, unfortunately, the legal frameworks we have now are limited against the anonymous internet that we have. And the most effective short-term solution may have to come from the major tech platforms taking steps to limit their spread.
Yeah, so to expand a little bit, this article is in many ways just a great summary of the state of deepfakes. But it also notes that passing laws here is tricky. For instance, one idea might be to legislate so that companies would be legally required to work against deepfakes and other damaging, misinforming content on their sites. But this goes against a long-held legal framework for regulating companies, where they are essentially not responsible for content that users post on their platforms.
So it's very tricky. And quoting from the article, it says that in the end, no single solution will suffice. An essential first step is simply to increase public awareness of the possibilities and dangers of deepfakes; an informed citizenry is a crucial defense against widespread misinformation.
And so as the U.S. presidential election draws near, I think we'll all need to be a little more careful and wary, because there may well be many more deepfakes coming around. Yes. And I think this is actually extremely worrisome right now, when people very much believe the videos they see or the audio recordings they hear. If, over time, deepfakes become so widespread that people grow almost immune to them, or at least less affected and more skeptical of them, much like they are of written news now, it might be slightly less concerning. But right now people really do believe some of these things, so this transition period, while legal frameworks are still trying to adjust, is probably the most concerning time for deepfakes. I feel like it's inevitable that deepfakes will essentially become commonplace, but hopefully as they become commonplace, people will be dispelled of the illusion that they are real.
Yeah, I guess this plays into a larger issue on social media, especially on Twitter: you may read something or watch a video and take it to be accurate, and not, you know, edited to be misleading, or in this case actually generated by AI to be misleading.
And so, as you say, hopefully having this more sophisticated way to trick people will lead to people just double checking the veracity of whatever they stumble upon instead of just trusting it. And that is also what this article is kind of pointing out.
That being said, people don't really double check some of the news articles they read or rather the news headlines they read. So it's quite possible that this just exacerbates the echo chamber and confirmation bias.
Yeah, it's tricky. We've had some discussions of this at Stanford, with some Stanford professors and others attending. And one part of the solution is to actually have more fact-checkers, and to have newspapers attach, let's say, proof of veracity to various documents like photographs.
So we'll need some sort of bodies that can actually be trusted or else truth is dead. And hopefully, I guess our journalistic backbone as a society also strengthens in response to these challenges.
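As a rough illustration of what attaching "proof of veracity" to a photograph could look like technically, here is a minimal, hypothetical sketch, not something discussed in the episode: a publisher signs the bytes of an image, and anyone holding the publisher's public key can check that the image hasn't been altered. It uses the third-party Python cryptography package; key management and distribution are left out.

```python
# Hypothetical sketch: a publisher attaches a digital signature to a photo so
# others can verify it came from that publisher and wasn't modified.
# Requires the third-party "cryptography" package (pip install cryptography).

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher generates a keypair once and publishes the public key.
publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()

photo_bytes = b"...raw bytes of the published photograph..."  # placeholder data

# Publisher signs the photo and distributes the signature alongside it.
signature = publisher_key.sign(photo_bytes)

# Anyone can later check the photo against the signature and public key.
try:
    public_key.verify(signature, photo_bytes)
    print("Photo matches the publisher's signature.")
except InvalidSignature:
    print("Photo was altered or did not come from this publisher.")
```

Of course, this only proves the file matches what some keyholder signed; the harder problems are deciding which keyholders to trust and getting platforms to surface that information, which is the "trusted bodies" point above.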
But enough of that; I guess we'll see whether deepfakes actually do become more common. For now, we're going to turn our attention to the next article, which is titled Eye-Catching Advances in Some AI Fields Are Not Real, from Science Magazine. And like the last article, this is a bit of an overview of the state of the field of AI, in this case of how many times we see big headlines and big results that in actuality are much more minor than they seem.
So you might hear about a new technique leading to superhuman performance on some game. And though this looks impressive, it doesn't amount to much actual progress. And in fact, existing techniques, when used appropriately, might be as good or better.
More specifically, this article covers findings such as a 2019 meta-analysis of information retrieval algorithms used in search engines, which concluded that the high-water mark was actually set back in 2009. Another study in 2019 reproduced seven neural network recommendation systems of the kind used by media streaming services and found that six failed to outperform
much simpler, non-neural algorithms. So broadly speaking, for some papers that claim to make advances, when you actually tune and use the prior work appropriately, you find that the prior work is just as good or better.
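To make that point concrete, here is a minimal, hypothetical sketch in Python with scikit-learn, not taken from any of the studies mentioned: the dataset, model choices, and hyperparameter grid are made up for illustration. The idea is simply that a newer model compared against an untuned simple baseline can look like progress, while the same baseline with a proper hyperparameter search often closes much of the gap.

```python
# Illustrative only: compare a "fancy" model against an untuned simple baseline
# and against the same baseline with a proper hyperparameter search.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data; any real benchmark would go here instead.
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "New" neural model with default settings.
mlp = MLPClassifier(max_iter=500, random_state=0).fit(X_train, y_train)

# Simple baseline, first untuned, then with a small hyperparameter search.
untuned = LogisticRegression(max_iter=1000).fit(X_train, y_train)
tuned = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0, 100.0]},
    cv=5,
).fit(X_train, y_train)

print("MLP:             ", mlp.score(X_test, y_test))
print("Untuned baseline:", untuned.score(X_test, y_test))
print("Tuned baseline:  ", tuned.score(X_test, y_test))
```

The numbers will depend entirely on the data, which is exactly the point: how much effort went into tuning each side matters as much as the headline comparison.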
Yeah, I'm trying to think what we can even say here. Not much. I guess, I mean, we've discussed this a little bit in the past: how the research methodology used in many AI papers is not quite strong enough, how we all rush to publication, and so on. So this is not particularly surprising, but it's important to be aware of as someone outside of AI looking in.
Sharon, I think when we discussed this before, you said you saw a lot of papers that seemed flimsy and also thought that research methodology should be stronger. So I imagine these meta-analysis papers are not surprising to you.
Yeah, it also makes me think about prior methods that have been largely abandoned or only touched on a couple of times since, because so much focus now is on deep learning. For example, genetic algorithms: I've wondered where that research line has gone. I've recently spoken to some people doing RL who still use genetic algorithms as a kind of baseline, but only a very basic genetic algorithm. And I know there is work being done merging the two, but there's very little focus on these prior methods. I think over time, hopefully, things will come together and we will find some kind of hybrid that does succeed. But it's possible that some of this is hype, and it's possible that
even running the same model on different GPUs or different machines will get you different results, or a different implementation of the model will vary because it was initialized slightly differently, or something like that. Something we've found is that the PyTorch ResNet-18 is different from, I think, the TensorFlow one. And there are other implementations that are all different, even though they all have the same underlying architecture from the ResNet paper. So it's very arguable what we should be trusting a lot of the time here, when so many things are very empirical.
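To make that reproducibility point a bit more concrete, here is a minimal, hypothetical sketch in Python with PyTorch and torchvision (assuming a recent torchvision; not something from the article): with the same seed, the same implementation reproduces itself exactly, but change the seed, the hardware, or the implementation and the numbers drift.

```python
# Illustrative only: same code and same seed reproduce exactly; a different
# seed (or hardware, library version, or reimplementation) gives different numbers.

import torch
import torchvision


def build_and_eval(seed: int) -> float:
    torch.manual_seed(seed)  # controls weight initialization and input generation below
    model = torchvision.models.resnet18(weights=None)  # randomly initialized (torchvision >= 0.13)
    model.eval()
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        return model(x).sum().item()


print(build_and_eval(0) == build_and_eval(0))  # True: same seed, identical output
print(build_and_eval(0) == build_and_eval(1))  # False: different init, different output

# On GPU, extra flags are typically needed to get deterministic kernels, e.g.:
#   torch.backends.cudnn.deterministic = True
#   torch.backends.cudnn.benchmark = False
# And even then, nothing guarantees bit-identical results across different
# hardware, library versions, or reimplementations of the same architecture,
# which is part of why reported numbers vary between papers.
```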
Yeah, exactly. And it's a little bit funny: when you actually work in AI research, you realize that AI researchers are the most cynical people with regards to AI advancements. Often when you read a paper, you're a bit skeptical of the precise numbers and everything, and are maybe just curious about the idea.
So as an outside observer, it's important to be aware of these things: that there is a lot of tweaking, a lot of parameters, a lot of accidental improvements that may not necessarily hold up. Hopefully, as a field, we are discovering things over time. But it's a messy process, and individual claims and announcements should be regarded with healthy skepticism. Yes, I completely agree. And with that, thank you so much for listening to this week's episode of Skynet Today's Let's Talk AI podcast. You can find the articles we discussed here today and subscribe to our weekly newsletter with similar ones at skynettoday.com. Subscribe to us wherever you get your podcasts, and don't forget to leave us a rating if you like the show.
Be sure to tune in next week.