
Bias in Voice Recognition, Debates in AI, and Robotics in the Time of COVID-19

2020/4/4

Last Week in AI

People
Andrey Kurenkov
Sharon Zhou
Topics
Andrey Kurenkov: Voice recognition systems show a racial disparity, with far lower error rates for white users than for black users. This bias stems from a lack of diversity in the training data, as well as potential biases of the AI systems' creators. Moreover, the bias creates a vicious cycle: the services are used mostly by white people, which makes it hard to collect data from black users, so the services continue to be used mostly by white people. Even with enough data, a carelessly trained model will still perform worse for some groups. The data bias problems familiar from AI research are now showing up in real-world applications. AI researchers need to communicate better about the issues that must be considered when deploying AI systems. Countering deepfakes requires pairing technical tools with the existing techniques of fact-checkers and journalists. Deepfake detection tools should be designed with user experience and explainability in mind. Fighting disinformation requires a variety of tools, including ones for simple, low-tech fakes. Deepfake research and development should not be halted; instead, the focus should be on positive applications and on preventing malicious use. The speed at which information spreads during the COVID-19 pandemic poses a challenge for disinformation detection. The core of the debate over AI's future direction is whether deep learning needs to be combined with classical symbolic AI. Deep learning has made great progress but now seems to be hitting a plateau and needs new innovations to tackle some complex problems. Different approaches need to be tried to address deep learning's limitations.

Sharon Zhou: If AI systems are not designed with careful consideration of different groups, they can produce unfair outcomes. Even with enough data, a carelessly trained model will still perform worse for some groups. The data bias problems familiar from AI research are now showing up in real-world applications. AI researchers need to communicate better about the issues that must be considered when deploying AI systems. Countering deepfakes requires pairing technical tools with the existing techniques of fact-checkers and journalists. The democratization of deepfake technology lowers the barrier to using it and increases the potential for malicious use. As creating deepfakes becomes democratized, corresponding detection and filtering mechanisms are needed. The speed at which information spreads during the COVID-19 pandemic poses a challenge for disinformation detection. The core of the debate over AI's future direction is whether deep learning needs to be combined with classical symbolic AI. Deep learning has made great progress but now seems to be hitting a plateau and needs new innovations to tackle some complex problems.

Chapters
The podcast discusses a study revealing significant racial bias in commercial voice recognition systems, highlighting the disparity in error rates between white and black users and the implications for privacy and performance.

Transcript


Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear from AI researchers about what's actually going on with AI and what are just clickbait headlines. This week we'll look at bias, disinformation and hype in AI and talk about yet more stories about coronavirus and AI.

You can find the articles we discussed here today and subscribe to our weekly newsletter with similar ones at skynetoday.com. I am Andrey Kurenkov, a third-year PhD student at the Stanford Vision and Learning Lab. I focus mostly on learning algorithms for robotic manipulation. And with me is my co-host...

I'm Sharon, a third year PhD student in the machine learning group here working with Andrew Ng. I do research on generative models, improving generalization of neural networks and applying machine learning to tackling the climate crisis. All right, Sharon, we are now in the third week of shelter in place here in the Bay Area. Feels like this past month has been

you know, several months at least, maybe a year. Absolutely. Time is moving really slowly. Yeah, it's very surreal. How have you been adjusting to this new reality?

I think I've actually passed my peak anxiety point. So this is the new normal quarantine. I have also been thinking about, you know, how can we help mitigate some of these effects as we see other countries that are, you know, a few weeks or months ahead of us.

It's a little bit ominous to think about, but how do we make sure we either don't get into their situation or that we step into their shoes and follow in their footsteps? Yeah, I think that's kind of where we are as a country too, to some extent, where we are finally at the point of sort of understanding the scale of this thing.

And just coming to accept that things are going to be strange and different for a while, which in a way makes it easier to get used to it.

But I guess for now, we'll try to talk about some AI news to distract ourselves from all this virus stuff and just focus on the things we find interesting. So to get going, our first set of articles will be on issues of AI, which will include bias, disinformation and

flaws of our current techniques. The first piece we'll discuss is titled, There is a Racial Divide in Speech Recognition Systems, Researchers Say, and it was released in the New York Times. And so we're already aware of bias, racial and otherwise, that pervades AI systems being trained and deployed today. But this is going to be a very specific area of speech recognition systems that the New York Times has pointed out.

Interestingly, the disparate treatment different groups receive from AI systems even extends to speech. So speech recognition systems from five of the world's biggest tech companies, including Amazon, Apple, Google, IBM, and Microsoft, make far fewer errors with users who are white than users who are black, according to a study published on Monday in the Journal Proceedings of the National Academy of Sciences, or PNAS.

So this includes our Amazon Echo, Alexa. This includes Siri. This includes OK Google, IBM Watson, Microsoft Cortana. Yeah. So to get into a little more detail, there's a quote from the study that says that the systems misidentified words about 19% of the time with white people, but with black people, mistakes jumped to 35%. So a huge difference.
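As a side note for readers who want to see how this kind of disparity is measured, here is a minimal Python sketch that computes word error rate separately per group. The groups and transcripts below are invented for illustration, not data from the PNAS study.

```python
# Minimal sketch (not from the study): computing word error rate (WER)
# separately per group to surface the kind of disparity described above.
# Group labels and example transcripts are made up.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical evaluation set: (group, reference transcript, ASR output)
samples = [
    ("group_a", "turn on the kitchen lights", "turn on the kitchen lights"),
    ("group_a", "set a timer for ten minutes", "set a timer for ten minutes"),
    ("group_b", "turn on the kitchen lights", "turn on the chicken lights"),
    ("group_b", "set a timer for ten minutes", "set a time for ten minute"),
]

totals: dict[str, list[float]] = {}
for group, ref, hyp in samples:
    totals.setdefault(group, []).append(word_error_rate(ref, hyp))

for group, rates in totals.items():
    print(group, f"mean WER = {sum(rates) / len(rates):.2f}")
```

This per-group comparison is the same kind of measurement behind the 19% versus 35% figures quoted above.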

And interestingly, this was also shown before a few years ago for facial recognition systems. So the same cloud companies, Amazon, IBM, Microsoft, also had the same problem already, and they've already mitigated that for their vision AI systems, but still have the same problem with the speech recognition systems.

And relatedly, separate tests have also uncovered both sexist and racist behavior in chatbots, translation services, and other systems designed to process and mimic the written and spoken language. And the cause of a lot of this is the bias in data. So the bias of

in essentially all the data that we're training our AI systems on, as well as potentially bias in the creators of the AI systems themselves. In fact, the Stanford study indicated that leading speech recognition systems could be flawed because companies are training the technology on data that is not as diverse as it could be. They're learning their task mostly from white people and relatively few black people.

And it's interesting to also recognize that it's not just that gathering the right data is difficult. It's that these companies might not have the motivation or incentives to do so unless we push for it. And also another very, very important problem that the article points out is that the companies actually face a chicken and egg problem. If their services are used mostly by white people,

They'll have trouble gathering data that serve black people. And if they have trouble gathering this data, the services will continue to be used mostly by white people. And it becomes this positive feedback loop that could be pretty terrible, especially with a lack of awareness. So I'm glad that this paper has come out to state these biases in speech recognition.

Yeah, this is a fairly high profile article. It is in the New York Times. And it does showcase that if not thought through carefully, if you don't actually make an intentional choice in making your AI system work well for not just the average case, but different groups, different populations, different types of people, you might get into this sort of issue.

I think here in particular, we can think of there's different minority groups with different accents, right?

And so even if you do get enough data and you think to include different groups, if you don't train your models carefully, just because there are fewer people of a certain type, the model will be worse for those types of people. So you really need to be careful. And this goes beyond research. This is deployment of these AI systems that are touching people right now, today, yesterday and tomorrow.
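One common mitigation along the lines Andrey describes is to reweight training examples so a smaller group is not drowned out when averaging the loss. The sketch below is a hypothetical illustration of inverse-frequency reweighting, not a description of any company's actual pipeline; the group names and loss values are invented.

```python
# Minimal sketch (hypothetical): reweighting per-example losses by inverse
# group frequency so a minority group is not drowned out when averaging
# the loss over an imbalanced training set.
from collections import Counter

def group_weights(groups: list[str]) -> dict[str, float]:
    """Weight each group by 1 / frequency, so every group contributes
    roughly equally to the average training loss."""
    counts = Counter(groups)
    n = len(groups)
    return {g: n / (len(counts) * c) for g, c in counts.items()}

# Imbalanced toy dataset: 90% of examples from group_a, 10% from group_b.
example_groups = ["group_a"] * 90 + ["group_b"] * 10
per_example_loss = [0.2] * 90 + [0.9] * 10  # group_b is modeled worse

w = group_weights(example_groups)
plain_loss = sum(per_example_loss) / len(per_example_loss)
reweighted_loss = sum(w[g] * l for g, l in zip(example_groups, per_example_loss)) / len(per_example_loss)

print(f"unweighted mean loss:  {plain_loss:.3f}")     # dominated by group_a
print(f"reweighted mean loss:  {reweighted_loss:.3f}")  # group_b now matters equally
```

The point of the toy numbers is simply that an unweighted average can look fine even when one group is served badly; the reweighted average makes that gap visible during training.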

Because in research, oftentimes our data sets are very biased because these are like the only data sets that we've been able to collect. And this is a huge problem in research. And then now it's being seen in deployment in the real world. So it's actually pretty bad that this trickles into real world data sets as well. Yeah, it's been interesting. We discussed a little bit an episode or two ago about how the conference NeurIPS has requested authors to include a statement of impact

And the thinking there was precisely for this kind of scenario, where when you create an algorithm, you actually need to lay out all the potential problems when you deploy it out into the real world. And you need to be mindful of many different things. So us as researchers, we need to be better about communicating the various things that need to be accounted for when you develop a product. And people who want to develop products or start companies need to really start

start keeping track of things like this. Because if giant companies are getting it wrong, someone who's starting from scratch

has, I think, even a harder time making sure to avoid things like this. On the flip side, potentially this could maybe have a silver lining of protecting the privacy of those whose voices cannot be picked up on by these systems. That's something I do wonder about, whether or not this might be able to protect privacy or whether this will actually exacerbate a lot of other situations.

It's something I have wondered about. I don't know if you have thoughts on that, Andrey. Yeah, it's a tricky thing. I think the ideal world would be where it works perfectly for everyone, but everyone can opt out if they want. So right now, many people are starting to be worried about having a Siri or Google Home when you're visiting someone and it's just listening to you and picking up your voice in the background.

Ideally, the whole kind of field needs to evolve to be able to deal both with privacy and performance at the same time. But it is challenging. But I think luckily, I don't know about you, Sharon, in my research, I don't have to work with people quite yet. I work with robots and train them in a simulation. So at least I can kind of rest easy and not worry about these things for now.

I think I get to think about humans a little bit more. Which brings us to our next article, Research Summary, The Deep Fake Detection Challenge, Insights and Recommendations for AI and Media Integrity.

And this is a summary of a paper put out by the Partnership on AI on the deep fake detection challenge. So in an increasingly automated world, we'll see further use of deep fake technology to push malicious content onto people

And the deep fake detection challenge is meant to surface effective technical solutions to mitigate this problem. And deep fakes are essentially synthetic media. So any media that is generated by an AI system or that is synthesized. So there was an Obama deep fake sometime previously, as well as a Mark Zuckerberg deep fake. So basically a

a fake talking head of Obama and Mark Zuckerberg that seemed real, making statements that they did not say. And so this is really troubling for media integrity. The mere existence of deepfakes causes a loss of trust in the media, as real content can be dismissed as fake news. And the Partnership on AI, through their AI and Media Integrity team, has compiled a helpful list of recommendations to improve this deepfake detection challenge that was put out by Facebook.

but the lessons also apply widely to any others doing work in this space. Yeah. So these recommendations come from the AI and Media Integrity Steering Committee, which has representatives from nine Partnership on AI partner organizations, and which actually spans civil society, media, and technology. So that includes Amazon, but also the BBC, CBC/Radio-Canada, Facebook, Microsoft, The New York Times,

So kind of a mix of tech, media, and civil society, which is pretty good actually. The recommendations of this report are a little bit nuanced. So they're not just saying we need better detection. We're not just saying we need better algorithms to know whether something is a deepfake or not, because you just end up in an arms race between detection and evasion.

So their claim is that the detection tech needs to be paired with existing techniques that fact checkers and journalists already use in determining whether something is authentic or synthesized. I think something that is also really striking for me from this article is that the prize money for this challenge, some of it will be allocated towards making better design from a user experience and user interface perspective.

So this includes explainability criteria so that non-technical users are able to make sense of these interventions, as well as highlights of fake content, such as bounding boxes around regions of manipulation. So this is very much focused on how usable and how useful to the public this kind of tool will be as an entire package, not just on detection, not just on running an algorithm for detection.
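To make the bounding-box and explainability idea a bit more concrete, here is a hypothetical sketch of the kind of structured report a fact-checker-facing detection tool might produce. The field names, score threshold, and example values are invented rather than taken from the challenge specification.

```python
# Hypothetical sketch of a fact-checker-facing detection result: a score plus
# localized evidence, rather than a bare "fake / not fake" label. The field
# names and the 0.7 threshold are made up, not part of the actual challenge.
from dataclasses import dataclass, field

@dataclass
class ManipulatedRegion:
    frame: int                      # video frame index
    box: tuple[int, int, int, int]  # (x, y, width, height) in pixels
    note: str                       # human-readable explanation for the flag

@dataclass
class DetectionReport:
    manipulation_score: float       # 0.0 = likely authentic, 1.0 = likely synthetic
    regions: list[ManipulatedRegion] = field(default_factory=list)

    def summary(self) -> str:
        verdict = "likely manipulated" if self.manipulation_score > 0.7 else "no strong evidence"
        return f"{verdict} (score {self.manipulation_score:.2f}, {len(self.regions)} flagged regions)"

report = DetectionReport(
    manipulation_score=0.83,
    regions=[ManipulatedRegion(frame=412, box=(220, 96, 180, 180),
                               note="blending artifacts around the mouth")],
)
print(report.summary())
for r in report.regions:
    print(f"  frame {r.frame}, box {r.box}: {r.note}")
```

The design intent described in the episode is exactly this: surface evidence a non-technical reviewer can inspect, rather than only a probability.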

Yeah, it's very interesting. And it relates to a point made here as well, which is that in addition to deepfakes, really fancy deepfakes made with AI that, for instance, mimic someone's voice or actually create a video, you can also have disinformation that's very simple. So there was an example where a video of the politician Nancy Pelosi was slowed down a little bit to make her seem like she was slurring her words and was a bit drunk.

So that's very simple. There's no AI involved. You're just slowing down a video, but that is still a type of misinformation. So the combination here is you don't just want fancy techniques to deal with these really advanced, complex AI deepfakes. You want to have a kind of suite of tools that fact checkers and journalists can use for different types of misinformation, including these simple ones.

So a lot of my research actually is focused on evaluating generative models and generative models are a class of AI models that essentially contribute to building deep fakes. So I have thought quite a bit about this, both within my research as well as within the field broadly. And people have approached me asking, you know, oh, now we can create all these deep fakes. This is so different from before. And

Something that does strike me is that I think the major difference right now with these deep fakes is the democratization of this technology. It's that it's easier for an individual to build and create this technology than it was before, because before it was still possible using graphics technology, using really good Photoshop, but you would need someone who is an expert.

Now you can have someone who is much less of an expert be able to do this. And that encourages and increases malicious use as well as makes it very difficult to prevent. Yeah, so I think...

This is often brought up in discussions that the first time you see a deep fake, you might be concerned that, oh, wow, this is a new type of spam. We can't tell what's true or false. This is the end of truth. And when you think about it a little more, you realize, oh, well, we have already had the ability to create really convincing fake videos or fake audio or fake images for a while. As you said, we've had Photoshop, we've had graphic software, etc.

It's just that now anyone, even without any advanced training or advanced software, can do it. And so we need approaches that can scale up, given this ability to generate a lot of misleading content that is kind of convincing.

I think basically, now that creating deepfakes is democratized, we need a democratized mechanism to also detect and catch them and filter them. Yeah, actually that reminds me, this relates in an interesting way to the present moment, where if you log on to Twitter or Facebook, they have little banners talking about being informed about the coronavirus, because

I think now is a very interesting moment in the sense of everyone is becoming informed about this issue very rapidly. There's a lot of news, a lot of opinions, a lot of articles being spread. And so it's sort of an interesting test case to see

for misleading things, for information people really need to know. How do these systems, how do these platforms handle making sure people are informed in an accurate way? That's true. That's true. It's interesting that as information unfolds rapidly, rumors are able to build, right? Fake news is able to build much more quickly. And as a result,

one of maybe the most effective ways right now is for Facebook and Twitter and some of these media platforms to just put a banner out and say: be skeptical, do not trust everything you see, because we can't catch everything. But we can try to prevent you from believing every single piece of news you do see in the news feed that we have presented to you.

Yeah, this also relates to several weeks ago, Twitter started rolling out their feature for flagging misleading content, and they have already done it in several cases. So as we are in this coronavirus situation and nearing the US election, the problem of online disinformation will only increase. And we've actually already seen a case of using deepfakes, a really fancy AI technology,

in politics. So we've seen, I believe, an Indian politician use it for campaigning, to generate deepfake videos of him speaking in different dialects and languages. So it's a very rapidly evolving situation. It's good that the Partnership on AI brought together such different perspectives to make these recommendations. And I guess now that they have these recommendations, the hope is that

Facebook and Microsoft and Twitter and so on can take them into account when getting ready for the implications of the technology. And something that is also interesting is that I think a week ago or so, Facebook had a bug and was catching nearly any piece of content

that was posted about coronavirus and tagging it as fake. And a lot of people got very upset at Facebook for this. So basically, falsely labeling something as fake also riles up the public. So being careful about this is pretty important, I would say. Actually, I had a few of my posts tagged. I think it was like four or five. Yeah.

And I immediately thought this must be a flawed algorithm or something, but it was pretty amusing. And we've actually seen several news articles discuss how people that usually do the checking who flag misleading posts, humans, now are unable to come to work, as is true in many industries.

So these companies like Facebook are relying on algorithms more than they have before. So they are forced to really do the thing that you said of democratizing and scaling up fact-checking via AI. And it is, let's say, a little buggy.

Now, one thing to mention in this whole discussion that might be interesting is it's sometimes brought up: why not just stop the development of deepfake technology? Why not stop developing GANs if we're going to have all this misleading content? And another thing noted in this piece is that there are actually many reasons to develop this technology. So there are positive applications

of GANs for creating art, for creating animations, entertaining media, many pro-social uses. And so it really needs to be a balance of sort of avoiding these potential malicious uses while still enabling the benefits of the technology. Given that there are positive applications of such models,

The article does point out that this process of multi-stakeholder input should happen at an early stage, allowing for meaningful considerations to be incorporated and dataset design to be done appropriately to counter bias and fairness problems. Yeah, as you just mentioned, bias is a thing with speech recognition and other systems. So we don't want it to be a thing here. And speaking of bias and problems with our present day AI algorithms,

Our next piece here is titled A Debate Between AI Experts Shows a Battle Over the Technology's Future. And it covers a debate that happened on March 26 at MIT Technology Review's annual EmTech Digital event, in which Gary Marcus and Danny Lange came together to debate how AI should evolve into the future.

And Gary Marcus is quite known in the field of AI for having a strong opinion. He is a professor emeritus at NYU and the founder and CEO of Robust AI.

He's a well-known critic of deep learning in particular, and his book called Rebooting AI, which was published last year, argues that AI's shortcomings are inherent to the very technique and that researchers must therefore look beyond deep learning and combine it with classical or symbolic AI. And these are systems that encode knowledge and are capable of reasoning. And this is often AI known as...

or people often think of it as more old-fashioned AI. Yeah. So to take a bit of a step back, deep learning, at a very high level, is basically this idea that you have a complicated

set of parameters, so numbers that you can tweak and tune to accomplish different things, and you can get a whole bunch of data and go through an optimization process to tune these parameters or numbers, and then basically get a model that does what you want, just based on training and optimization from the data.
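To make that "tweak and tune parameters through an optimization process" idea concrete, here is a tiny, hypothetical Python sketch of gradient descent fitting a single parameter to toy data. Real deep learning is this same loop scaled up to millions of parameters arranged in many layers; the data and learning rate below are made up purely for illustration.

```python
# Heavily simplified illustration of "tune parameters from data via
# optimization": gradient descent on a single parameter w so that
# prediction = w * x fits some toy data.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (input, target) pairs
w = 0.0              # the "parameter we tweak and tune"
learning_rate = 0.01

for step in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad   # nudge the parameter to reduce the error

loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
print(f"learned w = {w:.2f}, final loss = {loss:.4f}")  # w ends up close to 2
```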

And this has resulted in a lot of very impressive advances over the past decade. So when you have, let's say, facial recognition systems, when you have speech recognition systems in something like Siri or Google Voice, these are based on deep learning by now and have gotten a lot better because of deep learning. So despite the flaws of deep learning, as we heard previously with bias and deep fakes,

Danny Lange, who is the vice president of AI and machine learning at Unity, is in the deep learning camp. And he built his career on the technique's promise and potential, having served as the head of machine learning at Uber, the general manager of Amazon machine learning, and a product lead at Microsoft focused on large-scale machine learning. So he's done machine learning

and deep learning probably across lots of different companies that have pioneered different techniques in the space. And at Unity in particular, he now helps labs like DeepMind and OpenAI construct virtual training environments that teach their algorithms a sense of the world. Yeah, so he took a stance of...

basically saying we don't necessarily want to combine deep learning with classical techniques. And to expand on that a little bit, the idea of classical techniques is that you don't just have different numbers and parameters that you tune with data optimization, but instead you do sort of hard-coded or pre-written rules for reasoning. So for instance, you know that something is true,

and something else is true, and you have some sort of hand-coded logic or symbolic reasoning techniques to then infer whether something else, some claim is true or false.
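For contrast with the optimization-from-data picture above, here is a toy, hypothetical sketch of the "hand-coded rules" style of classical symbolic AI: a handful of known facts and if-then rules applied repeatedly (forward chaining) until no new conclusions follow. The facts and rules are invented for illustration and don't come from any system discussed in the episode.

```python
# Toy sketch of classical symbolic AI (not any specific system): hand-written
# if-then rules applied to known facts via forward chaining, rather than
# parameters tuned from data.

facts = {"socrates_is_human"}

# Each rule: if all premises hold, conclude the consequent.
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal", "socrates_is_human"}, "socrates_will_not_live_forever"),
]

changed = True
while changed:                      # keep applying rules until nothing new follows
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

The reasoning steps here are explicit and inspectable, which is the appeal of the symbolic approach; the difficulty Lange points to is writing down enough such rules to cover the messiness of the real world.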

So the core of the debate here was Gary Marcus saying we need more than deep learning, we need to combine it with other things that aren't just dependent on data and optimization, versus Danny Lange, who argued that we don't necessarily need to do that. Maybe we can just improve and fix deep learning, because it is quite a broad field, really. And he argued that codifying the various aspects of knowledge that symbolic AI would essentially need would actually be quite difficult.

Basically, you can't codify, you can't write down as a human exactly what you are thinking and doing and what your neurons are perhaps operating on. Yeah, so I guess to set the scene of why there's even this debate, partially it's because the whole field of AI or large parts of it, let's say, have converted to using largely deep learning.

And there's been a lot of excitement and hype because converting to using deep learning as the main family of techniques for various tasks in computer vision and speech recognition

in many, many domains has turned out to work very effectively. And so we have removed the usage of, let's say, hard-coded reasoning algorithms or handwritten rules or anything like that in favor of these very flexible sets of weights that we just optimize.

But now that we have done that for a little while, let's say for close to a decade now, there's a lot of questioning of whether we can actually move beyond the problems of deep learning, like bias when your data is problematic, like generalization, basically like common sense.

or whether we need a fundamentally different paradigm in AI to be able to get there. What are your thoughts on this, Andrey? I think it's an interesting topic. I mean, as is always the case with future forecasting, it's hard to say. This debate has been going on for a few years now, and...

It's sort of complicated to even set the boundaries of the claims. So in some cases, people agree that we need more, let's say, priors on how to reason and how to pay attention and how to basically reason about the world. But people say we can still do this with a deep learning toolbox. We just need to think of some new tools and ideas, some new

tricks up our sleeve. Whereas other people are saying, no, you can't just do this with optimization from data, you need the symbols and so on. So personally, I'm in the camp of thinking, okay, we have two perspectives established pretty well. Now we just need to try things and actually do the research to see what works and to start developing algorithms of different types. But I do think it's good to explore in multiple directions, of course. How about you?

That sounds like a nice A-B test. That's research, right? I mean, the whole idea is all of us do random stuff and then some things actually turn out to be useful.

And for some listeners who were not at NeurIPS this year, Gary Marcus spoke at NeurIPS at quite a few workshops, I believe, and in quite a few various talks. And they were very well attended, as there were some spicy debates there. And yes, it's been a heated argument for a few years now.

I think that a lot of the work going into self-supervision is being branded as just deep learning, but

there actually is quite a bit of prior knowledge being embedded into these systems. In fact, convolutions have various priors embedded in them, and they form the basis of the neural network architectures that are used for computer vision.
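As a rough illustration of what "convolutions have priors embedded in them" means, the following sketch compares parameter counts for a convolutional layer versus a fully connected layer on the same small image; the locality and weight-sharing assumptions of the convolution are exactly the priors being referred to. The sizes are arbitrary and purely illustrative.

```python
# Rough illustration of the priors baked into convolutions: a conv layer
# assumes locality (each output looks at a small patch) and weight sharing
# (the same filter is reused everywhere), so it needs far fewer parameters
# than a fully connected layer on the same image. Sizes are arbitrary.

height, width, in_channels = 64, 64, 3
out_channels, kernel = 16, 3

conv_params = out_channels * in_channels * kernel * kernel + out_channels
# A fully connected layer mapping the flattened image to the same number of
# output values makes no locality or sharing assumptions at all.
dense_params = (height * width * in_channels) * (height * width * out_channels)

print(f"conv layer parameters:  {conv_params:,}")   # a few hundred
print(f"dense layer parameters: {dense_params:,}")  # hundreds of millions
```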

Yeah, it's an interesting point our field is in, in some ways. I think we can think of it as: we discovered a new, very powerful hammer, and we spent, let's say, the better part of the last decade sort of figuring out all the nails we can hit with this hammer. But now we've sort of started to plateau in some ways in advancing and hitting these really complicated issues of bias, and of verifying that the model will always avoid some sort of catastrophic error, and things like that.

And it really just goes to show that AI isn't on some sort of runaway train to go towards human level intelligence. We still very much need a lot of new innovation, a lot of new ideas, possibly even a whole new paradigm to continue making progress towards the goal of really, really sophisticated AI.

Yeah, and I think there are people in a camp even further than Lange on the deep learning side, who believe that not only do we not want to embed priors at all, we want our AI to learn everything from scratch. And I think there are people who really very much believe that. So I think the debate spans quite a large spectrum of opinions. Yeah.

Yeah, and I guess let's just hope that we make progress over the next few years. People keep publishing papers, and ultimately we will find, let's say, whatever works well next. A lot of experimentation, a lot of curiosity, a lot of basic research is what's needed now, it seems.

But enough speculation about the future. Let's talk about the present. So this week, robots meet coronavirus in a few articles that we will talk about. So one is in Wired. If robots steal so many jobs, why aren't they saving us now?

So a lot of people have been led to believe that there will be some sort of robot revolution in which humans get entirely replaced by robots, which will be able to carry out their entire jobs. But AI roboticists and researchers already know that this isn't exactly the case. Robots aren't quite there yet, certainly not with dexterity, nor with intelligence in the same way as humans.

And some people did speculate that this catastrophe with COVID-19 is actually blowing up the myth of a robot takeover. Yeah. So we've just seen in the U.S., I think last week, that we have ridiculous unemployment numbers, right? This is a real crisis. Millions of people are going to lose their jobs. And

And the point of this article is that, you know, if this was true, if robots and AI were advanced enough, we could continue a lot of this work with robots, which cannot be infected and all that. But the truth is that the technology is just not there. And most of these jobs simply cannot be automated. Perhaps some of the most important jobs needed now cannot be automated, which are those in medicine.

And also recently Amazon announced that they were going to hire 100,000 additional human workers to work in their fulfillment centers

and as delivery drivers, showing that not even this incredibly tech-enabled company, which leverages robotics more than most, can do without humans. And those workers usually work alongside robots, but the entire job still cannot be automated. What's interesting is that very recently, I think,

In the last day or so, Amazon workers have been striking because they are unhappy with the working conditions for people right now in those Amazon fulfillment centers. And so you definitely see that humans are definitely needed as the strike has been pretty bad on Amazon.

Yeah, so it's a good thing to keep in mind that as this article notes, basically this hype that all our jobs are going to be lost to AI, at least in the near term, just is not the case. And it would be nice if we could actually respond to this crisis with replacing a lot of the necessary workers and making those people safe by using robots, but we cannot. And that's just the case. But...

It's also true that we can use robots in many cases, and we already are in response to coronavirus. So the next article dovetailing from that is called COVID-19 Pandemic Prompts More Robot Usage Worldwide.

So understandably, the COVID-19 pandemic has sparked lots of interest in robots, drones, and AI as people attempt to figure out ways to carry out various tasks remotely and while under quarantine. So this article reviews several different robots that have been helping during this time and have been increasing their usage worldwide.

Yeah, we've already actually talked quite a bit about different robots in prior episodes. So for instance, robots in hospitals and so on. But there's quite a few specific interesting examples here. So for instance, just quoting from the article, it says XAG has scaled up its use of ground robots.

with aerial drones, converting agricultural units into disinfectant sprayers. The company has deployed more than 2,600 drones in China, which it said is starting to recover. To perform deliveries, Baidu has partnered with Neolix to deliver food and supplies to Beijing Haidian Hospital with the Apollo autonomous vehicle. Baidu's AI algorithms are also being used to track the spread of infection.

And continuing on that kind of trend of varying applications of robots, there is another one here of the Hamilton Company offering its MagEx STARlet and PCR Prep STARlet assay-ready workstations. So robots are actually being used to develop the vaccines and the treatments for this crisis.

Now, these are not necessarily autonomous robots. It's not necessarily cases where there's AI involved, but these are physical arms and robots.

And on that note of many of these robots not having AI, we can move on to the next article from Wired, which is titled The COVID-19 Pandemic Is a Crisis That Robots Were Built For. And this article is about an editorial in Science Robotics that we actually discussed in last week's episode, where several robotics researchers and academics came together to basically say that

This moment shows us that we really need to push the research and development of advanced robotics to be able to apply it in many more ways when we face a crisis like this one. What's interesting is that there's plenty of precedent for machines helping humans do their jobs, which is something that MIT roboticist Kate Darling notes. She says that ATMs allowed banks to expand teller services. So ATMs are the robots here.

Bomb disposal robots let soldiers keep more distance between themselves and danger. And there are cases where automation will replace people, but the true potential of robotics is in supplementing our skills. And we should stop trying to replace people and start thinking more creatively about how to use our technology to achieve our goals and not put lives in danger.

Yeah, and this is actually noted in particular with respect to the notion of healthcare robots. We mentioned last week there are companies currently developing robots to deploy in hospitals to take care of some of the easier tasks and free up the time of nurses.

But one thing that is said in this editorial is that we should really push the development of robotics for healthcare such that robots can work alongside humans and be very beneficial. And this editorial even presents the idea that maybe we should have a competition for medical robotics. So DARPA has famously had competitions for self-driving cars.

something like 15 years ago, and then soon afterward we saw a huge boom in the self-driving car industry. There was a competition about five years ago for disaster response, which again is an application where robots are famously useful. And now we might want to have a competition for medical robotics and having robots in hospitals working alongside humans.

So reading all this about robotics, I wonder, Sharon, are you jealous of us researchers who work with these systems or are you happy to mostly work with data? Am I jealous? Oh, man, I would work on robotics if I could figure out how hardware works and if I had the patience to work with hardware. Yeah.

Otherwise, it's software for me. Yeah, I can say a little bit about this as a robotics researcher. I mean, the reason we are not there yet is that this is just very hard. I mean, when you're not just dealing with software, you're dealing with a physical thing. You need to move around. You need to make sure it has power. You need to make sure it has the right version. It doesn't hit anything. It doesn't break. It's complicated. And so it'll take a while for us to actually build these systems.

But the good news is now that we have the current crisis, it will probably push us to move faster and to really make a huge effort to make progress faster than we would have otherwise.

So with that, thank you for listening to this week's episode of SkyNet Today's Let's Talk AI podcast. Once again, you can find the articles we discussed here today and subscribe to our weekly newsletter of similar ones at skynetoday.com. Subscribe to us wherever you get your podcasts and don't forget to leave us a rating if you like the show. Be sure to tune in next week.