
AI for COVID detection fails, GitHub's Copilot can code, GAN Theft Auto is fun

2021/7/1

Last Week in AI

People
Abigail See
Andrey Kurenkov
Daniel Bashir
Sharon Zhou
Topics
Andrey Kurenkov: This episode discusses a Google survey paper on optimizing deep learning models, which explores how to make models smaller, faster, and better and proposes several empirically proven optimization strategies. It also covers how the many machine learning models built to detect COVID-19 turned out not to be suitable for clinical use: researchers found problems such as models relying on irrelevant information in the images, and suggested improvements including better explainability, better data collection, and clinician involvement. Finally, it introduces Copilot, the code generation tool from GitHub and OpenAI, along with Amazon's warehouse robots, Twitter's AI ethics team, LinkedIn's algorithmic bias, and other topics.
Sharon Zhou: On model optimization, she argues the approach should depend on whether a model is resource-sensitive or quality-sensitive: resource-sensitive cases call for efficient architectures, while quality-sensitive cases call for training a large model first and then compressing it. In the discussion of AI's limitations in medical diagnosis, she emphasizes the importance of model explainability and of involving clinicians in the research.
Abigail See: She shares her experience working with GPT models and analyzes the potential and limitations of the Copilot code generation tool, noting that it can produce insecure or biased code. In the discussion of YouTube's recommendation algorithm, she describes how videos go viral and the mechanisms behind that.
Daniel Bashir: He adds several other AI news items, including Google moving AI research staff into a new machine learning team, the Toyota Research Institute's progress in robotics, Hyundai Motor Group's acquisition of Boston Dynamics, a DeepMind researcher's call for collective responsibility for AI ethics, and Stanford's new AI ethics review program.


Chapters
The discussion covers a survey on optimizing deep learning models for efficiency, focusing on techniques to make them smaller, faster, and better. The conversation explores the balance between model size and quality, and the practical implications for developers and researchers.

Transcript


Hello, and welcome to Skynet Today's Let's Talk AI podcast, where you can hear AI researchers discuss what's going on with AI. This is our latest Last Week in AI episode, in which you get summaries and discussion of some of last week's most interesting AI news. I am soon-to-be Dr. Andrey Kurenkov. And I am Dr. Sharon Zhou. And today we have a special guest co-hosting with us, Abigail See. Hello, everyone.

Oh yeah, also we can mess up and chitchat and whatever, and it's totally fine. All right. Yes, we are super happy to have Abby with us, a friend from Stanford and a fellow PhD student for many years. So we'll have some fun talking about stuff together.

And let's dive straight in. First, we have a couple of articles on research as usual, with our first one being from Medium, titled Google Survey Explores Methods for Making Deep Learning Models Smaller, Faster, and Better. So actually, this is interesting because there's a single author here, Gaurav Menghani, who wrote this paper. And it's a beefy survey that's like 40 pages long,

as the title suggests, on making models smaller, faster, and better. And it focuses on model efficiency as its main theme. And this also led to open sourcing an experiment-based guide and code to help practitioners optimize their model training and deployment.

So to give a bit of a summary, the paper classifies model optimization into five major areas: compression techniques, learning techniques, automation, efficient architectures, and infrastructure. And basically it explores how to efficiently optimize for a combination of these things.

And lastly, it also proposes a couple of empirically proven, or at least empirically evaluated, strategies for doing this optimization. So, yeah, quite neat. And again, a sign that we are starting to optimize for more than just downstream accuracy or these metrics. What do you think about this, Sharon?

Well, I think one takeaway here is that there are two types of models. If you're really footprint sensitive, so you don't have that much compute, or you're thinking about how to constrain compute, especially during training, you maybe want to go for some kind of efficient architecture. You want your model to be small upfront.

If you're really quality sensitive, so you really want very good quality, you still want to train a really big model, but then maybe shrink it afterwards, use some kind of compression technique afterwards. So that was one takeaway from this large paper. And I think that is true, and we do see that in practice: it's sometimes very hard to reach the same level of quality without having a very large model to start with.
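To make that "train large, then compress" recipe concrete, here is a minimal sketch using PyTorch post-training dynamic quantization; the model and layer sizes are placeholders rather than anything from the survey itself.

```python
# Minimal sketch of "train a large model, then compress it" via
# post-training dynamic quantization in PyTorch. The architecture and
# sizes below are illustrative placeholders.
import io
import torch
import torch.nn as nn

# Stand-in for a "quality-sensitive" model trained at full size.
big_model = nn.Sequential(
    nn.Linear(1024, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 10),
)
# ... train big_model to the desired quality first ...

# Compression step: quantize the Linear layers' weights to int8, shrinking
# the model roughly 4x, usually with only a small quality drop.
small_model = torch.quantization.quantize_dynamic(
    big_model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(model: nn.Module) -> float:
    """Serialize the model to an in-memory buffer and report its size in MB."""
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"before: {size_mb(big_model):.1f} MB, after: {size_mb(small_model):.1f} MB")
```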

Have you worked with any huge models, Abby? I know Transformers are all the rage now and they have been getting crazy big. No, I wouldn't say I've worked with any huge models, because the GPT models that have been open sourced have always been the smaller ones first. So yeah, I've only really worked with, I think, a GPT-2 large, which in the end can still fit on a single GPU if you're doing one sample at a time.

Yeah, because what we used for the Alexa Prize, our chatbot that actually spoke to people in deployment, was a GPT-2 medium, because that was the biggest one that would run with low enough latency to be useful. Got it. Yeah, this paper also covers how to handle this combination of accuracy and speed. So it seems like it's quite useful, especially for industry, something that people there care about.

And on to our next article under research titled Machine Learning Models That Detect COVID-19 on Chest X-Rays Are Not Suitable for Clinical Use. And this is from Physics World. So maybe unsurprisingly, all the thousands of machine learning models that were developed during COVID to detect COVID on chest X-rays and CTs were found to be not very suitable for deployment in clinics.

And two medical students who are working towards their doctorates in CS at the University of Washington were the ones who rigorously audited all of these machine learning models or a large subset of these machine learning models. And their results are published in Nature Machine Intelligence.

So basically, I guess this is not hugely surprising, that a lot of these models are not necessarily safe to deploy just yet after just a year of development. But I will say I am surprised at the sheer number of them, the order of magnitude. I thought there would be, you know, 50 or 60, but thousands was quite surprising to me.

Exactly. This article notes that their first hurdle was recreating the published machine learning models, which by itself seems like... It always is. Exactly. That's how research goes. And then it points to some things: they took these...

machine learning models, introduced some new datasets, and, maybe unsurprisingly, when they looked at the saliency maps, some pointed to things like the lungs, but others pointed to things like text or arrows overlaid on the X-ray images, or to an image's corners, which is not what you want for detecting COVID. So yeah, I mean, not surprising necessarily, but still a little bit sad.
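As a rough illustration of the kind of saliency check being described, here is a minimal sketch assuming a PyTorch image classifier whose output index 1 is the "COVID" score; `model`, `image`, and `lung_mask` are hypothetical stand-ins, not anything from the study.

```python
# Rough illustration of a saliency-map audit: compute input gradients for a
# chest X-ray classifier and check whether the "important" pixels fall inside
# the lung region or on text, arrows, and corners. All names are placeholders.
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Gradient of the predicted COVID score w.r.t. the input pixels."""
    model.eval()
    image = image.clone().requires_grad_(True)    # shape (1, C, H, W)
    score = model(image)[0, 1]                    # assume output index 1 = "COVID"
    score.backward()
    return image.grad.abs().max(dim=1).values[0]  # (H, W) importance map

def fraction_inside_lungs(sal: torch.Tensor, lung_mask: torch.Tensor) -> float:
    """Red flag if most of the attribution mass lies outside the lung mask."""
    return float((sal * lung_mask).sum() / sal.sum())
```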

Well, I think this highlights things from perhaps the point of view of people who are not primarily in machine learning, right? They might not be aware of these things, and it might be counterintuitive to them that these kinds of shortcuts can happen. So, you know, it's still valuable to point these things out.

Yeah, it's cool. They also provided some suggestions, like the need for explainable AI, and some best practices such as collecting data prospectively with the model's goal in mind.

There's also one that clinicians should be involved in study design and data collection. So here a lot of CS people were probably trying to tackle medicine without really knowing the domain. And of course, also auditing. So yeah, a useful study, as it always is, to show that we don't really know what we're doing and we should be doing better.

And on to something more fun. We're going to talk about some applications of AI, which, you know, are probably less of a bummer than this result. Starting with something hot off the press, just announced yesterday, which is...

Copilot is the result of a collaboration between GitHub and OpenAI, and it's basically fancy code autocomplete. It's a very intelligent AI model which you can integrate into the Visual Studio Code editor and basically have it write bits of code for you in Python, JavaScript, TypeScript, Ruby, and Go, so a fairly popular set of languages.

So the performance was quite impressive. Lots of people were pretty blown away by what this did. But of course, there are various caveats as well. I don't think there's a research paper yet; there were just some blog posts. So it's definitely at kind of an early stage, and we'll see where it goes.

I think what's striking about this announcement is that, you know, there have been a lot of applications that came out where you can tab-complete code. I've used many of them, and I'm sure Andre and Abby, you've used a couple too. And, you know, they auto-complete things, or they can maybe do a little bit more. But here, I think you can do a lot more. And based on the demos, it looks like...

Copilot is essentially auto-completing even large functions just based on a doc string, for example. And so that's really attractive, and I'm very curious to see how that works and where it goes as people start using it. I already signed up to get it. Of course, there are issues. This is based on GPT-3, so I think there have been murmurs of,

You know, this can still be racist with racist variable names, for example, or it can have a lot of issues based on what you put in your doc string. And everything is still in beta, so all those things can still emerge. Abby? Is this trained on just a bunch of public GitHub repos?

Exactly. Yeah, there was some discussion on Twitter of like, this is trained on a huge amount of code that's public, but a lot of it is GPL licensed, which makes it interesting from a legal perspective. GPL puts conditions on how you can reuse the code, including for commercial purposes. So yeah,

Yeah, I mean, this is really exciting in a number of ways. But before I say that, I just want to point out that the Verge's headline for this was GitHub and OpenAI launch a new AI tool that generates its own code, which I think doesn't sound right. That sounds like kind of nonsense because presumably it's not in any way generating the code that trained Copilot, right? Yeah.

Right, yeah. It generates completions, right? So you write some doc string, you write some whatever, and it gives you like 10 lines, 15 lines, but of course a human has to use it. So the "its own code" part is insane. But anyway, yeah, I mean, this is really cool. In some ways it seems like it's kind of...

it could potentially automate the process where you're trying to figure out how to do something you want to do. You search for it on Stack Exchange, you find some people talking about it, you try to figure out who's actually answering the question you want answered, and then you look at that snippet and maybe that solves your problem. So it seems like this could maybe automate that in a useful way, though, of course, you would still need to look at what it gives you and figure out whether it's actually solving your problem or not.
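As a hypothetical example of the docstring-to-function pattern being discussed (not actual Copilot output), this is the sort of completion a developer might accept and then still need to review:

```python
# Hypothetical docstring-to-function example. The docstring is what a
# developer might write; the body is the kind of completion a tool like
# Copilot could suggest, which still needs human review.

def days_between(start: str, end: str) -> int:
    """Return the number of whole days between two ISO dates like '2021-07-01'."""
    # --- everything below is what the assistant might fill in ---
    from datetime import date
    d1 = date.fromisoformat(start)
    d2 = date.fromisoformat(end)
    return abs((d2 - d1).days)
```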

Yeah, I think there was actually a little plug-in I saw, a bit of a joke, that did exactly that: it searched for Stack Overflow questions and then took out the code and automatically imported it for you. So yeah, this certainly looks like something very useful. And right now it's in private beta, so it's not launching widely for a while. It's in a restricted technical preview format.

You can sign up to access it, similar to GPT-3, and we'll see how soon it gets released. Yeah, and I think one of the differences here is with

the idea of grabbing something from Stack Exchange, since that has been an ongoing research question. I think there was a paper published a while ago in the Human-Computer Interaction lab by Ethan Fast that did exactly that: it grabbed Stack Overflow answers and tried the different answers to find one that worked

based on your doc string, and just put it into your code. But here, hopefully, it is doing a little bit more than just searching through, I guess, every database in this case. It is doing some generation as well, from things that are not necessarily duplicates; I think they do an analysis of this. So yeah, I guess there's a fine line between that and searching through a lot of code, or like...

an infinite amount of data; at that point it would be the same thing.

I think there are also some issues that people have raised about security with this model, where it will generate code that is not necessarily secure. And I think there has been discussion around one of the examples actually on their blog post and site, where the generated code is not secure: it just generates HTTP instead of HTTPS. So based on

I guess the training data, it decided to do that. So hopefully the people working with Copilot initially understand that, understand that this is just a kind of first-draft beta, and that they will still need to review that code. Well, yeah, I think you should treat it as about as trustworthy as something some random person on Stack Exchange has posted, right? Yeah.
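To illustrate that HTTP-versus-HTTPS point, here is a sketch of a plausible insecure suggestion next to a reviewed version; the endpoint and function names are made up for illustration, not taken from the Copilot examples.

```python
# Illustration of the insecure-completion concern (not actual Copilot output):
# a plausible suggestion that defaults to plain HTTP, and the version a
# reviewer should insist on. The API endpoint here is fictional.
import requests

def fetch_profile_insecure(user_id: int) -> dict:
    # A completion learned from older code might default to plain HTTP,
    # sending the request (and any credentials) unencrypted.
    return requests.get(f"http://api.example.com/users/{user_id}").json()

def fetch_profile(user_id: int) -> dict:
    # Reviewed version: use HTTPS, set a timeout, and fail loudly on errors.
    resp = requests.get(f"https://api.example.com/users/{user_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()
```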

But there are, furthermore, perhaps more subtle security problems, like it might be repeating sensitive information like API keys and so on from the code that it was trained on, right? Yeah, exactly. So a bunch of issues, as you might expect, but, you know, still exciting for sure. And kind of an obvious application of AI that, yeah,

It's quite nice. I think there is a lot of boilerplate code that we write when programming. There are some examples where there's a comment describing a JSON schema, and then a function named collaborators_map, and it just generates it for you. So it's much more contextual than something like Stack Overflow, and it takes into account what you're programming; it isn't just dealing with generic algorithms or generic things. So...

Certainly I would want to use it when it gets a little bit more robust. One thing I really like is that it can generate tests for you. Yes, no more excuses to not write tests. Well, maybe I still want that excuse. Just kidding.

And on to our next article in applications: What's going on with Amazon's high-tech warehouse robots? This is from IEEE Spectrum. So Amazon just published a blog post titled New Technologies to Improve Amazon Employee Safety, and it's about a robot that is autonomous and mobile and can help carry things for employees.

I think what the article highlights is that it feels like, you know, this is great, Amazon, but it feels like it is actually two years behind the state of the art in commercial mobile robotics. It wasn't as impressive, but I don't know what your thoughts are, Andre and Abby. Yeah, you know, I'm probably a little more aware of this, working in robotics. And they do highlight some companies like

Otto Motors and Fetch Robotics. And it's true, I think this space in particular, warehouse automation, is sort of the hottest place for robotics. And it's kind of the no-brainer of where you could make things smarter and help employees and help scale and so on. So I do think it's a little bit...

surprising, as the article notes, that Amazon appears to be behind. These are fairly simple systems that just carry stuff around. There's no arm aspect, which Fetch has and which different startups are working on. Maybe it's because of the scale they operate at, and there's a lot more concern about safety and deployment.

But it still is a pretty good indicator of where the space is, and an important application area that maybe we don't see as much in AI. I'm just scanning through the article trying to understand why they called it Bert, because obviously that name means something very different in NLP, and we're all very aware of it. There's no explanation here for why it's called Bert.

Oh, no, wait, there's another one called Ernie. How confusing. It is a Sesame Street thing. Everyone likes Sesame Street, I guess. I think that's what it is: make the technology cuter. A safe bet for a name. These robots are pretty boring looking, so maybe it's just an attempt to make them a little more cute. These robots don't look very impressive or...

in any way. I'm actually really surprised that this is their recent blog post. But maybe the goal of the blog post wasn't exactly to highlight cool robotics; it was more to highlight that

we are actively working on employee safety, we're actively working on, you know, making sure these robots can coexist with and help humans too. Because I think Amazon has very much been attacked for creating systems, or just employee work conditions, that are horrible and harsh. And so maybe this is an attempt to shine a light on that area, and it's not necessarily about announcing a new technology. Yeah.

Exactly. Yeah, the blog post is all about employee safety; it's in the title. And in it, they note that the health and safety of employees is their number one priority, and that by driving new technology they're confident they can reach their goal of reducing recordable incidents by 50% by 2025. So, yeah.

Definitely a bit of a PR move, but also, I guess, you know, something that definitely needs to be addressed and an area where robotics can make a positive impact. I would still prefer to have more robot dogs and pets, personally. You want a robot dog?

Yeah, I want a robot dog. We've discussed this before. There's like these cute Sony robots that kind of act as your pet and they only cost $3,000. So I want them to cost less. Do they have artificial fur? Or are they like shiny and metallic? No, they're shiny. They do have sensors for when you pet them and they are kind of cute. But some people are really into them. Yeah.

There's a classic British animated film called Wallace and Gromit, and I think it's called A Close Shave, and I think you've seen it, Andrey, and it has an evil robotic dog in it. So you should watch that again and see where this could go. I love that. Oh, I didn't realize Wallace and Gromit was British. It is. It's a national treasure. We're missing so many references. Wait, how is that? Wait, is that really British?

Alrighty, and moving on from robotics to something we discuss a whole bunch on here, understandably, but definitely one of our popular topics. Our next article from protocol.com is about how Twitter hired tech's biggest critics to build ethical AI. So it starts off by talking about how machine learning engineer Ari Font was worried about the future of Twitter's algorithms.

It was mid-2020 and the leader of a team researching ethics and accountability for the company's machine learning had just left Twitter.

So the future of the ethics team was unclear. And then this person volunteered to help rebuild Twitter's META team; META stands for Machine Learning Ethics, Transparency and Accountability. And she said she went on a roadshow to persuade Jack Dorsey, the CEO, and his team that ML ethics wasn't just for research; it was also for guiding the

development of the company's products.

And we've seen that already a little bit. So, for instance, there was a recent project that got quite a bit of attention: there was a bunch of controversy around Twitter's cropping algorithm, which seemed to have some racial bias, centering white people more often than not. And this was an AI algorithm to crop an image around a central point rather than showing all of it.

So yeah, because this team was there, they conducted a rigorous study, and ultimately that cropping algorithm was removed. And yeah, they keep building out this team at the company, notably with Rumman Chowdhury, who is a notable researcher and someone who has pushed for algorithmic auditing; she left her startup to become the leader of this group.

So yeah, really cool. And I think Twitter has seemed to be very committed to this, maybe more so than some other groups that we've talked about. So this is a cool little overview article. Does it say what's next for them after the image cropping thing? Like what else are they prioritizing at Twitter? Yeah.

It does note that Dorsey and Twitter's board of directors made responsible machine learning one of Twitter's main 2021 priorities.

And the idea is to scale their work inside of Twitter's products. So I guess so far in ethical AI there's been a lot of research, but it's not as obvious how to productize it. And I think that's the idea here, you know, avoiding recommendations that are biased, pretty common problems in AI, but there don't seem to be a lot of specifics.

I think one striking thing here, and maybe it's subtle, is the fact that ethics work, especially within companies, doesn't necessarily have to be research; it can instead be part of essentially the product team. You know, you don't have to actually publish to make an impact at these companies. And I think that's kind of what's going on here. And maybe, I don't know, maybe there were, you know, the issues previously

brought to light with Google's situation with publishing in the ethics area, which caused all this drama because, you know, their PR team was unhappy with research that looked anti-AI. But what if you could just steer things from the inside? And I think it's kind of getting at that, that this model

might work better, because it's already made some changes at Twitter. And I mean, Twitter seems to be more open to saying, you know, there are problems with our algorithm, and saying that publicly in a blog post as well. So maybe this is a model that things are shifting toward a bit more, especially in light of the Google thing.

Yeah, I think it's quite neat that when they worked on this cropping algorithm, they had a press release, but they also had a paper that very clearly laid out how they looked into it, what their results were, and what justified their decision. And that's a pretty cool model. It sets a good example for others and pushes for being more open and transparent.

And here, yeah, this combination of changing what the company is doing and being open is very cool. And this also notes that they work with the engineers and guide them and provide expertise. So this sort of power to steer things and not just fix them when problems are found is pretty great.

Yeah, it's really great when companies share what the solution was. Because it's a bit unsatisfying when you see something go wrong and then you don't know what the solution was. So someone showed me the other week that TikTok had a strange thing with their text-to-speech, where if you wrote a bunch of just H's, the letter H...

and then wrote some text, then the text-to-speech would say the text in a very strange kind of slurred way if you put enough H's before it. And this wasn't necessarily true of other letters. And yeah, it's kind of like the Google Translate stuff with the biblical text coming out on low-resource languages. So yeah, I'd love to hear what the reason for this H thing on TikTok is. I don't know if they've announced it yet.

But yeah, it's always good to know the solution, because I think it can shed light on other problems elsewhere. Yeah, and in industry, I do think there is a tendency, or a perception, that you're not going to address or comment; like, there's going to be a crisis, and then someone says something, and then you don't address it again. Yeah, I think there's an urge to say as little as possible for damage control, but...

I think, you know, these things are somewhat unavoidable in machine learning. There are always going to be unintended side effects and consequences. So I think it's great to be more open about explaining exactly what went wrong, what you learned, and how you can act on it. And on to our next article: LinkedIn's job matching AI was biased. The company's solution? More AI.

And this is from MIT Technology Review. So several years ago, LinkedIn found that the recommendation algorithms it uses to match job candidates with openings were producing biased results. Maybe that's not super surprising. They often referred more men than women to roles, because men are more aggressive at seeking out new opportunities, they said.

And I think now the way to counter that is to build another AI program to try to counteract the bias in the results of the first model. And

because these algorithms are very black box, we don't know exactly what they're doing in this case. But of course, these algorithms do try to exclude factors that may contribute to bias, in a similar way to what insurance companies do. So they remove a person's name, age, gender, and race, because these

characteristics can very directly contribute to bias. But of course, the algorithms can still pick up on other patterns that correlate with, say, gender.

I think, like, they're working on this, using more AI to try to counteract this bias. But of course, it's hard to tease out exactly what a lot of these companies are doing and what might work best. And of course, with these kinds of algorithms being biased, they can greatly exaggerate and exacerbate the problem of bias in job seeking and hiring.
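As a toy illustration of why simply dropping explicit attributes like name or gender is not enough, here is a small synthetic sketch, assuming scikit-learn; the data, the proxy feature, and the numbers are entirely made up.

```python
# Toy illustration: removing the protected column does not remove bias when
# another feature acts as a proxy for it. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)              # never shown to the model
proxy = gender + rng.normal(0, 0.3, n)      # e.g. a behavior pattern correlated with gender
skill = rng.normal(0, 1, n)                 # what we actually want to rank on
# Historical "recommended" labels are skewed toward gender == 1 applicants.
label = (skill + 0.8 * gender + rng.normal(0, 0.5, n) > 0.8).astype(int)

X = np.column_stack([skill, proxy])         # protected attribute removed
scores = LogisticRegression().fit(X, label).predict_proba(X)[:, 1]

# The model still scores one group higher on average, via the proxy feature.
print(scores[gender == 1].mean() - scores[gender == 0].mean())
```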

Yeah, this kind of reminds me of YouTube, right? There's a recommendation algorithm, and YouTubers often say that they cater to the algorithm, or need to cater to the algorithm, to get views. And, you know, the algorithm isn't public, so you have to sort of reverse engineer it, or sort of learn what it likes and doesn't like.

And because, again, there's no openness, no understanding by job seekers, they kind of have to do the same thing here, which is really annoying, because you would want to be considered based on your qualifications and not on sort of weird factors. Yeah, so more problems with more AI, I guess. Yeah.

I guess the only thing this kind of reminds me of is that once I was trying to make Facebook not classify me as anything. And so I looked up, you know, the most common likes, and I just started liking really random things just to try to...

fool their algorithm or kind of, you know, steer it in a different direction. And I guess if we don't know these factors, like really weird patterns could also occur where individuals are trying to steer something in one direction or another. I think in this case, it's a little bit more challenging since I at least don't look at the recruiter interface necessarily. And so I don't really see the results of my actions. Yeah.

Abby, have you had any fun experiences with recommendation algorithms on any website? Because I think they're everywhere now. Yeah, they are everywhere. I don't know. I mean...

So I don't know. I'm often quite impressed with them really on things like Netflix. Although I've heard that on Netflix, those very granular descriptions like, you know, quirky comedies with a female lead or whatever, those are actually kind of hand labeled by a person. Yeah.

Or there's some quite significant manual effort that goes into that. But yeah, I mean, YouTube is a very interesting one. And it's interesting how sometimes there's just one video that appears in the main feed of basically all users. And then if you look at the comments, it's just everyone being like, why are so many hundreds of millions of us here right now? Literally. Do you know what I'm talking about? Yeah.

Like sometimes a video just goes viral, not just with particular subsets, but with absolutely everyone. Like Plastic Love with 70 songs. Yeah, yeah, yeah.

Yeah. And I was reading about why this happens, and it's some kind of feedback loop. Certain videos are kind of eligible to do that if they have a broad enough appeal to a majority of people, because they trigger some kind of curiosity that many people would respond to. Then, you know, something can happen, and then it's kind of like a domino effect, a virtuous cycle of it just reaching more and more people.
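Here is a tiny simulation of that kind of feedback loop, under the simplifying assumption that the recommender hands out impressions in proportion to past clicks; the numbers are purely illustrative.

```python
# Tiny, purely illustrative simulation of a rich-get-richer feedback loop:
# impressions go to videos in proportion to their past clicks, so an early
# edge can snowball even when intrinsic appeal is identical.
import random

clicks = [1.0] * 10      # ten comparable videos start roughly equal
clicks[0] = 1.5          # one gets a small early boost
appeal = 0.1             # identical intrinsic appeal for every video

for _ in range(100_000):
    video = random.choices(range(10), weights=clicks)[0]  # recommender favors past winners
    if random.random() < appeal:
        clicks[video] += 1

# The final shares are path-dependent; rerun to see different videos lock in,
# with the early leader ending up ahead more often than not.
print([round(c) for c in clicks])
```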

And it's a really strange phenomenon when that happens, because sometimes it's something that was uploaded like 11 years ago and suddenly everyone's looking at it. Yeah, Abby, I know I looked into this, and I think you looked into this as well: the whole story of the weird, creepy videos for babies. Oh my gosh, that is such a fascinating rabbit hole. I recommend you go read about that if you haven't. It's very sinister. Basically, people...

So some people out there are automatically generating videos to appeal to children on YouTube, like very young children. And it's, as we were talking about before, a real mix of appealing both to people and to the algorithm, but maybe especially the algorithm,

because very small children maybe just let autoplay happen, or they just kind of mash the next suggested video a lot. So these automatically generated videos are often just cheaply made animations that use really popular IP, like Elsa from Frozen or whatever,

like 3D animations of these characters, obviously unlicensed. And they're just kind of doing random combinations of animations that can be automatically generated with, you know, like nursery rhyme music. And this just appeals to children,

or appeals to the algorithm or something, and then they get so many views, it's crazy. And then the other strange thing is that sometimes they're even a bit violent or disturbing, so the animations themselves might be quite strange and sinister. And I'm not entirely clear on why that is, whether there is malicious intent behind it or not. Yeah, so it's super creepy.

Yeah, fascinating story. And interestingly, I think the solution from YouTube was that now, when you go to upload a video, you have to say whether it is meant for children or not. So there the solution wasn't necessarily more AI; probably there was some tweaking to the algorithm, but it was also about tweaking the UI, tweaking the information collected, and being more mindful of these sorts of crazy things.

Which, yeah, maybe something like that is needed for LinkedIn. I don't know. We'll see. Sometimes rules-based algorithms are useful, especially for post-processing of what the AI does.

And on to our last article, a bit of a laugh. It is GAN Theft Auto, a snippet of GTA 5 made by AI. And this was made by YouTuber Harrison Kinsley. There's an AI tool called GameGAN, and he basically recreated a stretch of highway from GTA 5

using a GAN. And it's worth watching, because a lot of the details are pretty interesting. You know, the glare from the sunlight moves around properly. And it just looks very cool, because it is an AI-generated scene from GTA. Yeah, it's definitely a fun video to watch. You can check it out. It recreates GTA in the sense that you can actually control it; you can interact with this program, give it inputs,

and the environment changes accordingly. So you can drive a car, but of course it's not nearly as, let's say, pretty as GTA. It fails to capture a lot of the complexity: when you crash into a car, there was a case where they showed that it split a police car in two when it crashed head-on, instead of having any real physics.
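For a sense of how a GameGAN-style setup works, here is a minimal sketch of the interaction loop: a neural network takes the current frame plus the player's action and hallucinates the next frame. The class, shapes, and sizes are placeholders, not the actual GAN Theft Auto code.

```python
# Minimal sketch of a GameGAN-style interaction loop: the "game" is a neural
# network that maps (current frame, action) to the next frame. This toy model
# is a placeholder; a real one is far larger and trained on recorded gameplay.
import torch
import torch.nn as nn

class NeuralGameEngine(nn.Module):
    def __init__(self, action_dim: int = 3):
        super().__init__()
        # Toy stand-in for the learned dynamics + rendering network.
        self.net = nn.Conv2d(3 + action_dim, 3, kernel_size=3, padding=1)

    def forward(self, frame: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # Broadcast the action vector into extra image channels, then predict
        # the next frame from the current frame and the action.
        b, _, h, w = frame.shape
        action_planes = action.view(b, -1, 1, 1).expand(b, action.shape[1], h, w)
        return torch.sigmoid(self.net(torch.cat([frame, action_planes], dim=1)))

engine = NeuralGameEngine()
frame = torch.rand(1, 3, 48, 80)             # current (downscaled) frame
for _ in range(10):                          # "drive" for a few steps
    steer = torch.tensor([[1.0, 0.0, 0.0]])  # e.g. [left, straight, right]
    frame = engine(frame, steer)             # next frame is hallucinated; no physics engine
```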

So it's definitely more of a fun project, you know, a bit of a silly thing, not anything commercial or applicable more generally. Okay. And that's it for us this episode. If you've enjoyed our discussion of these stories, be sure to share and review the podcast. We'd appreciate it a lot. And now be sure to stick around for a few more minutes to get a quick summary of some other cool news stories from our very own newscaster, Daniel Bashir.

Thanks, Andre and Sharon. Now I'll go through a few other interesting stories that haven't been touched on. First off, on the research side, Google is moving dozens of employees from its AI research division into a new group focused on machine learning that will be the center of gravity for how Google applies machine learning to its own products.

According to Business Insider, the creation of the new group underscores the growing importance of machine learning for the future of Google's business. The group will be led by Nadav Eiron, previously an engineering VP in the AI division. For our second story, on June 21st, the Toyota Research Institute announced an advancement in its robotics work.

Toyota roboticists were able to train robots to understand and operate in complicated situations that confuse most robots, like recognizing and responding to transparent and reflective surfaces. Transparent glass on a table, for example, might give robots trouble when navigating a kitchen.

To overcome this, as TechCrunch reports, the roboticists developed a novel training method to perceive the 3D geometry of the scene while detecting objects and surfaces. Larger news in the robotics world comes on the business front.

As Finbold reports, vehicle manufacturer Hyundai Motor Group has announced its acquisition of a controlling stake in U.S. robotics firm Boston Dynamics. The deal was initiated last December, and Hyundai will pay roughly $880 million for its stake.

Boston Dynamics, known for releasing technologically advanced machines, is currently valued by Japanese investment firm SoftBank at $1.1 billion. Finally, we turn to some stories about AI and society.

In the midst of ongoing backlash against Google, DeepMind research scientist Raia Hadsell made a public call for collective responsibility in developing ethical AI during a recent talk. While developing responsible AI systems is a multidisciplinary effort, Hadsell is particularly interested in what AI researchers and practitioners can actively do,

and during the talk she detailed resistance she has met within the research community and changes she has helped bring about. And finally, according to Stanford's Institute for Human-Centered Artificial Intelligence, a new program at the university requires AI researchers to consider the potential negative impacts of their proposals before being greenlighted for funding.

The Ethics and Society Review requires researchers to not only consider negative impacts, but also come up with methods to mitigate those risks and, if needed, collaborate with an interdisciplinary faculty panel to ensure those concerns are being addressed before receiving funding from the Institute.

Thanks so much for listening to this week's episode of Skynet Today's Let's Talk AI podcast. You can find the articles we discussed today and subscribe to our weekly newsletter with even more content at skynetoday.com. Don't forget to subscribe to us wherever you get your podcasts and leave us a review if you like the show. Be sure to tune in when we return next week.