
AI Gets More Efficient, Improves Taxation, and Looks Out For Masks

2020/5/17

Last Week in AI

People
Andrey Kurenkov
Sharon Zhou
Topics
Andrey Kurenkov: This episode discusses AI news, which has changed how I think about AI research and its impact. OpenAI's research shows that the amount of compute needed to reach high-performing AI models has kept falling in recent years, and algorithmic efficiency gains may outpace those from Moore's Law. In AI research, computational efficiency is usually under-discussed and compute usage goes unevaluated, so some results come at excessive computational cost. OpenAI is tracking model efficiency and pushing for computational efficiency as an evaluation standard. Industry cares a great deal about the efficiency of AI models. OpenAI's benchmarking effort will start with image recognition and machine translation. Traditional labs and researchers lack the incentive and capacity for large-scale collection and reporting of efficiency data. Researchers at the Allen Institute for AI, Carnegie Mellon, and the University of Washington also advocate making efficiency an evaluation metric. The focus on AI efficiency aligns with the idea of green AI. Salesforce's AI Economist project uses AI to simulate an economy and design tax policies that maximize productivity while minimizing inequality. Its simulated-economy experiments show that AI-designed tax policies outperform traditional models in the simulated world. The project is similar to AlphaZero and could be used to test and validate economic theories. The challenge in designing tax policy lies in defining the reward function and the optimization objective. Clearview AI has stopped selling its facial recognition app to private companies and is terminating all contracts in Illinois. Media scrutiny influenced Clearview AI's decision. Leaks of Clearview AI's data could lead to even more serious risks. Companies like Clearview AI and Banjo have ties to far-right organizations. France is using AI to detect whether metro riders are wearing masks, but it does not collect personal information; the computation is done at the edge, and the data is never uploaded to the cloud. The system generates statistics every 15 minutes and only gives authorities the proportion of riders wearing masks. France's AI mask-detection system is meant to encourage mask wearing, not to punish individuals. The city of Cannes plans to use the system to assess mask needs and the effectiveness of public messaging. The system protects privacy while still monitoring mask wearing effectively. It will not be used to identify or punish individuals; it identifies areas with low mask-wearing rates so that law enforcement can be deployed there. Its strengths are its edge-computing implementation and its clearly defined scope of use. If AI surveillance has clear goals and transparent communication, it may be acceptable to the public. France's system could serve as a good model for other countries to follow. Europe leads in AI privacy regulation.

Sharon Zhou: AI researchers lack incentives to pay attention to AI's real-world applications, but following the news helps us reflect on how our research relates to real-world trends. Media coverage of AI often doesn't match researchers' perceptions, but it has positive aspects too. AI performs well in simulated environments with clear objectives and known dynamics, but its limitation lies in how faithful the simulation is. The project tested the effect of AI-designed tax policies on human behavior via Amazon Mechanical Turk, though with limited scope. Clearview AI needs stricter regulation. The challenge in designing tax policy lies in defining the reward function and the optimization objective. Comfort with using AI to detect mask wearing depends on what actions are taken as a result.


Chapters
The hosts discuss how covering AI news has made them more aware of the real-world impact of their research, and the media's role in reflecting on AI work.

Transcript


Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear from AI researchers about what's actually going on with AI and what is just clickbait headlines. I am Andrey Kurenkov, a third-year PhD student at the Stanford Vision and Learning Lab. I focus mostly on learning algorithms for robotic manipulation in my research. And with me is my co-host...

I'm Sharon, a third-year PhD student in the machine learning group working with Andrew Ng. I do research on generative models, improving generalization of neural networks, and applying machine learning to tackle the climate crisis.

Hope you're doing well, Sharon. This is our 10th episode of this podcast, at least in the news discussion format. So we've been doing it for a little bit now. And yeah, I wonder...

Has discussing the news, and kind of checking in on what's going on with AI out there and not just in research, changed your outlook or made you think about it more?

It definitely made me think about it more, and also about the impact of my research, and just generally our research in the lab: how it affects people and how it's also perceived.

Yeah, I agree. I think as AI researchers, actually, we don't have that much incentive or push to actually pay attention to what's happening with AI out there in the real world outside academia. But it is kind of cool to be more aware of it and keep up with it. And it does make me reflect more on my research and how it relates to these trends. That's exactly what the NeurIPS guidelines want us to feel.

Yeah, no, the whole field is moving in that direction, which I think is presumably for the best. You know, we have incremental advances on datasets and so on. And it's good to think more about how this stuff will affect the real world. It's also interesting that there are news articles, perhaps like

pop science articles, about our work; it gets us to think about it a little bit more, because we hear people who are not deep in the weeds reflecting on it. And I imagine that some fields don't have this luxury, where they don't have other people reflecting on

essentially their work. And so they don't get that for free, in a sense. They would have to spend time outside to really reflect on it. Here, I think we get this luxury of having these news articles already reflect on a lot of our work. And it just makes it easier to get our minds thinking about it a little bit more. Yeah, it's kind of a luxury and a bit of a curse, maybe, how AI is...

You know, I think it's something that a lot of people find interesting and cool, and so there is a lot of media hype and attention around it. Often AI researchers, I think, get annoyed at how the media represents the state of AI, right? Because it doesn't quite match how we perceive it. But there are also the positive aspects, as you say.

But enough discussion of the podcast and ourselves. Let's get on with talking about this week's news. Starting with this article from VentureBeat titled OpenAI Begins Publicly Tracking Model Efficiency. So this is a news article covering some new research from the

semi for-profit company OpenAI, which announced on May 5th that it has begun tracking machine learning models that achieve state-of-the-art efficiency, an effort it believes will help identify candidates for scaling and for achieving top overall performance. So what this is about is basically: how much energy, how much computation do you need to spend to get a

performant AI model. And the findings from OpenAI were that over the last decade, the amount of computation needed to achieve state-of-the-art performance has been decreasing pretty consistently across multiple areas.

In fact, OpenAI claims that spotlighting these models will also, quote, paint a quantitative picture of algorithmic success, which in turn will inform policymaking by renewing the focus on AI's technical attributes and societal impact.

And I think that definitely is true. And we can dive a bit into some of the specifics that they found in their survey. For example, they found that Google's Transformer architecture surpassed a previous state-of-the-art model called seq2seq, which was also developed by Google, with 61 times less compute.

They also found that DeepMind's AlphaZero, a system that taught itself from scratch how to master the games of chess, shogi, and Go, took eight times less compute to match an improved version of the system's predecessor, AlphaGo Zero, just one year later.

Overall, they also speculate that algorithmic efficiency might outpace gains from Moore's Law, the observation that the number of transistors in an integrated circuit doubles every two years, and which has explained and pushed a lot of success in AI and software at large.
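
As a quick back-of-the-envelope illustration of that claim (our own sketch, not from the episode): OpenAI's accompanying "AI and Efficiency" analysis reported roughly a 44x drop in the compute needed to reach AlexNet-level ImageNet performance between 2012 and 2019, and a couple of lines of arithmetic show why that outpaces Moore's Law.

```python
import math

# Figures from OpenAI's "AI and Efficiency" analysis (the work this
# article covers): ~44x less compute to reach AlexNet-level ImageNet
# performance over the 7 years from 2012 to 2019.
reduction_factor = 44.0
years = 7.0

# Implied halving time of the required training compute, in months.
halving_months = years * 12 / math.log2(reduction_factor)
print(f"Efficiency halving time: {halving_months:.1f} months")  # ~15.4

# Moore's Law doubles transistor counts roughly every 24 months, so
# algorithmic efficiency here improves compute costs noticeably faster.
print("Moore's Law doubling time: 24 months")
```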

Yeah, I tend to agree with you that this sort of tracking could actually do what they say it might do, which is renew a focus on this topic of efficiency. Something that non-researchers might not know is that so far in AI research, efficiency, how much compute it takes, hasn't been as discussed. So usually when you read a paper, you have...

kind of a graph or a table showing the performance statistics and you say, oh, I was able to get this level of performance. But you don't usually necessarily include the amount of computation it took to get to that level of performance unless your paper specifically deals with that.

And that actually has been kind of a flaw or maybe something to criticize in AI research where in some cases you get really good new results, but that comes at massive compute that only really large companies or labs are able to do.

So this kind of tracking seems to point to a future where maybe it becomes more of a standard to actually evaluate and include specifically how much compute it takes and how it compares to prior work in terms of compute for your system.
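
To make that concrete, here is a minimal sketch (ours, not OpenAI's actual methodology) of the bookkeeping such a standard implies: log accuracy against cumulative training compute during each run, then compare models by the compute they need to first reach a fixed target, rather than by final accuracy alone. All curves and numbers below are hypothetical.

```python
from typing import Optional, Sequence

def compute_to_reach(accuracies: Sequence[float],
                     flops: Sequence[float],
                     target: float) -> Optional[float]:
    """Cumulative training compute (FLOPs) at which a run first reaches
    `target` accuracy, or None if it never does. accuracies[i] is the
    evaluation accuracy logged after flops[i] total FLOPs of training."""
    for acc, f in zip(accuracies, flops):
        if acc >= target:
            return f
    return None

# Toy logged curves for two hypothetical runs on the same benchmark.
old_accs, old_flops = [0.60, 0.70, 0.76, 0.78], [1e18, 5e18, 2e19, 8e19]
new_accs, new_flops = [0.65, 0.76, 0.79, 0.80], [5e16, 3e17, 1e18, 4e18]

old = compute_to_reach(old_accs, old_flops, target=0.76)
new = compute_to_reach(new_accs, new_flops, target=0.76)
if old is not None and new is not None:
    print(f"New run reaches 76% accuracy with {old / new:.0f}x less compute")
```

Reporting that ratio alongside the usual accuracy table is essentially what such efficiency benchmarks would ask for.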

I think it's really important that OpenAI is making this push. I think in research, people push on accuracy without thinking about efficiency, like you said. But in practice, in industry, people care a lot about efficiency, because that comes with increased costs, increased time for inference, all sorts of things. This is what matters a lot in practice. And I've seen folks in industry, companies, really, really care about it.

But they don't know how to search for this in the research that has come out, which only shows essentially this leaderboard of accuracy without any sign of efficiency. So as part of its benchmarking effort, OpenAI also said it'll start with vision and translation efficiency benchmarks, specifically ImageNet and WMT14, which is a machine translation benchmark,

and it will consider adding more benchmarks over time. And I think this will be a really, really good push on the academic front. I'm also actually kind of impressed with OpenAI.

being able to be this bridge between industry and research and being able to push something like this out, because I think perhaps they've been able to gain insight on the industry front to see that this is a need, but are also able to implement it in a way that researchers in academia and the like would find amenable.

Yeah, I agree. I think this is interesting and it's the kind of thing that a traditional lab or researcher might have less incentive or less ability to do. Just devoting time to compiling information and presenting this kind of report.

that doesn't necessarily fit that cleanly into standard research practices. So usually you would have some sort of idea, you would write a paper about it, you'd get some results and experiments. Here they're just collecting what is known and making kind of a case that we should care about efficiency. So it does show that their kind of quasi-academic, quasi-industry status can be used for good, as in this case.

And it's also worth noting that this point of view, that people should care about efficiency, agrees with researchers at the Allen Institute for AI, Carnegie Mellon, and the University of Washington, who also, last year, advocated for making efficiency a more common metric alongside accuracy, which is still more common.

And of course, this is symbiotic with the work on green AI or making AI more efficient so that we can have less impact on climate change at large. And so I think this is a great push and going in the right direction. Agreed. Yeah.

So let's move on to our next article here, called "An AI can simulate an economy millions of times to create a fair tax policy," which is from MIT Technology Review. And this is about a project called the AI Economist, which was done by the company Salesforce. And basically,

it's about this project where they created a simulated tiny little city. You can think of something like SimCity, where you had some notion of workers, and they used AI techniques to try and develop a

tax policy. And when I say tax, we should acknowledge that this is within this limited simulation. But they designed kind of a tax policy that was trying to maximize productivity, so how much work was done, while also minimizing inequality, so the gap between the rich and the poor.

And they showed that if you simulate a lot of trials and a lot of different tax policies, you can actually get something pretty interesting, something that, at least in a simulated world, works better than some traditional tax models. So kind of a cool result. Of course, pretty limited, because it's not really simulating the full complexity of human societies, but still interesting. What do you make of it, Sharon?

I think it's a cool direction, but of course, AI has been shown to work really well, or at least has been able to optimize environments really well, when there are clearly...

defined rewards and objectives, and the whole environment is mapped and known. And I think just in that limited kind of game setting, it works really well. So I'm not surprised that it works well here. But of course, it's understanding where this model breaks down, where the simulation doesn't quite fit the real world, that I think is where this will break. Though something I do find really interesting is that they did test this on

over 100 crowd workers through Amazon Mechanical Turk to see how it would influence human behavior. So that's a little bit of something in the wild. But of course, having done those experiments before, often it is quite limited as well.

Yeah, as you say, with AI models it's known that if you give them a closed world, a little game, you can develop some pretty impressive performance these days. So this article actually compares this to AlphaZero, which is from DeepMind and learned to play Go at a superhuman level. And I guess an interesting comparison is that AlphaZero is actually used now by professional players to try and gain insight into the game of Go and learn new things about it.

So the hope here, I think, is that economists can use this to sort of play test or verify ideas about

different tax policies, which is, of course, promising, but presumably also requires this idea to be developed a lot more. But as a direction, it still seems pretty cool. I think we can all agree that we're not happy with the current tax policy, perhaps. We can always do better, but I have a feeling we'll never be satisfied. I think we can always do better. Yeah.

And I think that's what makes it challenging, actually, because what is the reward function? What is the objective here? Who are we optimizing for? Who should be happiest, quote unquote, right? And what is fair? So I think it could get tricky in terms of what kind of optimization is done.
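
For a sense of what one answer to those questions could look like, here is a toy objective, loosely in the spirit of what the AI Economist work describes (our sketch, not their actual code): score an economy by its total productivity scaled by equality, where equality is one minus the Gini coefficient of incomes. All the numbers are made up for illustration.

```python
import numpy as np

def gini(incomes: np.ndarray) -> float:
    """Gini coefficient: 0 means perfect equality, values near 1 mean
    maximal inequality. Standard sorted cumulative-sum formulation."""
    x = np.sort(incomes.astype(float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def social_welfare(incomes: np.ndarray) -> float:
    """Toy objective: total productivity scaled by equality (1 - Gini).
    Maximizing this rewards output while penalizing inequality."""
    return (1.0 - gini(incomes)) * incomes.sum()

flat = np.array([10.0, 10.0, 10.0, 10.0])   # equal incomes
skewed = np.array([37.0, 1.0, 1.0, 1.0])    # same total output
print(social_welfare(flat))    # 40.0
print(social_welfare(skewed))  # 13.0: heavily penalized for inequality
```

Every choice baked in here (the equality measure, the multiplicative trade-off, whose incomes count) is exactly the kind of value judgment the hosts are pointing at.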

And now on to a company that we talk about quite a bit in surveillance news, Clearview AI. This is from The Verge. The article is titled "Clearview AI to stop selling controversial facial recognition app to private companies."

So we've covered them before in past podcast episodes. Clearview AI has been one of the hottest topics in tech news recently, specifically around surveillance and the ethics of what they're doing. So their controversial face recognition app allows a user, such as a policeman typically, to snap a picture of someone and immediately find a match in their Clearview AI database.

which has mass-scraped images from numerous sites, such as social media, so Facebook, Twitter, et cetera, to create profiles of people. A BuzzFeed News article some time ago exposed that Clearview, who claimed their product was for law enforcement, had many, many other customers as well, including Bank of America, Macy's, and Walmart. And the list goes on, thousands of them. And there are some individuals there, too.

The article says Clearview AI will no longer sell its app to private companies and non-law-enforcement entities, and it will also be terminating all contracts in the state of Illinois, regardless of whether the contracts are for law enforcement purposes or not. Yeah, so a pretty welcome development, I guess. And an interesting

demonstration that having an active set of journalists covering and scrutinizing these companies seems to have an effect, where this follows quite quickly after the coverage. We actually talked about this news article, I think a few podcasts ago, and now they're announcing this. I guess one question is, is this going far enough? I mean,

It's good that they're no longer selling to private companies, but it still seems like a bit of a wild west, and really strange that they were able to do so in the first place. This article notes that they are actually ending all contracts in the state of Illinois, and that's because there's actually a lawsuit there

dealing with the Illinois Biometric Information Privacy Act, which restricts the use of facial recognition software. But it doesn't apply to the rest of the US. So it seems that Clearview still is hoping to operate elsewhere. So, yeah, I don't know. Do you think Clearview should...

limit itself further, Sharon, or do we need more scrutiny of it? I think based on the way it's been operating, it definitely requires greater reins on this company. I'm not quite sure they are prudent about not violating privacy. Yeah, it's also worth noting that some of this reporting came about due to leaks

from Clearview. And I think if they leaked, you know, their stash of photos, their stash of identities, that could definitely lead to even, you know, worse actors, worse companies getting them. So it seems very strange that we can have this company operating with seemingly little oversight right now still. And I personally would like to see more of it soon.

Bit of an add-on to that: there's been an article called "AI and the Far Right: A History We Can't Ignore." In summary, we've seen a lot of concerning stories recently about how some of these companies are using AI for surveillance, such as with Clearview and Banjo. And while the operation of these companies is alarming

enough to raise eyebrows on its own, we've also seen that they were connected with a number of actors with questionable purposes for using their technology as well, particularly tied to far-right organizations.

Yeah, so some more specifics here. There was a report from OneZero that talked about how the founder and CEO of Banjo, Damien Patton, was a former member of the Dixie Knights of the KKK. So he was actually a far-right member as a teenager,

and actually committed some crimes, including shooting at a synagogue, I believe, when he was younger. And also there was a report about the founder of Clearview AI

showing that he had been affiliated with various far-right individuals: Breitbart writers, Pizzagate conspiracy theorists, things like that. So this is a bit of a vague connection, where these individual CEOs have these connections. It doesn't necessarily mean that there's a giant conspiracy or anything, but nevertheless, it does,

again, make it feel kind of weird that people with somewhat extreme views or connections are running these companies that are, you know, doing surveillance, essentially. So as AI is being used by the far right, AI is also being used by a fairly liberal country. The next article is from Slate, titled "France is using AI to detect whether people are wearing masks."

So we've already heard stories of how countries around the world are keeping track of their citizens in the midst of the pandemic. And among the programs of over 30 countries that have ramped up surveillance in response to the pandemic, China's, Singapore's and Israel's have all come under particular scrutiny for concerns over privacy violations.

So even France, a country known to be very averse to surveillance, has integrated AI tools into its CCTV cameras in the Paris metro.

So while France has shown itself willing to use surveillance to monitor citizens' behavior, their usage isn't as invasive as it sounds. The article states that the system, created by French startup Datakalab, identifies the number of riders who appear to have face masks on

without collecting and storing data on individual passengers, the company says. And according to The Verge, the software works locally wherever it's installed, so the data is never sent to the cloud or to Datakalab's offices. This is also known as computing on the edge.

So essentially, every 15 minutes it generates statistics that are then sent to the authorities, who only have access to a dashboard that displays the proportion of riders with masks. So it doesn't pinpoint individuals at all. And this is really France trying to differentiate themselves from these other countries that have come under particular scrutiny, while still being able to deploy some sort of surveillance to try to encourage their citizens to wear masks.
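
Here is a minimal sketch of that privacy-preserving pattern as described in the article (hypothetical code, not Datakalab's): per-frame detection runs entirely on the device, frames and identities are never stored, and the only outbound data is an aggregate mask-wearing ratio every 15 minutes.

```python
import random
import time

REPORT_INTERVAL_S = 15 * 60  # one statistics window: 15 minutes

def detect_riders(frame):
    """Stand-in for the on-device detector: returns (riders, masked)
    counts for one frame. A real system would run a vision model here;
    crucially, it outputs counts only: no identities, no image crops."""
    riders = random.randint(0, 20)  # simulated detections
    masked = sum(random.random() < 0.8 for _ in range(riders))
    return riders, masked

def run_edge_loop(frames, send_stats, now=time.monotonic):
    """All inference happens on-device; only aggregates ever leave it."""
    riders = masked = 0
    window_start = now()
    for frame in frames:  # frames are processed and never persisted
        n, m = detect_riders(frame)
        riders, masked = riders + n, masked + m
        if now() - window_start >= REPORT_INTERVAL_S:
            ratio = masked / riders if riders else None
            send_stats({"mask_ratio": ratio})  # the only outbound payload
            riders = masked = 0
            window_start = now()
```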

Yeah, so this is kind of a different model of surveillance where it seems to be doing a lot to guard people's privacy while still allowing for monitoring and data collection that is presumably useful for knowing where to distribute masks, where masks are not being worn, stuff like that.

I'm curious, Sharon, if we had a system like this, let's say in the Bay Area, in the US, would you be comfortable with that kind of tracking of mask wearing? I guess it depends what the actions taken would be. So they see these statistics, but what types of actions would this enable them to then perform? Yeah, it does...

It would make sense to actually want to know why or what the motivation is to have this data. But this article notes that one place it's been installed since late April, the resort city of Cannes, plans to give free masks to all residents. So the technology might help with distribution, assessing need and understanding whether government messaging around masks is effective or not.

I think that's a valid thing, as long as it's not being used as a foot-in-the-door approach to mass surveillance on another level. Though the way that they've been doing it, which is trying to be very careful about privacy, I think is valid. So the article does state that although it is now mandatory to wear a mask on public transport in France, and the country is considering fining individuals 135 euros, which is about $147,

for going without them, the software will still not be used to identify, rebuke, or fine people.

And I think this is important, because essentially what they're trying to do is identify hotspots where groups of people are probably not wearing masks, and then perhaps deploy some kind of law enforcement there. So not use the software directly, which then falls prey to, you know, flaws in the software and potential mass surveillance, but maybe deploy law enforcement a bit more effectively. And I think people are a little bit

more okay with that? Yeah, I think there's a few good things here. So for one, the implementation itself, as you said, is edge AI. So the computation is done on board, which means that it's kind of more limited, hopefully. Like, you can't just change this into a mass-surveillance facial recognition system. It's probably a little more limited, and that's a good thing.

And as you said, having a clear scope, where this is not going to be used to fine or identify people, it's being used just to find hotspots. If you use AI in that way, then surveillance might be sort of acceptable, if you are mindful about how you message and explain these things to us, the regular people.

And I think that this could potentially be a really good model that other countries can then follow. Because before this, the models were a bit shocking or uncomfortable, I would say. And perhaps this is one that the US or other countries might be more comfortable adopting, especially democracies.

Yeah, and I guess it sort of makes sense, because Europe has often led on privacy and AI regulation. We've heard of some of the reports they've put out on how to do it. So it's cool to actually hear that they are trying to do it, and doing it in practice, in a way that seems to be pretty well thought out, at least in this case.

Okay, and with that, we'll finish up our 10th episode of Let's Talk AI. Thank you so much for listening to this week's episode. You can find the articles we discussed here today and subscribe to our weekly newsletter with similar ones at skynettoday.com. Subscribe to us wherever you get your podcasts and don't forget to leave us a rating if you like the show. Be sure to tune in next week.