
On the State of AI Ethics Report with its authors

2020/7/24

Last Week in AI

People
Abhishek Gupta
Andrey Kurenkov
Camille Lantigne
Muriam Fancy
Ryan Khurana
Topics
Andrey Kurenkov: This interview episode discusses the State of AI Ethics report published by the Montreal AI Ethics Institute. The report stresses that it is essential to keep an eye on the development of AI and its impact on society and human interaction, and aims to summarize the most important developments in AI ethics so that readers have the necessary tools and knowledge. Abhishek Gupta: Many people contribute to AI ethics, but it is hard for non-specialists to get a full picture of the field. The report aims to help them find the knowledge and tools they need, and it covers work from mainstream institutions and scholars as well as impactful research from lesser-known voices. The report is meant to be a handy reference for quickly catching up on a rapidly evolving field. Camille Lantigne: Writing the report's summaries not only lets readers learn about the research; it also pushes the Institute's staff to keep learning and understanding new research, and makes research findings more accessible to the general public. Ryan Khurana: Some of the research examines how to assess the importance of AI ethics contributions and how to rethink the framing of algorithms and AI ethics. Other work stresses that people's understanding of AI progress is unclear, and that many models that do well on toy problems do not generalize to the real world. Muriam Fancy: The report looks at AI's impact on human rights and privacy, particularly the role of AI in surveillance and data tracking during the pandemic.

Transcript

Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear from AI researchers about what's actually going on with AI and what is just clickbait headlines. I am Andrey Kurenkov, a third-year PhD student at the Stanford Vision and Learning Lab and the host of this episode. On this special interview episode, you'll get to hear from several of the authors of the June 2020 State of AI Ethics report from the Montreal AI Ethics Institute,

which is an international nonprofit research institute helping people understand the societal impacts of AI and equipping them to take action. With us is Abhishek Gupta, who is the founder of the Montreal AI Ethics Institute and a machine learning engineer at Microsoft, where he serves on the CSE Responsible AI Board.

His research focuses on applied technical and policy methods to address ethical, safety, and inclusivity concerns in using AI in different domains. And also with us are Camille Lantigne, Ryan Khurana, and Muriam Fancy, who are also AI ethics researchers at the Montreal AI Ethics Institute. So thank you to all of you for joining us on this episode and for telling us more about the report. Yeah, no, thank you for having us, Andrey. It's great to be on here along with Camille, Ryan, and Muriam.

So let's just dive straight in and start talking about the report. So quoting from its introduction, it states that it has never been more important than now to keep a sharp eye out on the development of this field and how it is shaping society and our interactions with each other.

With this inaugural edition of the State of AI Ethics Report, we hope to bring forward the most important developments that caught our attention at the Montreal AI Ethics Institute this past quarter. Our staff has worked tirelessly over the past quarter, surfacing signal from the noise so that you are equipped with the right tools and knowledge to confidently tread this complex yet consequential domain.

So I think that's a very clear motivation for why you created the report and what you hope people can get out of it. So can you maybe tell us more about the process of working on it and kind of how it came together to fulfill that aim?

Yeah, no, absolutely. And so, you know, I think 2019 was a great year in terms of people paying a lot of attention to this field of AI ethics, which is great because we have a lot of

diversity of viewpoints and a lot of scholars and practitioners coming not only from the field of machine learning but from other fields in the social sciences, who've started to, you know, raise issues, try to propose solutions, and work together in, you know, coming up with things that are hopefully applied and practical.

But one of the concerns of having a lot of people start to contribute to this space is that it can become hard for people whose primary job function is not responsible AI to navigate this space, to find the necessary pieces of knowledge, the necessary tools and techniques that they can apply to their research and work. And so that really was the motivation for us.

to compile the report. What we found was that we were doing this on a weekly basis with the newsletters that we put out. And so our process of accumulating sort of the necessary bits of information and knowledge in the space was

part serendipity and part, you know, systematic research. And, I mean, systematic research is something that, you know, everybody does, but we're also big believers in serendipity. And I think there is something to be said in terms of discovering some of the less popular works, if I may use that phrase. And the reason I say that is because

a lot of the mainstream authors and academics and institutions from which you get some of this work are fairly easy to find and are amplified quite a bit. But there's also a lot of meaningful work being done by, you know, again, within quotes, lesser known scholars, but that is quite impactful. And finding those voices, elevating them,

and amplifying the message of their research and the lessons learned from that, I think is equally important. And so that's something that we've been actively trying to do is to not only find some of the most impactful works from the large organizations, the more influential authors, but also the ones coming from those who are maybe not as well known yet, those who are doing meaningful work.

And, you know, sort of compiling that into the report, our goal was that it would be a little bit like, you know, your handy reference for catching up on this very, very rapidly evolving field quickly. And, you know, use that as a guide for implementing some of these ideas into practice.

I see. Yeah, it makes a lot of sense. Even as an AI researcher, I find myself barely keeping up with a lot of these different takes on AI and ethics. So it was very exciting to see such a compilation.

Maybe Camille or Ryan or Muriam, can you chime in on how you were involved in the process? What did working on this report involve? So yeah, I got the chance to contribute a few of the summaries, a few of the research summaries that are in the report. And through that,

Not only do the people who read the report get to learn more about the research, but it's also a great incentive for us at the Institute to keep up with new research and deliver it and understand it in a way that is accessible not only to high-level academics, but also to the general public. So that was a great insight for me.

I see. Yeah, yeah. Actually, we have, I believe, talked in the past on this podcast about some of the research summaries from the Montreal AI Ethics Institute, and having such a high-level kind of digest of a more substantive work definitely makes it more accessible.

Moving on, actually, from that note, I was wondering, so this report is quite large. It's more than 100 pages and it covers many topics like agency and responsibility, disinformation, jobs and labor. So I was wondering for each of you if there are any particular lessons that you got this year working on it?

or any even particular articles or research papers that particularly struck you or are particularly memorable that you can highlight? So, yeah, I think, you know, one of the papers that I was really happy that, you know, we came across and had a chance to feature in there was on how...

biases in pre-trained NLP models surface and affect people with disabilities. And I think that was something that wasn't really well explored in the literature prior to that. I mean, there were a couple of reports, I think, from the AI Now Institute that did talk about a research roadmap for addressing

some of the, you know, consequences and impacts of AI systems on people with disabilities. But there really wasn't too much work that was done otherwise on the impacts. And it was interesting to see that now researchers have started to talk about that and also to, you know, look at these pre-trained models, things that, you know, have potential downstream consequences because

there is a lower barrier to using pre-trained models compared to maybe rolling your own or starting from scratch. And just because of the higher accessibility, the impact of

the biases that such models have is also so much larger. So that was something that was interesting. Another piece that I found to be quite insightful was looking at Ubuntu ethics as a way of

breaking away from the traditional ethical standpoints and viewpoints in discussing AI ethics, which I think was not something that, again, gets talked about very often. But when we're talking about inclusivity, we really need to be inclusive of different ethical perspectives as well.

even when they maybe don't necessarily jive with the ones that are quote-unquote mainstream or the ones that we know and talk about a lot in the Western Hemisphere. I see, yeah. And yeah, I think that's part of what's great about the report is it highlights some of these more interesting, maybe less seen perspectives. Yeah, any other highlights that you can share?

If I could just jump in there. One of the most interesting sections I think is super valuable for people to read through is on the future of AI ethics. So there's two research papers that are summarized there that I think are quite critical for a lot of people who are both machine learning practitioners and people thinking about ethics. The first is called Beyond Near and Long Term. And I think this is a really important distinction because what

confuses a lot of people who think about AI is that they conflate the real issues that people are facing right now, such as the ones that Abhishek mentioned with biases in NLP algorithms, with this more, like, existential threat of a superintelligent AGI, or the threat of, what if AIs have so much potential

control, that we no longer need human labor, these very far-off threats that don't really match the current research priorities and capabilities. So this paper provided a really interesting framework on how to assess the importance of an AI ethics contribution and how we even think about algorithms. And instead of thinking about near and long term, they proposed thinking on an axis of

impact and capability. And the more the capability is somewhat detached from existing capabilities, the less it seems like a pressing topic. The more that the impact is narrowed to something really specific, the less it also seems like a pressing topic. So it really helps reprioritize how we frame AI ethics.

The other one that I think is really important was called Troubling Trends in Machine Learning Scholarship. And this, again, is really important because it highlights a lot of the ways people think about AI and the way that we think about the gains in AI as something that is not always really clearly understood. Like you can't easily quantify what progress looks like. And a lot of these

examples that perform really well on toy problems don't necessarily generalize to the real world. And understanding that is important both for ML researchers and for people that are putting their trust in algorithms and deploying them into business contexts. And having that frame really helps you make a lot more ethical judgments going forward. So I think that kind of more general understanding

ethics framing work in this paper is super valuable for people to take into account. I see. Yeah, those are two really interesting ones. I think we actually talked about the troubling trends paper when we saw your summary on the website. So it was great to see. Maybe next we can talk with Muriam. What were some highlights for you?

For sure. My interest in AI really goes into the connection with human rights and privacy. And so the law and governance and privacy sections were particularly interesting, especially with the highlight of AI's impact on certain communities, but also bringing to light the

role of coronavirus and how AI surveillance plays a big role in people's human rights and how that's governed. And I would say Translating a Surveillance Tool into a Virus Tracker for Democracies is a great example of an article that really speaks to some of those issues that I think Abhishek mentioned before, and same with Camille, which is that

the Montreal AI Ethics Institute is trying to lay out what's happening with artificial intelligence for the public, and not just for academics who already understand what's going on in the space. And I think that section did that really well for understanding how AI is playing into current events.

Great. Yeah. And it's nice that it covers that span from talking about AI scholarship, as Ryan mentioned, and as you mentioned, talking about larger scale AI governance and topics like that. And to round things off, Camille, did you have any particular articles or themes that you found maybe are highlights for you? Absolutely. I'll mention only two.

The first one I want to mention is actually the first summary that is presented in our report. So the title of the article is Robot Rights? Let's Talk About Human Welfare Instead. To me, that is a very important paper because I remember when I first started in the space of AI ethics,

I thought robot rights were very interesting and important, and they're kind of a very flashy subject. This topic tends to garner a lot of attention because it's quite shocking to people. It plays into our ideas of science fiction and robots being conscious. But unfortunately, I don't think that robot rights are...

Now, I don't think robot rights are a very pressing issue. And so I was really happy to see this article, this research paper highlighted here, because I think we really need to shift the conversation as the paper highlights towards human issues and not rights for robots.

So the second paper I want to highlight is Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning. That's by Henderson and colleagues. And I actually had the chance to write the summary for this paper. It strikes me as very important because we don't hear a lot about the impacts of

machine learning and AI on the environment, on our climate yet. It's not such a hot issue, but I personally think it should be. And I guess I'm somewhat biased here because I am working on these issues in my own work. But I was very, very happy to be given the opportunity to work on this specific paper, more largely on the topic of

the carbon impact of ML, because it's still so new and it should really be

more broadly explored. Yeah, I think the intersection between machine learning and environmental issues should be studied more widely. And there has been some great progress in the area of how AI and ML can

help mitigate carbon impact and climate issues. And that's really awesome. We do want to see more work happening in that area, but I think it's also crucially important to kind of assess how ML and AI might be contributing negatively

to carbon footprints and climate issues. So I thought that was a very vital perspective and I hope we hear more about it in the future. Great. Yeah, actually, I really like that work by Henderson et al. Henderson, I think, is at Stanford doing some very cool work. And I think I saw also that paper on robot welfare and I thought it

was very accurate to say that it's good to have these theoretical concerns, but

there's more to it with humans. So that was very interesting, to hear all of your highlights. I think that showcases kind of the broadness and all the variety of topics covered in the report. And it kind of makes sense that it is that deep, because there's a lot to AI ethics and a lot of dimensions to discuss.

It's great that there is a single report where you can go and sort of browse through all of these things.
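To make the kind of accounting that the Henderson et al. paper Camille highlighted argues for a bit more concrete, here is a minimal back-of-the-envelope sketch in Python. The GPU power draw, training time, PUE, and grid carbon intensity below are illustrative assumptions, not figures from the paper or from the episode.

```python
# Rough estimate of training energy use and CO2 emissions.
# Every input number here is an illustrative assumption.

gpu_power_watts = 300    # assumed average draw of one GPU
num_gpus = 8             # assumed size of the training job
training_hours = 72      # assumed wall-clock training time
pue = 1.5                # assumed data-center power usage effectiveness
kg_co2_per_kwh = 0.4     # assumed carbon intensity of the local grid

energy_kwh = gpu_power_watts * num_gpus * training_hours / 1000 * pue
co2_kg = energy_kwh * kg_co2_per_kwh

print(f"Estimated energy: {energy_kwh:.0f} kWh")       # ~259 kWh
print(f"Estimated emissions: {co2_kg:.0f} kg CO2e")    # ~104 kg CO2e
```

The paper goes much further, arguing for standardized, systematic reporting of exactly these quantities, but even arithmetic at this level makes the cost of a training run visible.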

On a similar note, another thing I found interesting is that besides research summaries, you also cover a lot of reporting, a lot of articles concerning AI ethics. So some of them are This Dating App Exposes the Monstrous Bias of Algorithms, or Racial Disparities in Automated Speech Recognition, or How Allstate's Secret Auto Insurance Algorithm Squeezes Big Spenders.

And it's interesting to see all these articles, because something we talk a lot about on this podcast is AI news, and actually seeing how AI is already out in the world and how there are actual current problems with it that people should be aware of and kind of avoid.

So going from there, I'm curious, yeah, what are maybe some of the concrete instances of AI being used in the world in problematic ways and not necessarily research or kind of conceptual takes, but what are some actual concrete instances of AI you found out about that were memorable or you think showcase kind of what is going on with AI right now?

So I think one of the interesting ones there is on how we're increasingly delegating our responsibility to do, you know, background checks in employment, where, you know, you're using predictive models to screen

applicants based on not only the data that they've submitted, but additional data that's scraped from elsewhere on the internet, from their social media profiles and other web presence. And what's interesting there is

I mean, you know, hiring processes are already opaque. Adding another layer of obfuscation on top of that makes it opaque even for the people who are doing the hiring, which is interesting because

you are making the process less and less transparent to the applicant, and they might be accepted or rejected based on obscure criteria that are clear neither to the applicant nor to the people who are doing the hiring, because you're increasingly sort of delegating away your responsibility and your judgment to external systems.

The other thing there is, I mean, if you're buying these systems off the shelf, which is most often the case, I mean, you're not developing these in-house, we don't know what values and norms the vendor is encoding into these systems, how they're going about weighting different features, or, you know, what they think are the important things to consider when looking at job applicants. So,

The reason this is important, again, and more so now than ever, is that, you know, human resources are limited. We're trying to do the best in terms of constrained timelines and resources when it comes to hiring.

We are making these decisions in a less transparent way that is going to affect people's livelihoods. It's not something benign like picking the next movie that you're going to watch on Netflix, but it's something that's going to

fundamentally change how you put food on the table for your family. And I think that having some degree of transparency and ethics there is of paramount importance, so that both the organizations and the applicants know what it is that they're being judged on.

I see. That's a great example. I think we also discussed that on here. Maybe Muriam, do you also have an example that you think is interesting? Not an example per se, but I actually wanted to quickly jump on Abhishek's point about...

The importance and the application of AI specifically in issues of the labor market. And I think something that has been brought up recently over time is understanding the inequality and the lack of inclusion in data and how

data is generally not representing the majority of people in the application itself, and how properties of exclusion have become really problematic when utilizing AI in instances such as hiring processes. So I just wanted to jump on Abhishek's point there and explain that the issues with data and

AI being used in this case are manifold and can start just simply from the design itself and the fact that data needs to be more inclusionary. And that is one of the ways in which AI ethics needs to have a place and really needs to be part of the conversation around these applications from the start.

Makes a lot of sense. Yeah. And that's why it's so good to keep up with such reporting, because you can be aware of what's actually going on right now in the real world and not, let's say, hypothetical future problems, which is where a lot of people's concerns about AI lie. Ryan, do you have any thoughts on this topic or other examples?

Yeah, so actually the thing that I wanted to mention builds off what both Abhishek and Muriam were saying. One of my favorite articles is actually a Wired article covered here called AI as an Ideology, Not a Technology, by Jaron Lanier and Glen Weyl, both at Microsoft Research. And I've had the pleasure of working with Glen Weyl through the RadicalxChange Foundation. And I think both of them are some of the most interesting thinkers about technology today.

One of the really fascinating things, because this is a really broad argument that they make,

is just that our conception of AI is not about a technology with capabilities, but about a way of doing things. And so when we speak about automation and the replacement of jobs that AI is going to cause, we often refer to it as if it has capabilities well beyond what it's actually doing, and as if it's almost an ethereal technology that no one made. It didn't result from anything and it's just going to replace people.

And we completely discount the process by which the data that goes into AI is created. We don't consider that a labor activity. The fact that whenever I do a CAPTCHA, I'm providing some training data identifying things, that doesn't matter, that's not labor. All the labeling that goes into different data sets that are used to train algorithms, that's not considered labor. And so this is a really...

ideological point that what is considered labor and what isn't is decided by the people who have power at the moment. And it's a really interesting thing where they also go into this idea that, you know, we speak of automation even in things that we fail to automate.

But that's because there's this view that, oh, it's not that we can't automate it. It's we can't automate it yet. That human labor that we view as less important, it's always replaceable. It's just not replaceable right now. AI that can do that is coming. It's just around the corner.

And this entire way of thinking pervades a lot of how the hype and the buzz around a lot of AI investments are made. And even if you're a well-intentioned researcher who is pushing things further, you have the incentive to present your ideas in this way. And obviously there are some people who think that they're just one step away from an AGI, so they truly believe their own hype.

But all of this creates this pervasive discounting of the actual labor that goes in. It makes us have anxieties well beyond the actual capabilities of a technology. And it makes it far more than just a question

of what a technology can do. It makes it a way of viewing work and the economy and social relations. And so that presentation makes the impact of AI very radical, and it allows us to understand why we have these ethical concerns to begin with.

I see. That's a great example. And it does speak to, I think, one of our goals with this podcast, which is to try and get across, from the perspective of AI researchers, what even is AI, right? What is it concretely right now in the real world? What are the limitations, all these things that

maybe are still percolating through the broader culture and that we are even still figuring out. And that's why it's really good to be aware of resources like this report that speak to that question. Speaking of this point around hype, I think it's fair to say there's been a lot of hype building up about AI in the past decade. And

Part of why that is, is incorrect perceptions of what it is versus what people think it might be.

So among all these topics that are covered in the report, disinformation, jobs, ethics, maybe robots, what are some topics you think maybe don't get enough attention in people's minds, or that maybe people pay too much attention to or are disproportionately worried about?

So I think one of the areas that gets covered a lot in popular coverage is privacy, which makes sense. I mean, given the current ecosystem and environment in terms of the COVID-19 and contact tracing apps. So it makes sense that privacy would get a lot of attention, as was the case last year in 2019 as well, where privacy was front and center.

One of the areas that I think deserves a lot more attention and doesn't get enough attention at the moment is machine learning security. And that's basically viewing machine learning systems from a cybersecurity lens and analyzing where

such systems can fail. So, you know, one of the most common examples discussed there is how adversarial examples can break down the performance of a machine learning system. So, you know, you have people painting their faces

so that they fool or, I guess, deactivate the recognition capabilities of facial recognition systems. You have little strips of tape that you put on a stop sign that confuses the computer vision system on a car. You have other adversarial examples in the wild that can confuse machine learning systems.
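For readers who want to see the mechanics behind examples like these, here is a minimal sketch of the classic fast gradient sign method (FGSM) in Python. The PyTorch model, inputs, and epsilon value are placeholder assumptions, and this is a generic textbook illustration rather than any specific system discussed in the episode.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` using FGSM.

    `model` is any differentiable classifier returning logits for a
    batched `image` tensor; `label` holds the true class indices;
    `epsilon` bounds the size of the perturbation per pixel.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that most increases the loss,
    # then clamp back to a valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A perturbation bounded this tightly is usually invisible to a person, which is exactly what the painted faces and stop-sign stickers exploit in the physical world.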

But one of the things that's important to understand there, you know, in this sort of cluster of attacks, you know, that includes model inversion, model evasion, data poisoning, model extraction, amongst others, is how that also has impacts from an ethics perspective. And what I mean there is, let's say you start with a machine learning system where you have done your best effort in terms of ensuring fairness

in terms of the outcomes from the system. You've applied some bias mitigation techniques to your training data. But if you've not put in place some of the machine learning security measures, what essentially ends up happening is that you've opened up this new attack surface through which, let's say, for example, by using data poisoning attacks, you can again now

skew the system in terms of producing biased outcomes. And then it becomes really problematic because all the efforts that you put in prior to deployment to have ethics and inclusivity be an integral part of your system sort of gets subverted

because you did not think about this from a machine learning security perspective. And so we have been doing some work in that space. In fact, Erick Galinkin, who is a researcher at the Montreal AI Ethics Institute, and myself, we just presented yesterday at a workshop at ICML a framework that we've been working on called Green Lighting ML,

which essentially helps to ensure confidentiality, integrity, and availability in machine learning systems when they're deployed. And I think that's something that doesn't get enough attention and should going forward. I see. Yeah, very interesting. I think security is one of these things you don't realize you need to worry about until you start to understand modern AI more and see all these kinds of adversarial examples and other limitations.
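To make the point about poisoning attacks subverting fairness work more concrete, here is a toy sketch of a label-flipping attack aimed at one subgroup. The synthetic screening data, the proxy feature, and the flip rate are all assumptions for illustration; this is not the Green Lighting ML framework Abhishek describes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic "screening" data: two skill features plus a proxy attribute
# (think ZIP code or similar) that reveals subgroup membership.
skills = rng.normal(size=(n, 2))
group = rng.integers(0, 2, size=n)           # subgroup 0 or 1
X = np.column_stack([skills, group])         # the proxy leaks into the features
y = (skills.sum(axis=1) > 0).astype(int)     # ground truth ignores the group

def positive_rate(model, X, group, g):
    """Share of group g that the model accepts."""
    return model.predict(X[group == g]).mean()

clean = LogisticRegression().fit(X, y)

# Poisoning: an attacker who controls part of the training data flips
# positive labels to negative, but only for group 1 examples.
y_poisoned = y.copy()
flip = (group == 1) & (y == 1) & (rng.random(n) < 0.7)
y_poisoned[flip] = 0
poisoned = LogisticRegression().fit(X, y_poisoned)

for name, model in [("clean", clean), ("poisoned", poisoned)]:
    print(f"{name:8s} acceptance rate: group 0 = "
          f"{positive_rate(model, X, group, 0):.2f}, group 1 = "
          f"{positive_rate(model, X, group, 1):.2f}")
```

Even though the ground-truth labels ignore group membership, the poisoned training set teaches the model to use the proxy feature against group 1, so whatever bias mitigation was applied to the original data no longer protects the deployed system.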

Camille, do you have also a take here on what is maybe hyped or not hyped enough?

Yeah, so one of the areas I think should get more attention is what AI actually is. And I think it should get more attention, not necessarily from researchers who are doing this work in universities, but more specifically from mainstream media outlets like television networks or radio stations or newspapers, media that reaches

people who are in their 50s, 60s, people who are in the age range of my parents, for instance, I think those outlets should

really talk about what AI actually can do, what AI cannot do, and where it's not going. Yeah, that's a great point. I think actually if you look at surveys out there, the finding is that most people who aren't experts don't really know much about AI.

It's certainly something that needs more work for more people to understand what it actually is and how it differs from sort of the science fiction representations or the popular media representations. I think, Ryan, do you have also a thought on this area?

Yeah. So I think the two most important things that aren't covered enough are questions of industrialization and management of algorithms. So you mentioned AI hype earlier, and I think it's quite dangerous that the incentives for everyone in the system, be they researchers, be they venture capitalists,

be they managers or be they budding data scientists in an industry, is to overstate what they can do. And as a result, there's so much attention given to it, and then you run the risk of another AI winter happening, because it's happened twice before, where the promises of what AI could do were so high that it couldn't feasibly deliver on them.

And one of the concepts that I think is really unique and important to talk about is this idea that instead of going to a winter, we should be entering into an autumn, where our emphasis on beating the state of the art on toy problems should no longer be our sole focus. Getting an extra 0.05% accuracy on CIFAR-10 does not make it more usable to the vast majority of people.

But if you can help create the mechanisms by which it industrializes, you know, working with designers and understanding ML tools as a product in and of themselves, really making stuff more interpretable and easy to understand, right?

working on problems at scale and developing the effective project management protocols to monitor algorithms so that companies can be more trusting of using them. This is the kind of stuff that actually allows it to deliver on something more than just its promise. It allows it to deliver on practical issues.

And I think those questions of industrialization are getting more attention now. Like, I was at ICML this last week and there were a lot more papers on that this year than last year. And so that's something that's starting to get attention, but it needs to get more attention.

And on the flip side, if these do get used more commonly, because while they're used at Google and Facebook or governments like China in terms of surveillance, and that gets a lot of attention, the vast majority of the economy is not using deep learning algorithms or anything close to that complexity. And if they do start using it, we have to ask questions about,

Well, how do they manage it? How do they trust the algorithm? How do they monitor what's going on? How does someone get alerted if the algorithm starts making a mistake? Like if you're Amazon and you develop a hiring algorithm that starts discriminating against women, how can we sound the alarm bells really quickly for that?

And so those management protocols are something that have to be thought through really carefully. And that really practical discussion, I think we're quite behind on. And there's a real risk of people deferring to algorithms because the hype has made them seem far more capable than they actually are. And we haven't talked enough about what it means to have a task be automated. Yeah, great point. I think, again, this speaks to

if people understand what AI is, and also what the real concerns with modern AI are and what we should be working on, and not the science fiction conception of, you know, killer robots or whatever, that will definitely help a lot. And so it's very good to have reports like this. And that's why we are here talking about it, I would suppose.

So to round things off, Muriam, do you also have a thing you think is overhyped or doesn't get enough attention? Yeah, just a small point to also jump off of what Ryan was saying, specifically his industrialization point, and also slightly touching on his management point, which is that, from...

a policy perspective, and unfortunately I can't bring the technical perspective to this, the diversity, or the lack thereof, in developing AI, I think,

in many instances is quite problematic, and we're seeing the detrimental effects of it in applications such as COMPAS or, as Abhishek mentioned before, the applications of AI in hiring processes, which are not spoken about too much. And I think that goes to what we spoke about previously, which was the lack of inclusion in AI and the fact that data is not properly maintained, and really

I think the fact is that there are oftentimes where AI is not designed in an interdisciplinary way. And we see the effects of that when it's brought into systems such as governance and it's unable to function in a state that brings, I want to say, like...

objective rulings, where it really can't because the design of it is not able to do so. And I think we put a lot of pressure on the idea that AI can be objective and can have a lot of anonymous

ideation to it, where it's not possible, because it's humans building it and biases are going to be brought into that. And so really stressing the point that there needs to be a more interdisciplinary nature to developing the designs of these AI systems. And I think that will also help mitigate a lot of the issues of biases and ethical dilemmas that are covered in the report.

Yeah, great point. I think we've seen many examples over the past few years. In particular, there's this article, Racial Disparities in Automated Speech Recognition, highlighted here, which really showcases that in

very major production products that many of us use. There are biases and it doesn't work for certain people. And that certainly is partially because the field is not as inclusive as it could be. And I think there is hopefully growing recognition of that and growing efforts. And it's good that you highlight that as well.

So, yeah, we've covered, I think, much of the report now, and many example articles and topics you think deserve more attention.

Of course, there's a lot we couldn't touch on because it's so big. So let me just again say that we are talking about the State of AI Ethics report from June 2020. So any interested listeners can go ahead and Google that and find it and peruse it themselves. And now maybe to cap things off, we can go a bit broader. And I'd just like to ask you,

What other things does the Montreal AI Ethics Institute do? What other ways do you work on AI ethics? And yeah, how do you see kind of the field evolving and your part in it?

So I think the biggest thing there, Andrey, is that we firmly believe in equipping and empowering everyday citizens, diverse stakeholders, in shaping some of these technical and policy measures. And that's really at the heart of all of the work that we do. And, you know, part of that work manifests itself through the public workshops that we host and

that help to build competence in this space, adding nuance to the discussions, helping people really understand what some of the concerns are, but also how to take some of these lessons and implement them in practice in their own research and work. We also have learning communities at the Montreal AI Ethics Institute, which

meet on a biweekly basis, inviting people from our community and from around the world to take part, to learn together, and to build a deeper understanding. At the moment, we're focused on four areas, which are privacy, disinformation, labor impacts of AI, and machine learning security. And finally, looking at how we can improve

improve participation from diverse stakeholders in the scientific publishing model. So what we've done is we've launched a program that we call CoCreate, which helps to lower the barriers to

publishing in the traditional academic and scientific model, which for people who are not familiar with it can be a little bit jarring, can be a little bit off-putting. And it could be simple things like understanding how to process and work with LaTeX templates. Or it could be something as simple as looking at the different platforms that are used in academic publishing

Or on the other end, you know, making your ideas more scientifically rigorous and understanding, you know, how to do empirical studies to back up your ideas. That's something I think that's quite important. And through the Co-Create program, what we're doing is helping people find each other

those who have a lot of experience in this space and those who don't to work together because we believe that ideas can come from all parts of the world. It's not necessarily just those who are on traditional academic tracks, but often people who are outside have a fresh perspective and elevating their voices, helping them participate in this scientific publishing model is also something that I think is going to be quite impactful.

I see. Very interesting. So it sounds like, any industry listeners, you can, of course, look up the Montreal AI Ethics Institute and maybe even look into whether you want to take part in these community initiatives. I would definitely say that you should consider it and take a look, because these look like great options for anyone interested in this area.

So with that, I think we will go ahead and cap this episode off. Thanks again, Abhishek, Camille, Ryan, and Muriam, for joining us for this discussion. This has been Skynet Today's Let's Talk AI podcast. If you've enjoyed this conversation, please go ahead and rate us on any platforms you use, and subscribe and tune in to our future episodes.