How a Human-Centered Approach is Building Trustworthy AI

2021/11/18

Smart Talks with IBM

People
Christina Montgomery
Seth Dobrin
Topics
Seth Dobrin: Concern about AI ethics and trust did not begin in the earliest days of the technology; it emerged as AI was applied in real-world settings, particularly where bias and social fairness were at stake. Early on, trust in AI centered on whether its answers were accurate, but as the technology and its applications expanded, attention shifted to how AI handles bias and fairness. In mortgage lending, for example, algorithms keyed to zip codes produced discrimination against people of certain races. The social justice movement of recent years has further accelerated these concerns.

Seth Dobrin: In AI applications, especially those touching personal privacy, it is necessary to define clearly which uses are acceptable and which are not, and to debate and decide the gray areas. During the pandemic, for instance, using facial recognition for fever detection and deploying contact tracing both raised ethics and privacy questions. IBM's AI Ethics Board discussed these issues in depth and set out rules and guidelines.

Seth Dobrin: The AI field needs diversity and shared responsibility, not only as a moral matter but because diverse teams produce better AI outcomes: they are better able to understand and address societal problems. A lack of diversity leads to biased systems; a hiring algorithm built by a predominantly male team, for example, can discriminate against female applicants.

Seth Dobrin: IBM open-sources its AI toolkits and builds value-added capabilities on top of them to advance fairness, explainability, robustness, and privacy protection. Unlike traditional software, AI keeps learning and adjusting its models over time, so these properties must be continuously monitored.

Seth Dobrin: Looking ahead, neurosymbolic reasoning will improve AI's explainability and transparency; AI will increasingly augment human capabilities rather than replace human work; and applications will become more human-centered, raising both productivity and job satisfaction. Neurosymbolic reasoning lets AI reason more like a human, which helps prevent it from producing harmful rhetoric or behavior.

Christina Montgomery: After she joined, IBM's AI Ethics Board evolved from a discussion forum into an executive body with decision-making authority that integrates stakeholders from across the company. Its members come from legal, research, human resources, and other functions, allowing it to examine AI ethics from multiple angles.

Christina Montgomery: In responding to COVID-19, IBM weighed not just what was technically feasible but ethical considerations and corporate responsibility; for technologies such as vaccine passports and immunity certificates, the first question was whether the company was willing to participate, not merely whether it could.

Christina Montgomery: Good AI governance accelerates adoption rather than blocking it; the key is clear rules and guidelines that spell out what can and cannot be done. IBM's AI Ethics Board has established such rules to help employees apply AI ethically.

Christina Montgomery: Through an "ethics by design" approach, IBM embeds ethical principles across the entire AI lifecycle, from concept through deployment, to reduce bias and ensure fairness.

Christina Montgomery: IBM's practice is relatively distinctive: a holistic, cross-disciplinary approach that the company is committed to sharing openly to advance AI ethics across the industry.

Christina Montgomery: Governments are catching up to companies on AI regulation, while companies have already begun drafting and implementing their own AI ethics guardrails and offering concrete policy recommendations.

Christina Montgomery: Public opinion is an increasingly powerful influence on how AI is used, sometimes more powerful than government regulation, so companies must watch both. Beyond legal requirements, IBM weighs public sentiment and makes decisions according to its ethical principles.

Chapters
Malcolm Gladwell introduces the episode and discusses IBM's approach to building trustworthy AI with Christina Montgomery and Dr. Seth Dobrin.

Transcript

Hello, hello. Malcolm Gladwell here. I want to tell you about a new series we're launching at Pushkin Industries on the 1936 Olympic Games. Adolf Hitler's Games. Fascism, anti-Semitism, racism, high Olympic ideals, craven self-interest, naked ambition, illusion, delusion, all collide in the long, contentious lead-up to the most controversial Olympics in history. The Germans put on a propaganda show, and America went along with all of it. Why?

This season on Revisionist History, the story of the games behind the games. Listen to this season of Revisionist History wherever you get your podcasts. If you want to hear episodes before they're released to the public, subscribe to Pushkin Plus on Apple Podcasts or at pushkin.fm slash plus.

Hello, hello. This is Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM about what it means to look at today's most challenging problems in a new way. I'm Malcolm Gladwell. Today, I'll be chatting with two IBM experts in artificial intelligence about the company's approach to building and supporting trustworthy AI as a force for positive change. I'll be speaking with IBM's Chief Privacy Officer, Christina Montgomery,

She oversees the company's privacy vision and compliance strategy globally. Looking at things like immunity certificates and vaccine passports, not what could we do, but what were we willing as a company to do? Where were we going to put our skills and our knowledge and our company brand in response to technologies that could help provide information in response to the pandemic? She also co-chairs their AI Ethics Board.

I'll also be talking with Dr. Seth Dobrin, Global Chief AI Officer at IBM. Seth leads corporate AI strategy and is responsible for connecting AI development with the creation of business value. Seth is also a member of IBM's AI Ethics Board. We want to make sure that the technology behind AI is as fair as possible,

is as explainable as possible, is as robust as possible, and is as privacy-preserving as possible. We'll talk about the need to create AI systems that are fair and address bias, and how we need to focus on trust and transparency to accomplish this. What might the future look like with an open and diverse ecosystem with governance across the industry? There's only one way to find out. Let's dive in.

One of the things I'm curious about is the origin of this concern about the ethics and trust component of AI. Was it there from the start, or is this a later kind of evolutionary concern? About 10 years ago when we started down this journey to transforming business using what we think about as AI today, the concept of trust came up, but not in the same context that we think about it today.

The context of trust was really focused on how do I know it's giving me the right answer so that I can make my decision? Because we didn't have tools that helped explain how an AI came to a decision, you tended to have to get into these bake-offs where you had to kind of set up experiments to show that the AI was at least as good as a human, if not better, and understand why. Over time, it's progressed as AI has started to come up against real human conditions.

And I think that's when we started thinking about what is going on with AI as it relates to bias. Particularly, you know, about five to eight years ago, there was an issue with mortgage lending, particularly related to zip codes, where algorithms started producing, you know, biases against people of certain races.

And so I think those things combined have led us to the point where we are today. Plus, you know, the social justice movement over the last two years has really accelerated a lot of the concern. Christina, I noticed you're a lawyer by trade. It's an interesting subject because it seems like this is where AI experts like Seth and lawyers work together. It sounds like a kind of classic cross-disciplinary endeavor. Can you talk about that a little bit?

It's absolutely cross-disciplinary in nature. For example, our AI ethics board, I'm the co-chair. The other co-chair is our AI ethics global leader, Francesca Rossi, who's a well-renowned researcher in AI ethics. So she comes with that research background. So we had a board in place, an AI ethics board in place before I stepped into this job.

And there were a lot of great discussions among a lot of researchers and a lot of people that deeply understood the technology, but it didn't have decision-making authority. It didn't have all stakeholders or many stakeholders across the business at the table.

And so when I came into the job as a lawyer and as somebody with a corporate governance background, I was sort of tasked with building out the operational aspects of it to make it capable of implementing centralized decision-making, to give it authority, to bring in those perspectives from across the business and from people with different

focuses within the IBM Corporation. Lots of different backgrounds, and we have very robust conversations. And we also engage...

the individuals throughout IBM who, either out of advocacy because they care very much about the topic, or because they're working in the space individually and have thoughts around the topic, or are doing projects in the space or want to publish in the space. We have a very organic way of having them be involved as well. Absolutely necessary to have that cross-disciplinary aspect. You mentioned at the beginning of your answer, you talked

about robust conversations, a phrase I love. Can both of you give me an example of an issue that's come up with respect to trust in AI? So one example might be the technologies that we would employ as a company in response to the COVID-19 pandemic. So there are a lot of things we could have done

And it became a question not of what we're capable of deploying from a technology perspective, but whether we should be deploying certain technologies, whether it be facial recognition for fever detection, certain contact tracing technologies. Our digital health pass is a good example of a technology that came through the board multiple times in terms of like, if we are going to deploy something,

a vaccine passport, which is not necessarily what this technology turned out to be, but looking at things like immunity certificates and vaccine passports. Not what could we do, but what were we willing as a company to do? Where were we going to put our skills and our knowledge and our company brand in response to technologies that could help to either bring about a cure or help to provide information in response to the pandemic?

COVID is a great example because it highlights the value and the acceleration that good governance can bring because

The way that we as an ethics board laid out the rules, the guardrails, if you will, around what we would and wouldn't do for COVID, helped people just do stuff without worrying that we need to bring this to the board. It also laid out very clearly that for this type of use case, we need to go have a conversation with the board. It also provided a venue for us as a company to make decisions about

And make risk-based decisions where, okay, this is a little bit of a fuzzy area, but we think given what's going on right now in the world and the importance of this, we're willing to take this risk so long as we go back and we clean everything up later. And so I think that's really important that, number one, governance is set up so that it accelerates things, not stops them.

And number two, that there's clear guidance that, you know, it's not "no," it's "here's what you can do and here's what you can't do." And it helped the teams figure out how they can still move things forward in a way that doesn't infringe on our principles. Yeah. I want to sort of give listeners a concrete sense about how a concern about trust and transparency and such would guide what a technology company might do.

Now, a real example. So if I want to make sure that people are wearing face masks and then just highlight that there is someone in this area that's not wearing a face mask, and you're not identifying the person, I think we'd be okay with that. What we wouldn't be okay with is if they wanted to identify the person in a way that they did not consent to. And that was very generic. So I'm going to go through a database of unknown people and I'm going to match them to this person.

And so that would not be okay. And a fuzzy area would be, you know, I'm going to match this to a known person. So I know this is an employee and I know this is him. This is something that we as a board would want to have a conversation about. If this employee is not wearing a mask, can I match them to a name, or do I just send security personnel over because the employee is not wearing a mask? That's a harder question, I think, and that's a real-world example that we faced during COVID.

Yeah. Let's talk a little bit about diversity and shared responsibility as principles that matter in this world of AI. What do those terms mean as applied to AI? And what's the kind of practical effect of seeking to optimize those goals? You know, I think, first of all, we need to have good representation

of society doing the work that impacts society. So A, it's just the right thing to do. B, there's tons of research out there that shows that diverse teams outperform non-diverse teams. There's a McKinsey report that says, you know, companies in the top quartile for diversity outperform their peers that aren't by like 35%. So tons of good research.

The second thing is you just don't get as good results when you don't have equal representation at the table. There's lots of good examples of this. So there was a hiring algorithm that was evaluating applicants and passing them forward. But of all the applicants in the past for this company, the vast majority were male. And so female applicants were just summarily wiped out, largely regardless of their fit for the role.
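The kind of skew Seth describes is exactly what bias-detection toolkits quantify. As a minimal, illustrative sketch, using invented numbers rather than any real hiring data, here is the "disparate impact" ratio such tools commonly report: the selection rate of the disadvantaged group divided by the selection rate of the advantaged group, where values below roughly 0.8 are often treated as a flag for review.

```python
# Illustrative only: hypothetical counts, not real hiring data or IBM tooling.
# Disparate impact = selection rate of the disadvantaged group
#                    / selection rate of the advantaged group.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the screening model passed forward."""
    return selected / applicants

# Hypothetical historical outcomes of a resume-screening model.
male_selected, male_applicants = 480, 800      # 60% pass rate
female_selected, female_applicants = 30, 200   # 15% pass rate

disparate_impact = (
    selection_rate(female_selected, female_applicants)
    / selection_rate(male_selected, male_applicants)
)

print(f"Disparate impact ratio: {disparate_impact:.2f}")
# Prints 0.25, far below the common 0.8 rule-of-thumb threshold,
# which is the kind of signal a fairness toolkit would surface for review.
```

IBM's open-source toolkits in this area (AI Fairness 360, for example) compute this and many related metrics; the arithmetic above is just the simplest case.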

I wanted to ask Christina, a project comes before the board. And so a conversation might be the team you put together and the data you're looking at is insufficiently diverse. We're worried that you're not capturing the reality of the kind of world we're operating in. Is that an example of a conversation you might have at the board level?

Well, I think the best way to answer that is to look at what the board is doing to try to address those issues of bias. I mean, so for example, we've got a team of researchers that work on trusted technology. And one of the early things that they've done is to deploy toolkits that will help detect bias, that will help make AI more explainable, that will help make it trustworthy in general. But those tools were initially very focused on bias.

And they deployed them to open source so they could be built on and improved. Right. And right now the board is focused more broadly, not looking at an individual problem and an individual use case with respect to bias, but instilling those ethical principles across the business through something we're calling ethics by design.

Bias was the first focus area of this ethics by design. And we've got a team of folks, led by the ethics board, who are working on the question you asked, Malcolm, about how do we ensure that the AI we're deploying internally, or the tools and the products that we're deploying for customers,

take that into account throughout the life cycle of AI. So through this ethics by design, the guidance that's coming out from the board starts at that conceptual phase and then applies across the life cycle up through, in the case of an internal use of AI, up through the actual use. And in the case of AI that we're deploying for customers or putting into a product, you know, up through that point,

of deployment. So it's very much about embedding those considerations into our existing processes across the company to make sure that they're thought of not just once and not just in the use cases that the board has an opportunity to review.

but in our practices as a company and in our thinking as a company, much like, you know, we did this and companies did this years ago with respect to privacy and security, that concept of privacy and security by design, which some may be familiar with from the GDPR in Europe. Now we're doing the same thing with ethics. How unusual is what you guys are doing now?

I mean, if I lined up all the tech companies that are heavily into AI right now, would I find similar programs in all of them? Or are you guys off by yourselves?

So I think we take a little bit of a unique perspective. In fact, we were recently recognized as a leader in the ethical deployment of technology and responsible technology use by the World Economic Forum. So the World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University did an independent case study of IBM

that did recognize our leadership in this space because of the holistic approach that we take. We're a little bit different, I think, from some other tech companies that do have similar councils in place, because of the broad and cross-disciplinary nature of ours. We're not just researchers. We're not just technologists. We literally have representation from

backgrounds spanning across the company, whether it be, you know, legal or developers or researchers or, you know, just HR professionals and the like.

So that makes us a little bit unique, the program itself. And then I think we hear from clients that are thinking for themselves about how do I make sure that the technology I'm deploying or using externally or with my clients is trustworthy. Right. So they're asking us, how did you go about this?

How do you think about it as a company? What are your practices? So on that point, our CEO is the co-chair of something called the Global AI Action Alliance, which

was initiated by the WEF. And as part of that, we've committed to sort of open source our approach. So we've been talking a lot about our approach. I think it is a little bit unique, as I said, but we are sharing it because, again, we don't want to be the only ones that have trustworthy AI and that have this holistic cross-disciplinary approach, because we think it's the right approach. It's certainly the right approach for our company, and we want to share it with the world. It's not secret or proprietary. Mm-hmm.

But if you talk to the analyst community that serves the tech sector, they say far and wide that IBM is ahead in terms of things that we're actually doing, as opposed to talking about it, all while making sure that it is enforceable and impactful. So for instance, as we were talking about, we review use cases and we can require that the teams adjust them.

That's unique, right? Most of the other tech companies do not have that level of oversight in terms of ensuring that their outcomes are aligned. There's a lot of good talk, but I think the WEF case study that came out on, I think it was the 27th of September, really supports that we're ahead. And then if you look at companies just in general that have AI ethics boards, my experience is that, and I interact with

hundreds of leaders and companies a year, less than 5% of them have a board in place. And even fewer of those kind of really have a rhythm going and know how they're going to operate as a board yet. I wanted to talk a little bit about the role of government here. Is government leading or following here? I would say they're catching up.

I think "they're following" is probably the most apt answer, right? Because look, I think over the last couple of years, as we talked about, or maybe it's been almost 10 years at this point in time, as these issues have come to light.

Companies have largely been left to themselves to impose guardrails upon their practices and their use of AI. That's not to say that there aren't laws that regulate; for example, discrimination laws would apply to technology that's discriminatory. But the unique aspects, to the extent there are unique aspects or issues that get amplified

through the application of AI systems, the government is really just catching up on those. So we've got the EU, which proposed a comprehensive regulatory framework for AI in the spring timeframe.

We see in the U.S. the FTC is starting to focus on algorithmic bias and just in general on algorithms and that they be fair and the like. So there are numerous other initiatives following the EU that are looking at frameworks for governing AI and regulating AI. And we've been involved, I mentioned earlier, on our precision regulation recommendations. So we have something called the IBM Policy Lab.

And what differentiates our advocacy through the policy lab is that we try to make concrete, actionable policy recommendations. So not just, again, articulating principles, but really concrete recommendations for companies and for governments and policymakers around the globe to implement and to follow.

Things like, out of our precision regulation of AI, that's where our recommendation is that regulation should be risk-based. It should be context-specific. It should look at and allocate responsibility

to the party that's closest to the risk. And that may be different at different times in the life cycle of an AI system. So we deploy some general purpose technologies and then our clients train those over time. So the risk should sit with the party that's closest to it at the different points in time in the AI life cycle. One of the interesting things about this issue today, in 2021, is that we're now in a situation where

someone like IBM, I'm guessing, would be as sensitive to public reaction to the uses of AI as they would be to government reaction to the uses of AI. And I wanted you to sort of weigh those. This is a kind of fascinating development in our age that all of a sudden, it almost seems like whatever form public reaction takes can be a more powerful lever

in moving and changing corporate behavior than what governments are saying. And do you think this is true in this AI space? I think the government regulation that we're seeing is responding to public sentiment. So I agree with you a hundred percent that this is being moved by the public. And oftentimes when we have conversations at the ethics board,

okay, Christina and the lawyers say, okay, this is not a legal issue. Then the next conversation is, what happens if this story shows up on the front page of the New York Times or the Wall Street Journal? So absolutely, we consider that. So I would also, I would add to that, look,

we're probably, I think, the oldest technology company. We're over a hundred years old, and our clients have looked to us for that hundred-plus years to responsibly usher in new technologies, right? And to manage their data, their most sensitive data, in a trusted way. So for us, it's not just about the headline risk. It's about ensuring that we have a business going forward, because our clients trust us.

And society trusts us. So the guardrails we put in place, particularly around the trust and transparency principles, or the guardrails we put in place around responsible data use in the COVID pandemic, there was nothing

from a legal perspective that said we couldn't do more. There was nothing that said in the U.S. we can't use facial recognition technology at our sites, but we made principled decisions, and we made those decisions because we think they're the right decisions to make. And when I look back at the ethics board and the analysis and the use cases that have come forward over the course of the last

two years, I can think of very few where we said, we're not going to do this because we're afraid of regulatory repercussions. In fact, I can't think of any, because it wouldn't have come to the board if it was illegal. But we did refine, and in some cases stop, you know,

actual transactions, right, and solutions, because we felt they were not the right thing to do. Yeah. A question for either of you. Can you dig a little more into the real-world applications of this? What are some of the very concrete kinds of things that come out of this focus on trust?

So, you know, some real-world examples of how trust plays into what we're doing get back to a couple of things Christina said earlier around how we're open sourcing a lot of what we do. So our research division builds a lot of the technology that winds up in our products. And then particularly related to this topic of AI ethics and trustworthy AI,

our default is to open source the base of the technology. So we have a whole bunch of open source toolkits that anyone can use. In fact, some of our competitors use them as much as we do in their products. And then we build value adds on top of those. And so that is something that we advocate strongly for, and the ethics board helps support us with that, as do our product teams, because

the value is, AI is one of those spaces where when something goes wrong, it affects everyone. So if there's a big issue with AI, everyone's going to be concerned about all AI. And so we want to make sure that the technology behind AI is as fair as possible,

is as explainable as possible, is as robust as possible, and is as privacy-preserving as possible. So toolkits that address those are all publicly available. And then we build value-added capabilities on top of that when we bring those things to our customers in the form of an integrated platform that helps manage the whole lifecycle of an AI. Because AI is different than software in that the technology under AI is machine learning.

What that means is that the machine keeps learning over time and adjusting the model over time. Once you write a piece of software, it's done. It doesn't change. And so you need to figure out how you continuously monitor your AI over time for those things I just described, and integrate them into your security and privacy by design practices, so that they're continuously monitored, updated, and aligned to your company's principles as well as societal principles, as well as any relevant regulations.
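To make that monitoring loop concrete, here is a minimal sketch, assuming each prediction is logged together with the protected attribute; the function names, groups, and threshold are hypothetical, not IBM's actual platform. It recomputes a fairness metric over a recent window of traffic and flags drift for human review.

```python
# Minimal sketch of continuous fairness monitoring (hypothetical names and
# thresholds, not IBM's actual tooling). Assumes each logged prediction records
# whether the favorable outcome was given and the protected-group value.

from dataclasses import dataclass

@dataclass
class Prediction:
    favorable: bool        # did the model give the favorable outcome?
    protected_group: str   # e.g. "A" (privileged) or "B" (unprivileged)

def disparate_impact(preds: list[Prediction], privileged: str, unprivileged: str) -> float:
    """Selection rate of the unprivileged group divided by that of the privileged group."""
    def rate(group: str) -> float:
        grp = [p for p in preds if p.protected_group == group]
        return sum(p.favorable for p in grp) / len(grp) if grp else float("nan")
    return rate(unprivileged) / rate(privileged)

def monitor(window: list[Prediction], threshold: float = 0.8) -> None:
    """Recompute the metric on a recent window of predictions and flag drift."""
    di = disparate_impact(window, privileged="A", unprivileged="B")
    if di < threshold:
        # In a real pipeline this would open a review ticket or page the owning team.
        print(f"ALERT: disparate impact {di:.2f} below {threshold}; route to ethics review")
    else:
        print(f"OK: disparate impact {di:.2f}")

# Example window: the model's behavior has drifted against group B.
window = (
    [Prediction(True, "A")] * 70 + [Prediction(False, "A")] * 30
    + [Prediction(True, "B")] * 40 + [Prediction(False, "B")] * 60
)
monitor(window)   # ALERT: disparate impact 0.57 below 0.8; route to ethics review
```

The same check can be scheduled on each retraining cycle or batch of production traffic, which is one simple way to keep a continuously learning model within agreed guardrails.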

Yeah. One last question. Give me one suggestion, a prediction, about what AI looks like five or ten years from now.

Yeah, so that is a really, really good question. When we look at what AI does today, while it's very insightful and it helps us realize things that as humans we may not have picked up on our own, and so it augments our intelligence, it surfaces insights and reduces a complexity that is almost infinite and incomprehensible to humans down

to I have five choices now that I can make based on the output of an AI. AI is unable, for the most part today, to provide context or reasoning. So AI provides an answer, but there's no reasoning as we think about it as humans associated with it. There's a new technology that's coming up. There's a bunch of them that are lumped under something called neurosymbolic reasoning.

And what neurosymbolic reasoning means is using mathematical equations, so AI algorithms, to reason similarly to the way a human does.

So, for instance, you know, the Internet contains all sorts of things, good and bad. And let's look at something that's relevant to me, at least, being of Jewish background, right? You want algorithms to know about the Nazi regime, but you don't want algorithms spewing rhetoric about the Nazi regime.

Today, when we build an AI, it's almost impossible for us to get the algorithm to differentiate those two things. With a reasoning tool around it, you could exclude, or prevent, an algorithm from learning rhetoric that is not consistent with societal norms. It's just an example. So those are the kinds of things you'll see over the next three to five years.

I think we'll see a lot more explainability and transparency around AI. So for example, whether it be, you're seeing this ad because you went on and searched for

X, Y, and Z, or you're seeing a shoe ad because you visited this site, you know, to the extent it's that. Or there'll be more transparency that you're dealing with a chatbot, you know, just when AI is being applied to you. I think you'll see a lot more transparency and disclosure around that. And then the sort of...

less practical, more aspirational answer, I think, is, you know, we know AI is changing jobs. It's eliminating some, it's creating new jobs. And I think, hopefully, right, with principles around AI, that it be used to augment, to help humans, that it be human-centered, that it put people first at the heart of the technology. Yeah.

that it will make people better and smarter at what they do. And there'll be more interesting work, right? So I'm hoping that that will ultimately be something that will come out of AI as there's more awareness around where it's being used in your life already day to day, more transparency around that, more explainability around that, and then ultimately more trust. Yeah, yeah.

Well, wonderful. I think that covers our bases. This has been really, really fascinating. Thank you for joining me for this. And I expect that we'll be having, both as a company inside IBM and as a society, many, many, many, many more conversations about AI in the coming years. So I'm glad to be on the early end of that process, because we're not done with this one, are we?

Not by a long shot. It's just the beginning. Yes, just the beginning. Thank you again. Yeah, thanks for having us. Thank you. Thank you again to Christina Montgomery and Seth Dobrin for the discussion about trust and transparency around AI and for their insights about what may be possible in the future. It will be fascinating to see how IBM can help foster positive change in the industry.

Smart Talks with IBM is produced by Emily Rostak with Carly Migliore and Catherine Giraudoux. Edited by Karen Shakerji. Mixed and mastered by Jason Gambrell. Music by Gramascope. Special thanks to Molly Socia, Andy Kelly, Mia LaBelle, Jacob Weisberg, Hedda Fane, Eric Sandler, and Maggie Taylor and the teams at APAR and IBM.

Smart Talks with IBM is a production of Pushkin Industries and iHeartRadio. This is a paid advertisement from IBM. You can find more episodes at ibm.com slash smart talks. You'll find more Pushkin podcasts on the iHeartRadio app, Apple podcasts, or wherever you like to listen. I'm Malcolm Gladwell. See you next time.
