
Building Trustworthy AI: A Holistic Approach

2022/6/28

Smart Talks with IBM

People
Malcolm Gladwell
A Canadian author, journalist, and podcast host known for his accessible writing style and his exploration of social science.
Phaedra Boinodiris
Topics
Phaedra Boinodiris: Advocates that building and deploying artificial intelligence responsibly is no longer just a compliance issue, but a business imperative. Even organizations with the best intentions can inadvertently cause potential harm. Her interest in responsible AI grew out of the gaming industry; she initially focused on how AI could improve the gaming experience and only later turned to AI ethics. Many companies' AI investments stall at the proof-of-concept stage, partly because the investment is disconnected from business strategy, or because people do not trust the results of AI models. Building responsible AI requires a holistic approach that considers organizational culture, processes, and engineering frameworks. Leadership of responsible AI initiatives has shifted from technical leaders to non-technical business leaders. Data is an artifact of the human experience; all data is biased, and if bias is not fully recognized, systemic bias gets calcified into AI systems. IBM has invested heavily in design thinking to create frameworks for systemic empathy before any code is written, in order to mitigate potential harm. Ethics should not be treated as after-the-fact quality assurance; it must be considered from the very beginning. Through client engagements, IBM keeps learning, for example how to guide clients away from skipping important steps and how to apply social scientists' perspectives to scrutinize whether data is fit for purpose. IBM established a Center of Excellence so employees can share lessons learned and build the right culture. Building trustworthy AI in the future will require more education and understanding, and making the field more accessible.

Malcolm Gladwell: AI is ubiquitous and invisible, so it is crucial that companies learn how to build trustworthy AI. Earning trust in AI requires asking human-centric questions about intent, accuracy, fairness, explainability, data protection, robustness, and more. Earning trust in AI is not purely a technological challenge but a socio-technological one. Responsible AI must be built into every step of the process. The complexity of building AI stems from the complexity of people. Building trustworthy AI requires thinking like a psychologist or an anthropologist and understanding human behavior.

Laurie Santos: As the interview host, Laurie Santos mainly guides the conversation, asks questions, and discusses with Phaedra Boinodiris, without advancing independent views of her own.

Chapters
Phaedra Boinodiris discusses her journey from the gaming industry to her current role at IBM, highlighting her initial interest in AI through gaming and her shift towards ethical AI concerns.

Transcript


Hello, hello. Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM. I'm Malcolm Gladwell. This season, we're talking to new creators, the developers, data scientists, CTOs, and other visionaries who are creatively applying technology in business to drive change. Channeling their knowledge and expertise, they're developing more creative and effective solutions no matter the industry.

Our guest today is Phaedra Boinodiris, Trust in AI Practice Leader within IBM Consulting. She advocates that building and deploying artificial intelligence responsibly is no longer just a compliance issue, but a business imperative. Part of Phaedra's job is to help companies identify potential risks and pitfalls way before any code is written.

In today's show, you'll hear how Phaedra's team at IBM is approaching this challenge holistically and creatively. Phaedra spoke with Dr. Laurie Santos, host of the Pushkin podcast The Happiness Lab. Laurie is a professor of psychology at Yale University and an expert on human cognition and the cognitive biases that impede better choices. Let's get to the interview.

Phaedra, I'm so excited that we get a chance to chat today. You know, just to start off, I'm wondering, how did you get started in this role at IBM? Like, what's the story to how you got where you are today? Oh, goodness. My background is actually from the world of video games for entertainment. So AI has always been very interesting to me, especially when you intersect AI and play.

But several years ago, I began to get very frustrated by what I was reading in the news with respect to malintent through the use of AI. And the more that I learned and the more that I studied about this space of AI and ethics, the more I recognized that even organizations that have the very, very best of intentions

could inadvertently cause potential harm. And so that's super cool. I love that your interest in more responsible AI came from the gaming world. So talk a little bit about your history with gaming and how that informed your interest in trustworthy AI. Well, it wasn't as much necessarily the ethical components of AI when I was working in games. It was more things like

look at what non-player characters can do. You know, I mean, if you've got an AI acting as a character within the game, and how is it that you can use AI in order to make a game a more interesting experience? Actually, I ended up joining IBM to be our first global lead for something called serious games, which is when you use video games to do something other than just entertaining.

And so the idea of integrating real data and real processes within sophisticated games powered by AI to solve complex problems. It wasn't until later, as I mentioned, when all of us started to hear more and more news about problems, about what could happen with respect to putting out models that are inaccurate or unfair. Yeah.

I know, from hearing other interviews that you've done, that one of your inspirations is sci-fi. I'm also a sci-fi nerd, and I know sci-fi has talked a lot about the trustworthiness issues that come up when we're dealing with AI and so on. And so talk a little bit about how you bring that to your work in developing AI that's a little bit more ethical. A lovely question. So my parents were major technophiles. They both were immigrants to the United States, came here to study engineering, and they met in college.

Growing up, my sister and I had Star Trek playing every night. My parents were both big fans of Gene Roddenberry's vision of how technology could really be used to help better humankind. And that was the ethos that, of course, we grew up in.

The wonderful thing about science fiction isn't that it predicts cars, for example, but that it predicts traffic jams. And I think there's just so much we can learn from science fiction, or in fact, like I said, play as a mechanism to be able to teach. Science fiction predicting traffic jams. I love it. But when we think about AI and science fiction, we need to be careful.

We need to remember that AI is not something that's going to enter our lives at some point in the distant future. AI is something that's all around us today. If you have a virtual assistant in your house, that's AI. Your phone app that predicts traffic? AI. When a streaming service recommends a movie? You've guessed it, AI. Phaedra says AI may be behind the scenes determining the interest rate on your loan.

or even whether or not you're the right candidate for that job you applied for. AI is both ubiquitous and invisible, which is why it is so crucial that companies learn how to build trustworthy AI.

How do we do that? When thinking about what does it take to earn trust in something like an AI, there are fundamentally human-centric questions to be asked, right? Like, what is the intent of this particular AI model? How accurate is that model? How fair is it? Is it explainable if it makes a decision that could directly affect my livelihood?

Can I inquire what data did you use about me to make this decision? Is it protecting my data? Is it robust? Is it protected against people who could trick it to disadvantage me over others? I mean, there's so many questions to be asked.

Earning trust in something like AI is fundamentally not a technological challenge, but a socio-technological challenge. It can't just be solved with a tool alone.
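To make a question like "How fair is it?" concrete, here is a minimal, invented sketch of one common fairness check, demographic parity: comparing a model's rate of favorable outcomes across groups. It is purely illustrative, not IBM's tooling or anything referenced in the episode; the groups, decisions, and the 0.8 threshold are hypothetical.

```python
# Minimal, invented example of one fairness check: demographic parity,
# i.e. comparing favorable-outcome rates across groups. Not IBM tooling.

def favorable_rate(decisions):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

# Hypothetical loan decisions from a model, split by a protected attribute.
decisions_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

rates = {group: favorable_rate(d) for group, d in decisions_by_group.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)                                   # {'group_a': 0.75, 'group_b': 0.375}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:  # common "four-fifths" rule of thumb; context-dependent, not a universal bar
    print("Potential fairness concern: revisit the data, features, and use case.")
```

A check like this only surfaces a symptom; deciding whether the disparity is acceptable, and what to do about it, is exactly the socio-technical work Phaedra describes.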

What are the kinds of risks that companies have to think through as they're developing these technologies to make sure they're as trustworthy as possible? Well, you know, they may be putting a lot of money into investing in AI that gets stuck in proof of concept land, like it gets stuck in pilot. We've done some research where we have found about 80 percent of investments in AI get stuck in proof of concept.

And sometimes it's because the investment isn't tied directly to a business strategy or more often than not, people simply don't trust the results of the AI model.

As a company who is, of course, thinking about this so deeply, what do businesses need to consider when they're trying to figure out how to solve this big puzzle of AI ethics? It has to be approached holistically. So you've got to be thinking about, for example, what culture is required within your organization in order to really be able to responsibly create AI?

What processes are in place to make sure that you're being compliant and that your practitioners know what to do? And then, of course, AI engineering frameworks and tooling that can assist you on this journey. There is so much fundamentally to do.

We found that actually who is leading responsible AI, trustworthy AI initiatives within organizations has switched in the last three years. It used to be technical leaders, for example, a chief data officer or someone who is a PhD in machine learning. And now it's switched.

Now, 80% of those leaders are non-technical business leaders, maybe a chief compliance officer, chief diversity and inclusion officer, or chief legal officer. So we're seeing a shift. And I believe firmly it's a recognition from organizations that are seeing that in order to really pull this off well, there has to be an investment and a focus

in culture, in people, and getting people to understand why they should care about this space. And so I see two challenges with doing that, right? One is, you know, a lot of these technology companies are really built to be tech companies, not necessarily, you know, social tech companies with this sort of training in ethics and beyond. Another issue seems to be that you're really proposing a switch that's truly holistic, right? That's like rethinking the way the company thinks about its

bottom line. And so as you think about working through these kinds of challenges at IBM, how have you tackled this? Like how have you brought new talent in? How have you thought really carefully about this big holistic switch that needs to come to make AI more trustworthy? Data is an artifact of the human experience. And if you start with that as your definition and then think about, well,

Data is curated by data scientists. All data is biased. And so if you're not recognizing bias with eyes fully open, then ultimately you're calcifying systemic bias into systems like AI.

So some of the things that we've done at IBM, again, recognizing this important need for culture is big, big, big focus on diversity. Not only looking at teams of data scientists and saying, how many women are on this team? How many minorities are on this team? But also insisting on recognizing that we need to bring in people with different worldviews too. For example, what's your definition of fairness here?

Is your definition equality or is it equity? Also bringing people with a wider variety of skill sets and roles, including our social scientists, anthropologists, sociologists, psychologists like yourself, right? Behavioral scientists, designers. I mean, we have...

one of the leading AI design practices in the world. I mean, the effort, the investments we've been making in design thinking as a mechanism to create frameworks for systemic empathy well before any code is written. So people can think through how would you design in order to mitigate for any potential harm, given not only the values of your organization, but what are the rights of individuals? Right.

Asking oneself these kinds of questions reinforces the idea that ethics doesn't come at the end, like it's some kind of quality assurance, like, "Check, I passed the audit, I'm good to go." But instead, really, as soon as you're thinking about using an AI for a particular use case, thinking about what is the intent of this model? What's the relationship we ultimately want to have with AI?

And again, these are non-technology questions. This is where social scientists, having a social scientist on your team helping think through these kinds of questions is critical. Let's pause here for a second because this is a really profound idea. Building responsible AI does not mean that you create a system, then check in at the end and say, is this okay? Is this ethical?

If you don't ask those questions until the end of the process, you've already failed. You have to think about ethics from the jump. From the makeup of the team, to the data you're using to train the model, to the most basic question of all, is this even the right use case for artificial intelligence? The big lesson from IBM is this. Responsible AI is something you build at every step of the process.

So this season of Smart Talks is all focused on creativity in business. My guess is that thinking about trustworthy AI involves a lot of creativity, but talk to me about some of the spots where you see this work as being most creative. Oh, goodness. I would say incorporating design, design thinking in particular, as well as straight-up design in order to craft AI responsibly.

You've used this word design thinking. And so I'm wondering exactly what you mean here. How do you define this idea of design thinking? Design thinking is a practice that we established here at IBM many years ago. In essence, what it is, it's a way of working with groups of people to co-create a vision for something, for a product or a service or an outcome.

And typically it starts with things like, for example, empathy maps. Like if you're thinking about an end user, thinking through what is this person thinking, seeing, hearing, feeling like, what are they experiencing in order to ultimately craft an experience for them that is targeted specifically for them? So we use it in a really wide variety of different ways with respect to trustworthy AI.

even rendering an AI model explainable to a subject. And I'll give you an example. So we've got this wonderful program within IBM called our Academy of Technology, and we take on initiatives that steer the company in innovative new directions. So we had an initiative where it was titled, What the Titanic Taught Us About Explainable AI.

And the project was imagining if there was an AI model that could predict the likelihood of a passenger getting a life raft on the Titanic. And we broke up into two work streams. One was the work stream full of the data scientists who were using all the different explainers to come up with the predictions, and they would crank out the numbers.

And the other team, here's where the social scientists lived and the designers were, right? Where we were thinking through

how do we empower people? How do we explain this algorithm and this predictor and the accuracy behind this prediction in such a way as to ultimately empower an end user so they could decide, "I'm not getting on that boat," or, "I want to get a second opinion, please," or, "I want to contest

the outputs of this model because I upgraded to first class just yesterday." See what I'm saying? And that takes a lot of creativity. How do you design an experience for someone in order to ultimately empower them? So design, design, design is critically, critically important. And it's why I mentioned, you know, we've got to open up the aperture with respect to who we invite to the table in these kinds of conversations.
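For readers who want to picture the data-science work stream in that Titanic exercise, here is a small, purely hypothetical sketch, assuming scikit-learn and a toy stand-in for the passenger data (it is not the Academy of Technology's actual code): a linear model predicts the likelihood of getting a life raft, and its per-feature contributions are the raw material the design work stream would translate into a human explanation.

```python
# Purely hypothetical sketch of the Titanic-style exercise described above,
# not the IBM Academy of Technology's actual code or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features per passenger: [passenger_class (1-3), is_female (0/1), age]
X = np.array([
    [1, 1, 29], [1, 0, 45], [2, 1, 35], [2, 0, 27],
    [3, 1, 22], [3, 0, 31], [3, 0, 19], [1, 0, 60],
], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0, 0, 1])  # 1 = got a life raft (invented labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Explain one passenger's prediction: for a linear model, coefficient times
# (value minus the dataset mean) approximates how much each feature pushed
# this passenger's log-odds above or below an "average" passenger's.
passenger = np.array([1.0, 0.0, 60.0])  # e.g. just upgraded to first class
prob = model.predict_proba(passenger.reshape(1, -1))[0, 1]
contributions = model.coef_[0] * (passenger - X.mean(axis=0))

print(f"predicted chance of a life raft: {prob:.2f}")
for name, c in zip(["class", "is_female", "age"], contributions):
    print(f"  {name:>9}: {c:+.3f} (log-odds vs. average passenger)")
```

The creative work Phaedra describes starts where this output ends: turning numbers like these into something a passenger could actually act on, such as "your upgrade to first class raised your predicted chances."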

Taking the time to really understand other people's perspectives is so important when you're doing anything creative. And it is fundamental to the way the new creators work. The core question you should always be asking is, where will the user be meeting this product? As Phaedra said, what will they be thinking, seeing, hearing, feeling?

If you can answer those questions the way IBM does in its design thinking practice, you will be in great shape to create almost anything, really. Let's hear how it works in practice.

And so we've been mostly talking kind of at the meta level about, you know, how to think about AI ethics generally. But of course, the way this probably occurs in the trenches is a client approaches IBM and they want help with a specific problem in AI. And so I'm wondering, from a client-based perspective, where do you start having some of these tough conversations? It has varied, to tell you the truth. We had one client that approached us to expand the use of an AI model

to infer skill sets of their employees, but not just to infer their technical skills, but also their soft foundational skills. Meaning, let me use an AI to determine what kind of communicator you might be, Laurie, right? Others might come to us with

okay, we recognize we need help setting up an AI ethics board. Is this something you can assist us with? Or we have these values, we need to establish AI ethics principles and processes to help us ensure that we're compliant given regulations coming down the pike. Or we've had clients come to us saying, please train our people how to assess AI

for unexpected patterns in an AI model, but then also how to holistically mitigate to prevent any potential harm. And those have been phenomenal engagements.

They're huge learning moments. And so it seems like the real additional value that IBM is bringing through this process isn't necessarily just providing an AI algorithm or consulting on some AI algorithm. It seems like the real value added is explaining how this design thinking works. You're almost like this therapist or like a really good bartender who talks to people, who talks whole companies through some of their problems to try to figure out where they're going astray before they start implementing these things.

Can I put chief bartender officer on my card? I like the metaphor. I'll tell you, some of our most valuable people on the team for that engagement, we had an industrial organizational psychologist, we had an anthropologist. That's why I'm saying it's important we bring in the social scientists because you're exactly right.

It's more than just scrutinizing the algorithm in its state. You have to be thinking about how is it being used holistically. And so if I was a business that was trying to think about how a company like IBM could come in and help out with more trustworthy AI, what would this process really look like?

Well, what we're finding more often than not is that there'll be smaller teams within broader organizations that either have the responsibility of compliance

and see the writing on the wall, or they've been the ones investing in AI and are trying to figure out how to get the rest of the organization on board with respect to things like setting up an ethics board or establishing principles or things like that. So some things that we've done to help companies do this is we kick off engagements with

what we call our AI for Leaders workshops. On the one hand, it's teaching why you should care. But on the other hand, it's meant to get people so excited across the organization that they want to raise their hand and say, I want to represent this part. Like, for example, I want to be part of the ethics board as it is being stood up. The hard part's not the tech.

The hard part is human behavior. And I know I'm preaching to the choir, given your background. It's so nice as a psychologist to hear this. I'm like snapping my fingers like preach. Exactly. The hard part is human behavior. So it's been like drinking from a fire hose. I mean, in terms of the kinds of things that we've all been learning, and there's still so much to learn. It really bugs me that those who are lucky enough to

be able to take classes in things like data ethics or AI ethics self-categorize as coders, machine learning scientists, or data scientists. If we're living in a world where AI is fundamentally being used to make decisions that could directly affect our livelihoods, we need to know more. We need to have more literacy and also make sure that there is a consistent message of accessibility

such that we're saying, you don't just have to be interested in coding, like you're interested in social justice or psychology or anthropology. There's a seat at the table for you here because we desperately need you. We desperately need that kind of skill set. Just getting people to think about how do you design something given an empathy lens to protect people? I mean, that I think is such a crucial skill to learn.

One thing I love about your approach is that when you're talking to clients, you're almost doing what I'm doing as a professor, where you're kind of instructing students, getting them to think in different ways. But I know from my field that I wind up learning as much from students as I think sometimes they learn from me.

And so I'm wondering what you've learned in the process of helping so many businesses approach AI a little bit more ethically. Like, have there been insights that you've gotten through your interaction with clients and the challenges they've been facing? I'm learning with every single interaction. For example,

In my mind, given the experiences that IBM has had with respect to setting up our principles, our pillars, our ethics board, there's a process to follow, right? If you're thinking about it like a book, these are the chapters, in order,

to optimize the approach, let's say. But sometimes we work with clients that say, I'm going to install this tool and I want to jump to chapter seven. And it's like, oh, okay, you know, how do we help navigate clients that want to skip over steps that we think are important?

Another one is, again, the social scientists and bringing them in to really push hard on what is the right context for this data. Tell me the origin story again, like really pushing us to think hard and with their perspective, you know.

You know, just constant, constant learning, which is why one of the things we did at IBM is we've established something called our Center of Excellence, where we said, you know what, IBMers, we don't care what your background is. We don't care who you are. If you're interested in this space, you can become a member. The Center of Excellence is a way in which we have...

not only projects people can join in order to get real-life experience, but then also share back. Here's what we learned. We did this with this particular client. Here was our epiphany. Because if we're not sharing back and we're not constantly educating, then we're missing the opportunity to establish the right culture. Establishing the right culture to share what we're learning is so important.

And so I wanted to end by going back to where we started, you with your technophile family watching Star Trek. I think if we were to fast forward a couple of decades, we probably couldn't have imagined that we'd be in the place with AI generally where we are now, and especially as we think through more trustworthy AI. And so, you know, with such change happening right now, with the fact that it's a fire hose, that's going to just get even more powerful over time. What do you think is next in this world of thinking through more trustworthy AI? Yeah.

I would say next is far more education, far more understanding. And we're starting to see that shift: far more CEOs saying, yeah, ethics has to be core to our business. But there's a shift. Barely half of the CEOs in 2018 were saying that AI ethics was key or important to their business. And now you're seeing the great majority. So...

Education, education, education. And again, I would underscore making it far more accessible to far more people, which means it's not just...

our classes in higher ed institutions, it's our conferences, it's anytime we write white papers, anytime we publish articles, anytime we do podcasts like this, right? The way we talk about this space has to be far more accessible and open and inviting to people with different roles, different skill sets, different worldviews, because otherwise, again, we're just codifying our own bias.

Well, Phaedra, I want to express my gratitude today for making AI a little bit more accessible to everyone. This has been such a delightful conversation. Thank you so much for joining me for it. The pleasure was mine, Laurie. Thank you for being the consummate host. I want to close by going back to that moment when Laurie suggested that Phaedra was actually IBM's chief bartender officer. Not just because that's the best C-suite title ever.

But because it gets at what I think is the biggest, most important idea in today's episode. Phaedra boiled it down into a single line when she said, "The hard part is not the tech. The hard part is human behavior." Why is building AI so complicated? Because people are complicated.

IBM believes that building trust into AI from the start can lead to better outcomes and that to build trustworthy AI, you don't just need to think like a computer scientist. You need to think like a psychologist, like an anthropologist. You need to understand people.

Smart Talks with IBM is produced by Molly Socia, Alexandra Gerriton, Royston Berserve, and Edith Rusillo with Jacob Goldstein. We're edited by Jen Guerra. Our engineers are Jason Gambrell, Sarah Bruguere, and Ben Tolliday. Theme song by Gramascope. Special thanks to Carly Migliore, Andy Kelly, Kathy Callaghan, and the 8 Bar and IBM teams, as well as the Pushkin Marketing team.

Smart Talks with IBM is a production of Pushkin Industries and iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. I'm Malcolm Gladwell. This is a paid advertisement from IBM.