
Designing a Better Future: Mastercard’s JoAnn Stonier

2021/4/27

Me, Myself, and AI

People

JoAnn Stonier
Sam Ransbotham
Shervin Khodabandeh
Topics

JoAnn Stonier: I am the Chief Data Officer at Mastercard, and my team is responsible for ensuring the company's information assets remain available while navigating current and future data risk. Our mandate is very broad: it includes developing data strategy, managing data governance and quality, enabling artificial intelligence and machine learning, designing and operating enterprise data platforms, and ensuring data compliance. We work to embed compliance and responsible data practices into product design, from data sourcing through product creation and enablement. We are actively exploring how AI can reduce bias and help build an inclusive future, and we are working with product development and human resources teams on strategies and skill-building programs so that our AI applications are ethical and benefit the generations ahead. We pay attention to the bias inherent in data sets, including missing information, and work to design effective inquiries that limit its impact. We use a specific framework for AI ethics and pay attention to context variables. To guard against unknown bias in AI, we need to understand data lineage, quality, and consistency, which is essential both for our own analyses and for evaluating vendor solutions. When building AI systems, we watch for model drift and inaccurate outputs so that incorrect results are not amplified. Averages sometimes fail to reflect reality because they may not be fit for purpose. We need a mindset shift that treats data as non-neutral and considers its intended use and potential harms. Although managing data bias and risk is challenging, seeing the firm commit to this work and make progress is exciting and rewarding. We look for people with data skills, technical skills, and an understanding of policy, and we value the ability to communicate and translate across domains. Design thinking has helped me solve business problems and move from my previous role as chief privacy officer to chief data officer. After 9/11, I went back to design school, which strengthened my business and problem-solving abilities; design thinking helps me turn technical challenges into design problems and find solutions. Members of the Mastercard data team typically combine left-brain and right-brain thinking and can move between domains. The pandemic has accelerated the use of data and digital in business and raised the demand for skills in interpreting and using data.

Sam Ransbotham: JoAnn Stonier emphasizes the role of design thinking, emotional caution, and multifaceted capabilities in Mastercard's success with AI. Mastercard has not changed course because applying AI is hard; it continues to push its AI projects forward, choosing them based on strategic priorities. Mastercard is committed to the ethical use of AI and views it as a tool for identifying and reducing bias.

Shervin Khodabandeh: Data needs governance at multiple levels to ensure that AI outputs are accurate, ethical, and unbiased.

Chapters
JoAnn Stonier discusses her role as Chief Data Officer at MasterCard, focusing on ensuring data assets are available while managing current and future data risks.

Transcript


Today, we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts. We hear a lot about the bias AI can exacerbate.

But AI can help organizations reduce bias, too. Find out how when we talk with JoAnn Stonier, Chief Data Officer at MasterCard. Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, Professor of Information Systems at Boston College. I'm also the guest editor for the AI and Business Strategy Big Idea Program at MIT Sloan Management Review.

And I'm Shervin Khodabandeh, senior partner with BCG, and I co-lead BCG's AI practice in North America. And together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities across the organization and really transform the way organizations operate.

Today we're talking with JoAnn Stonier. JoAnn's the Chief Data Officer at MasterCard. JoAnn, thanks for taking the time to talk with us today. Welcome. Thank you. Happy to be here. JoAnn, let's start with your current role at MasterCard. Can you give us a quick overview of what you do? Sure. I'm currently, as you said, the Chief Data Officer for the firm. And me and my team are responsible for ensuring that MasterCard's information assets are available for innovation while navigating current and future data risk.

So my team has a very broad mandate. We work on helping the firm develop our data strategy and then work on all the different aspects of data management, including data governance, data quality, as well as enabling things like artificial intelligence, machine learning. And then we also help design and operate some of our enterprise data platforms. It's a very broad-based role. We work also on data compliance.

and how do you embed compliance and responsible data practices right into product design. We start at the very beginning of data sourcing all the way through to product creation and enablement. It's a lot of fun. Because our show is about artificial intelligence, let me pick up on that aspect. Sure. Can you give us examples of something you're excited about that MasterCard is doing with artificial intelligence?

Oh my gosh, I've had so many conversations just this week about artificial intelligence, most of them centered on minimization of bias, as well as how do we build an inclusive future. The conversations that really excite me are how the whole firm is really getting behind this idea and notion. I've had conversations with our product

development team and how do we develop a broad-based playbook so that everybody in the organization really understands how do you begin to really think about design at its very inception so that you're really thinking about inclusive concepts. We've had conversations with our people and capabilities team or what's more commonly known as human resources talking about the skill sets of the future.

How are we going to not only have a more inclusive workforce at MasterCard and what do we need to do to provide education opportunities both inside the firm but also outside the firm so that we can create the right kind of profile of individuals so that they have the skill sets that we need, but also how do we upskill, how do we really begin to create the opportunities to have the right kinds of conversations. We're also working really hard on our ethical AI process

So there's so many different aspects of what we're doing around artificial intelligence, not just in our products and solutions, in fraud, in cyber, in general analytics and in biometric solutions around digital identity, that it's just, it's really an interesting time to do this work and to really do it in a way that I think needs to last for the generations ahead. JoAnn, that's really interesting. I'm particularly impressed

and excited that one of the things you talked about is the use of AI to prevent bias. I mean, there's been a lot of conversations around the unintended bias of AI and how to manage it. But I heard you also refer to it as a tool that can actually help uncover biases. Can you comment more on that?

Yeah, but it's hard, right? This is a lot of hard work. I do a lot of conversations both with academic institutions, other civil society organizations, right? We're early days of AI and machine learning. I think we're probably at generation maybe 1.5, heading into generation two. But I think the events of this past year

have taught us that we really need to pay attention to how we are designing products and solutions for society. And that, you know, our data sets are really important in what are we feeding into the machines and how do we design our algorithmic processes that we also are feeding into the intelligence and what is it going to learn from us.

And so I try to explain to people that, you know, our first generation of data analytics, we were creating algorithms or analytic questions that really asked the question of this is the question. If this condition exists in the data set, then do something. Right. So we were looking for the condition to exist in the data set and then we were acting. Rules based.

Right, rules-based, exactly. Artificial intelligence and machine learning actually flips that a little bit on the head.

Instead, we take the data set and say, what does the data tell us? What does the data tell us and what does the machine then learn from that? Well, the challenge of that, though, is if you don't understand what's in the data, right, and the condition of that data as you bring it into the inquiry and you don't understand how those two things fit, then you wind up with a biased perspective. Now, it's unintended. It's inherent, right, to your point.
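The flip she describes, from rules-based analytics to letting the data set the rule, can be sketched with a toy example. Everything below is invented for illustration and is not Mastercard code:

```python
# Toy transaction data: amounts in dollars, label 1 = fraud (all invented).
amounts = [12.0, 950.0, 8.5, 1200.0, 40.0, 700.0]
labels  = [0,    1,     0,   1,      0,    1]

# Generation 1 (rules-based): "if this condition exists in the data, then act."
def rule_based_flag(amount, threshold=500.0):
    return 1 if amount > threshold else 0

# Generation 2 (learning from data): ask what the data tells us, here by
# letting the data itself pick the threshold that minimizes labeling errors.
def learn_threshold(amounts, labels):
    best_t, best_err = None, float("inf")
    for t in sorted(amounts):
        err = sum(1 for a, y in zip(amounts, labels) if (a > t) != bool(y))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

learned = learn_threshold(amounts, labels)  # the data, not the analyst, sets the rule
flags = [rule_based_flag(a, learned) for a in amounts]
```

The caveat she raises applies directly here: the learned threshold is only as good as the sample it was fit on, so if the data misrepresents the world, the rule inherits that bias.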

But nevertheless, it's something that we have to back up and readjust our lenses as we begin to look at our processes around artificial intelligence. So we start with the data sets, and we understand that data sets are going to have all sorts of bias in them. And that's okay. First of all, bias always gets at like a, you know, kind of a very gripping reaction, right? I use the example all the time. If you go back to like the 1910 voter rolls in the United States, right?

That's a valid data set. You may use that for whatever purpose you may have for evaluating something that happened in 1910 or 1911. But you need to know that inherently that data set is going to miss women. It's going to be missing people of color. It's going to be missing parts of society. As long as you know that,

then you can design an inquiry that is fit for purpose. The problem is if you don't remember that or you're not mindful of that, then you have an inquiry that's going to learn off of a data set that is missing characteristics that's going to be important to whatever that other inquiry is. Those are some of the ways that I think we can actually begin to design a better future. But it means really being very mindful of what's inherent in

in the data set, what's there, what's missing, what also can be imputed. We can talk about that, what kind of variables can be created by the machine, right? It can be imputed as well. And so all of those things are things that we're looking at at MasterCard. So we have a very specific framework that we're using around ethical AI, but then we're really getting into the nitty gritty around different kinds of context variables.
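Her 1910 voter-rolls example suggests a simple mechanical check: before building on a data set, compare group shares against a reference population and surface what is missing or over-represented. The function and all numbers below are illustrative assumptions, not anything Mastercard has published:

```python
def representation_gaps(counts, reference_shares, tolerance=0.10):
    """Return groups whose share in the data deviates from a reference
    population share by more than the tolerance."""
    total = sum(counts.values())
    gaps = {}
    for group, ref_share in reference_shares.items():
        share = counts.get(group, 0) / total
        if abs(share - ref_share) > tolerance:
            gaps[group] = {"in_data": share, "reference": ref_share}
    return gaps

# A hypothetical historical roll: women almost entirely absent.
roll = {"men": 9800, "women": 200}
population = {"men": 0.49, "women": 0.51}
gaps = representation_gaps(roll, population)
# Both groups are flagged: the data set over-represents men and misses women.
```

A check like this doesn't fix the data; as she says, it makes the bias visible so the inquiry can be designed fit for purpose.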

So I want to build on that because I think it's a very critical topic for so many of our listeners. So you talked about as long as you know, you could look for it, but then the things you don't know, and then you find out in hindsight, how do you prepare for those?

Well, I think you have to be aware, again, you know, for what purpose, to what end? And now let's go look at the data sets. And this is where you've got to back up the data food chain to understand your data lineage, understand the

quality, the consistency of the data sets that are going into the analysis. And this is not easy. So nothing I'm saying, I want everybody to hear me clearly. Nothing I'm saying is easy, right? Everything requires a lot of scrutiny and time, but that's easiest if you're doing it yourself. But if a vendor's doing this and is presenting this to you as a combined solution, you need to be even more curious about what is going into this recipe. Every one of these elements becomes super important. Why?

Because if we do it when we're building it, those of us who are the data scientists and the data experts, the data designers, right? We understand it because when we're going to be looking at the information models and the outputs as they come out in the first generations, we're going to be looking for the model drift. We're going to be looking to see, is this an accurate output? Is this truly the result? Or is it because of some inequity in the input?

Which is okay if it's accurate, that's okay, because the machine's going to then start learning and we're going to put more data and more information. But if it's incorrect and we haven't caught it, it could get amplified to the incorrect result. And it could have, and this is what we also really care about, that's when we get to the output analysis, it could amplify really bad outcomes. And the outcomes can be significant or insignificant depending upon if it's being designed for an individual or

or if it's being designed for groups, or if it's just being something, you know, I talk about how I suspect that when the airlines were first modeling the size of the overhead bins, it was on very average heights, okay? But the average height did not include enough women. How many ladies climb on seats to get their luggage at the very back of the bin or need to rely on one of you nice gentlemen to grab our overhead suitcases down to assist us, right?

averages sometimes are the worst result because they're not fit for purpose. I think you're talking about injecting an entire new layer in the data analysis or data usage process, right? Maybe. Because was it not the case, Sam, that 20 years ago, people thought of data as neutral? Like data is neutral. You use what you want.

But data could be potentially harmful. And I think what you're talking about is you're trying to create a mindset shift that says it's not just data comes in, answers go out, but you have to be mindful of why you're using it, how you're using it. And this mindset shift just permeates not just your organization, but also you're saying your vendors and other parts of the ecosystem. It's a huge undertaking. It is a huge undertaking.
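The overhead-bin anecdote a few turns back can be put in numbers. With invented heights, a design sized to the overall mean looks fine "on average" while failing one whole group:

```python
import statistics

# Invented heights in centimeters for two groups (illustrative only).
men   = [178, 182, 175, 180, 177]
women = [163, 160, 166, 158, 165]

overall_mean = statistics.mean(men + women)  # 170.4 for this sample

# Count who falls below a design sized to the overall average.
men_below   = sum(1 for h in men if h < overall_mean)
women_below = sum(1 for h in women if h < overall_mean)
# No man falls below the mean, and every woman does: the average is not
# fit for purpose when the population mixes distinct groups.
```

This is her point that averages can be the worst result: the summary statistic is accurate, yet the thing designed from it excludes an entire subgroup.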

So those are some challenging aspects. What's rewarding? What kind of exciting things happen? These seem a little bit sort of like, oh, gosh, we have to worry about this. I have to worry about that. Where's the get excited about the fun part? Oh, I think all of this is fun. I do. So I'm, you know, but I'm a data geek, right? So

What's fun is actually seeing my firm come to life around this. These are actually very exciting conversations inside of the firm. I don't mean to make them sound like just risk conversations at all. It's intellectually challenging work, but we all agree that we think it's going to bring us to better places and already is bringing us to better places in product design.

You talked about the workforce aspect of it and that technology is necessary, but it's far from sufficient. We heard from one of our other speakers, the president of 1-800-Flowers, about learning quotients, the desire to want to learn.

How do you bring all of that together in terms of the future of the workforce that you guys are looking at and the kind of talent and skill set and attributes that are going to be successful in different roles? Are you taking actions in that direction?

We really look for people who also can design, you know, controls and processes and make sure that we can navigate those. You know, we live in a world of connected ecosystems. And so we're only as good as our partners, but it means those handshakes, right? Those connections are super important to understand. And so how do you make those inquiries of other organizations so that you can create those connected ecosystems that are only going to grow in size and scale and

for the future and how do we help design those? How do we be the leaders in designing some of those is really, really important for us. So it really is this meshing. I talk about being like a three-sided triangle of data skills. Is there a different kind? Exactly, yeah.

But we sit in the middle, right? We sit in the middle of the business. We sit with technology skills, as well as there's an awful lot of policy that we need to understand often. It's the other design constraint of where are laws evolving? Where is the restrictions, the other type of restrictions that we need to know?

Is it going to be on soil? Can we not use the data because it's contractually restricted or it's restricted because it's a certain type of data? It's financial data, right? It only can be used for one specific type of purpose. Or it can only be used at an aggregated level, for example. It can be used at a segment level. Those type of restrictions as well at that compliance level also needs to be understood. So sitting in the middle of the middle is sometimes really hard. So that's why you have to be able to admit you don't know one aspect of that triangle.
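The "aggregated level only" restriction she mentions can be enforced mechanically. Below is a toy guard, with an invented minimum-segment size standing in for whatever a real contract or regulation would actually require:

```python
MIN_SEGMENT = 50  # illustrative threshold, not a real compliance rule

def release_segments(segment_counts, min_size=MIN_SEGMENT):
    """Return only segments large enough to share; suppress the rest so
    data is never exposed below the permitted aggregation level."""
    return {seg: n for seg, n in segment_counts.items() if n >= min_size}

spend_by_segment = {"northeast": 1200, "midwest": 430, "single_zip": 7}
released = release_segments(spend_by_segment)
# The seven-record segment is suppressed; only the larger aggregates survive.
```

In practice such a guard would sit alongside the contractual and legal checks she describes; the code only illustrates the compliance-by-design idea of building the restriction into the data path itself.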

So you mentioned the word design, well, a couple of times at least. Can you tell us a little bit about how you got to your role? Because I think that design thinking is coming from your background. Can you share with us how you got to where you are?

I think it's probably easier if I just go quickly backwards. So prior to being the chief data officer, I was the chief privacy officer for the firm, which was a lot of fun. I enjoyed that role immensely. I helped the company become GDPR compliant, which really was, I think, the moment in time when many, many companies were

were coming to the place where they needed to operationalize a lot of data risk. And we had been on that journey. We had a whole process called privacy by design. We still use that process. Lots of companies use that phrase because it's a regulatory phrase. But we had already been looking at our products and solutions to try to understand how could we embed privacy and security into their very design and into their fabric. And we still do that to this day.

But as we were doing the compliance work and planning it out for the GDPR, the European privacy law, we were realizing we were going to need additional platforms and systems to be built. And as I was doing that, I kept saying to anyone who would listen, who's going to own all the data? Because I really need somebody to speak with. And they just kept patting me on the head saying, you just keep designing everything and it'll be fine. Well, okay. So here we are, you know, that I'm the first chief data officer.

It was kind of an expansion and then a severing of a lot of the work that I had been doing. But I came to be the chief privacy officer. I was previously the chief privacy officer for American Express. I came to that role after a pretty good-sized career at American Express as well, working in a variety of roles. I understood how that firm worked. So financial services is something I've been doing for a while. But the design piece comes in because of 9-11.

I had the misfortune of being downtown that day and the American Express building is right across the street from what was the World Trade Center. And so I saw a lot and lost colleagues that day. And we were all relocated for several months while the building was repaired. And so on New Year's of the next, I don't know, it was probably like January 4th, actually, of 2002,

I was thinking, well, there's a lot of people who were not alive that New Year's, and I thought, well, what can I do? What haven't I done in my life that I really want to go accomplish? And so I decided to go back to design school. This is after law school, so my law transcripts were part of the process, and it was fun getting them to send my transcripts to a design school.

But I got a design degree, and I'm really glad I did because it's made me a better business person, and it's given me a whole different way of thinking about problem-solving.

And then I've had the good fortune. So I do, yes, I do interior design work. And yes, I can help you with your kitchen and your bathroom and all the other things you want to do during COVID. I've helped several colleagues pick out tile and other things. All right, we'll talk after the podcast. We can talk after the podcast about that. But I also then had the good fortune to meet the Dean at Pratt Institute, and I teach in their design management master's program.

And I teach business strategy, and I've taught other courses in that program. And that's also helped me really evolve my design thinking as well. And so all of these things, yes, I have a law degree. Yes, I'm a data person. But having the design thinking as well really makes you not think of things as problems, but just constraints around which you have to design things, and that they will shift over time. So it's no different than, well, the electric is over here, and the plumbing is over here, and you only have so much space. So

So how can you utilize this, right? It's the same thing for, well, I can only use the data in this way, and I want to achieve an outcome. How do I do that?

And so it is the same kind of strategic toolkit. You just kind of flip it for, okay, well, the powder room has to be only this big. So that's one challenge. Or the outcome is to design a lookalike model for fraud that only utilizes, you know, synthetic data, right? Or something along those lines.

But it does kind of give you a little bit of a can-do spirit because you figure something has to be possible because there's raw material in the world. So that's kind of how I approach things. It's very well said.

That's a very weird triangle. The triangle is very weird. Yeah, my triangle is probably very weird. I get that a lot. Although that's a bit of a theme, though, you know, when other people we've talked to, they're bringing in these experiences from other places. And very often they're not super technical things. We're talking about artificial intelligence. You might think that it would quickly go down a path of extreme technical, but I think it's just as likely to go down a path of design.

Yeah, no, what we find on my team and generally is that if you find people that have kind of that combination of right brain, left brain. So I have lawyers and engineers on my team, right? They have both degrees.

It's a really interesting match of being able to translate the business to the technical, the technical to the business, right? Translating the regulatory to the business or the regulatory to the legal. It kind of works as a giant translator role. And if you think about it, that's kind of the moment that we're in right now is the ability to be fluid in

translating concepts from one domain into another. And I think it works. But I do think that some of those competencies are going to be equally important or more important as we continue to evolve, as we begin to develop. There'll be the core skill sets that we've always had, you know, leadership skills and problem solving skills and analytic skills. But I do think that the ability to translate that and then also be able to derive from the output, right,

whether it's from a dashboard or reports or whatever, I think that's also going to be a really important business skill for any businessman or woman. You said right now, you said like, that's the place we are right now. What do you think has caused that? What's making that so important right now?

I mean, I don't know that I was emphasizing that, but I think when I look over the past year and I think about we're just on the year mark of COVID for us, you know, I can remember when we went virtual. I remember so many of the customer and client calls that I was helping to field at the time. Companies were in a moment, right? So many merchants were going online. If they didn't have a huge presence, they needed to create one. Business models were really shaken.

But everybody was looking for data to try to help make some decisions, right? And they needed guidance. If they didn't have the data, they were looking for data. If they had the data, they were looking for guidance on how to interpret it. If they knew how to interpret it, they were looking for lookalike information to make sure they were making the right decisions. So if you just look at that maturity curve...

We have been thrust forward in time. So I think it would have been happening maybe in a less compressed way. I think we've been in a time compression where data and digital has mattered. And so we can't undo that. And so I think these skill sets matter more and more just because of the compression we've been under as a society and

And so we're going to see that be part of what this next normal or next generation, 'cause I don't really like the word normal, next generation, this is now part and parcel of how we interact with each other. We will never go back to just being in person.

This kind of, you know, connecting digitally will always now be part of our mandate, right? And part of the toolbox. So I just think that there will be more of a need to be able to interpret and to utilize than we had a year ago. It seems like your superpower is taking some situation, whether it's 9-11 and turning it into design school or taking a pandemic and turning it into a more fluid approach to things.

It sounds like your superpower in many cases. Thank you. I'll take that one. JoAnn, thank you so much for talking to us. You really picked up on a lot of points. We've covered a lot of ground. Thanks for taking the time to talk with us today. Yeah, thank you so much. You're welcome. Thanks for having me.

Shervin, that was great. JoAnn covered a lot today. What struck you as particularly interesting? I mean, a lot of things struck me in a positive way. I liked how she talked about design thinking. She talked about emotional caution. And I actually think that's been part of her secret sauce and part of MasterCard's success with these kinds of initiatives because

Like if you look in the industry, the CDO role is actually a somewhat precarious role because there's very successful CDOs and there's many CDOs that just don't have that vision or that ability or that autonomy or sort of that multi-sided triangle as she talked about. And so they don't succeed and they last for a couple of years. And so I think there is something to be said for that.

What did you think, Sam, about her comments on implementation, the challenges? Yeah, exactly. I thought it was interesting that they've not changed what they're doing in terms of AI projects just because it's hard. I think they recognize that it's hard and they've doubled down on it. And I think that in and of itself is a little bit of a validation that what they're doing they believe in, that even though it's hard, they're still continuing.

I think they are picking projects based on their strategic focus rather than picking a project to do AI. And, you know, that's something that just keeps coming up. No one's out here saying, hey, you know, let's do some AI today. It's Monday morning. Let's do some AI. Exactly. These people are trying to deliver on things that are in their organization's strategy. And it turns out that in many cases, AI is the tool. But if it's not, that's OK, too.

but it often is. Strategy with AI, not strategy for AI. Where did I read that? Yeah. Yeah, I know, right? Didn't you write it? That was all David. The other thing I thought was quite insightful, and she went there very early on, is about trust and ethical use of AI and sort of reining in the AI solution to make sure it's ethical, but also using it

to uncover hidden bias so that you could be more inclusive, you could be more aware of the workforce, of your customers, of the ecosystem. What I liked was particularly insightful there was

the need not to view all data as equally neutral. And that depending on where the data goes, very harmful things can come out of it, which necessitates another layer. I mean, she hinted at it, right? There's a layer of awareness and governance and a mix of tech artifacts as well as process protocols to make sure that the outputs are accurate

ethical and unbiased. And she talked a lot about the need for mindset change for that to work, that people have to ask, what am I using it for? What is it going to do? What could go wrong? And that constantly probing for things that can go wrong so that you could preempt it.

But also, she saw it as a tool to prevent bias. I mean, we've seen so many stories out there about, oh, this AI system's biased, that AI system has caused bias. And I'm not denying those are true. I'm not a bias denier. It's just there's an opportunity that she saw in that data to rectify that. They are a fact.

No amount of retrofitting is going to take that away. But she also said we don't ignore the data either. We just have to know what that provenance is. I think that's a great point, Sam. Thanks for joining us today. Next time, Shervin and I will talk with Chris Couch. Chris is the Senior Vice President and Chief Technology Officer at Cooper Standard. Please join us. Thanks for listening to Me, Myself, and AI. If you're enjoying the show, take a minute to write us a review.

If you send us a screenshot, we'll send you a collection of MIT SMR's best articles on artificial intelligence, free for a limited time. Send your review screenshot to [email protected].