Today we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts. How can governance of artificial intelligence help organizations?
The word "governance" can come with a lot of baggage and some negative connotations, but governance can enable organizations, too. The question is how. We'll close out the season with a discussion with Kay Firth-Butterfield. She's the head of artificial intelligence and machine learning for the executive committee of the World Economic Forum. With Kay, we'll learn not only about her specific background in the legal profession, but she'll also help us think about what we've learned overall this season. Welcome to Me, Myself, and AI, a podcast on artificial intelligence and business.
Each week, we introduce you to someone innovating with AI. I'm Sam Ransbotham, Professor of Information Systems at Boston College, and I'm also the guest editor for the AI and Business Strategy Big Idea Program at MIT Sloan Management Review.
And I'm Shervin Khodabandeh, senior partner with BCG, and I co-lead BCG's AI practice in North America. Together, BCG and MIT SMR have been researching AI for four years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate.
Well, first, Kay, let me just officially say we're thrilled to have you talk with us today. Thanks for taking the time. Welcome. Thank you. Of course. So, Kay, you've got a fascinating job, or actually, really, jobs. You've got so many things going on. So, for our listeners, can you introduce yourself and describe your current roles? Yes, certainly. I'm Kay Firth-Butterfield, and I am head of AI and machine learning at the World Economic Forum.
So what that essentially means is that we work with multi-stakeholders, so companies, academics, governments, international organizations and civil society to really think through the governance
of AI. So when I say governance, I say it very much with a small g: we're thinking about everything from norms through to regulation. But AI, we feel, is less susceptible to regulation. Can you tell us how you got there? Give us a little bit of background about your career to date, and how did you end up in this role?
I am by background a human rights lawyer. I'm a barrister, the type of trial lawyer that wears the wig and gown.
And I got to a point in my career where I was being considered for a judicial appointment. In the UK, they kindly sort of try out whether you want to be a judge and whether they think you're any good at it. I don't know what their view was, but my view was that it wasn't the culmination of a career in the law that I really wanted.
And I had been very interested in the impact of technology on humans and human rights. And so it gave me this wonderful opportunity to rethink where my career would go. So I was fortunate to be able to come to Austin, to teach AI, law, and international relations there, and
to pursue my own studies around law and AI, international relations and AI, and the geopolitical implications of this developing technology. And then, purely by luck, I met a person on a plane from Heathrow to Austin. It's a 10-hour flight.
He was the chair and CEO of an AI company who was thinking about AI ethics. And this was back in 2014 when hardly anybody apart from me and the dog and some other people were thinking about it. And so he asked me as we got off the plane,
if I would like to be his chief AI ethics officer. And so that's really how I moved into AI, but obviously bringing the social justice background with me: the ideas of what benefits AI can bring to society, and also being cognizant of what we might have to worry about.
And so I have been vice chair of the IEEE's Initiative on Ethically Aligned Design since 2015. I was part of the Asilomar conference thinking about ethical principles for AI, back again in 2015.
And so my career ended up with me taking this job at the Forum in 2017. I say ended up, but maybe not. Who knows? Yeah, we won't call it an end just yet. So what does artificial intelligence mean?
Well, part of the problem and part of its complexity is that AI means different things to different people. So AI means one thing to an engineer and another thing to a person who's using it as a member of the public through their phone. So we're shifting our definition as we go, and we'll continue to as well. Yeah, there's that old adage that it's not artificial intelligence anymore once it's done.
And how much of that do you think is education, stemming from a lack of understanding, versus a technical or process complexity inherent in putting all that governance in place? I mean, I guess part of it is you can't really manage or govern that which you don't really quite understand. Is that most of the battle? And once everybody understands it, because it's common sense, then they begin to say, well, now we can govern this like anything else we would govern, because now we understand it.
Yes, well, I think that it's organizational change. It's education and training for employees. But it's also thinking very carefully about product design, so that if you are actually developing an algorithmic product, what's the path of that product from the moment that you dream up the idea to the moment that you release it, either to other businesses or to customers, and maybe even beyond that?
I couldn't help but pick up on one of the things you said about governance being seen as negative. One of our studies a few years ago found that health care shared data more than other industries, and that seems counterintuitive. But when we dug into it, what we found is they knew what they could share. They had structure about it. And so that structure then enabled them to know what they could do and know what they couldn't do. Whereas other places, when they talked about data sharing, it was, well, we'll have to check with our compliance department and see what we can do. There's much less checking when it's explicit, and the more explicit we can be, the better. That's the enabling factor of governance versus this sort of oppressive factor of governance. Yes, I think governance has just got itself a bad name because of the perception that regulation impedes innovation. And that's not necessarily so. Yeah.
I think that at the moment we're exploring all these different soft governance ideas. To begin with, yes, we will probably see regulation: we will see regulation out of Europe around things like facial recognition and the use of AI in human resources, because they're classified as high-risk cases. But a lot are not necessarily high-risk cases. What they are are things that
businesses want to use, but they want to use wisely. So what we have done as well is create a lot of toolkits, for example, and guidelines and workbooks, so that companies or governments can say, oh, yes, this can guide me through this process of, for example, procurement of artificial intelligence. Just to give you an example, we surveyed a number of board members
on their understanding of artificial intelligence. They didn't really understand artificial intelligence terribly well. And so what we did was develop an online tool for them to understand artificial intelligence, but also then to say, okay, my company is going to be deploying artificial intelligence. What are my oversight responsibilities?
The tool also includes long questionnaires: the things that you might want to ask your board if you're on the audit committee or the risk committee, or you're thinking about strategy.
So really digging into the way that boards should be thinking across the enterprise about the deployment of AI. Yeah, because I'm guessing most people need that guidance. Yeah, most people for sure need that guidance. And I think this is a very well-placed point you're making. What we don't want to happen is to be so far behind in the understanding, education, and governance of any technology that it becomes such a black box that there's a huge activation energy, you know, for anybody to get there. And, you know, we heard that also from
Slawek Kierner from Humana; we heard that from Arti at H&M: the importance of really big cross-organizational training, not just for the board and not just for a handful, but for almost everybody. You know, I think we heard from Porsche that they actually did training for their entire technology organization, right?
This is AI. This is what it could do right. This is what it could do wrong. This is what you need to learn. And, by the way, this is how it can give you all these new designs that you as an engineer or a designer could explore to design the next-generation model, and this is how it could be your friend. But I think you're pointing out that it's
time for us to really internalize all of these as not nice-to-haves but critical, even, I would say, almost a first step before getting too far ahead. Yes, yes, absolutely. And in fact, there's a company in Finland that requires everybody to learn something about AI, even at the very most basic level. And they have a course for their employees, which is
important. Obviously, not everybody can master the math, but you don't even have to go that far, nor should you have to. I can't help but build off of your human rights background. One of the things that strikes me is there are incredible advances with artificial intelligence used by organizations, particularly large organizations, particularly well-funded large organizations. How do we as individuals stand a chance here? Do we each need our own individual AI working for us?
How can we empower people to work in this perhaps lopsided arrangement? Yes, I think the imbalance of power is something that we have to address as individuals, as companies (you know, there are some companies with more AI capabilities than others), as nonprofits, and also as a world, because at the moment the concentration of AI companies,
talent, skills and jobs is very skewed around the world. And we really have to think globally about how AI is deployed on behalf of humans and what makes us human and where we want to be maybe in 15 or 20 years when AI can do a lot of the things that
we are doing currently. So I think that it's systemic and structural conversations that we have to have.
in all those different layers as well. Right, the systemic and structural issues are big because, I have to say, I don't think most companies intend to start AI with an evil bent. I mean, they're not cackling and rubbing their hands together and plotting. I think these things are more insidious and systemic than that. So how do we do that? In my experience, you know, of working with a lot of companies, governments, etc.,
I would say you're absolutely right. Companies want to go in doing the right thing, and what we need to be doing is making sure that we help them do the right thing. It's very much that perhaps a lack of understanding of the technology is going to skew how they use it. And so those are all areas that we have been trying to focus on at the Forum,
so that people who go into using AI with the right mindset actually come out with the right result. And, you know, your company is a little piece of society. The idea should be that everybody works together because you're actually going to end up with a better product. And I think to your point, the better we enable our customers or the general public to understand AI,
the less scary it will be.
I also fear that there are many companies that are being told to go out and get AI, and they actually don't know what it is that they're getting, or really what the benefit is going to be, or what the downsides might be. So having a board that is capable of asking the right questions is absolutely crucial. But, you know, we're currently working on a similar toolkit for different types of C-suite officers,
so that they too can be empowered to understand more. But I also see the need for thinking carefully about AI as both top-down and bottom-up. That's why I go back to that survey that you did, where understanding across the organization is actually so important. And I think you're seeing some of these developments amongst the companies that have been dealing with this, like Microsoft: they went for an Aether Committee; they went for really thinking strategically about how they're using AI. And so I think that we have the benefits of what they learned early on that we can then begin to bring into the sector, from board to designer. Yeah.
And the good part about that is that the education component keeps it from just being ethics theater, kind of the thin veneer to put the stamp on it and check the box that, yes, we've done the ethics thing. But I guess, what's the role for business in trying to educate people to understand and have a better human-machine collaboration? Obviously, we've heard a lot about the potential for AI to affect the workplace and job security, but people are already incredibly busy at work. What potential is there for AI to kind of free us from some of these mundane things and lead to greater innovation? When we talked with Gina Chung at DHL, she's in the innovation department, and that's where they're focusing their AI efforts. Is this a pipe dream, or is there potential here?
No, I think that it's certainly not a pipe dream. And most people have innovation labs, both in companies and countries. UNICEF has an innovation lab; we were talking about children and AI innovation.
So the potential for AI to free us from some of the things that we see as mundane, the potential for it to help us to discover new drugs, to work on climate change, they're all the reason that I stay working in this space.
And you might say, well, you work on governance; doesn't that mean that you just see AI as a bad thing? And that's not true. Just as an example, at the moment, you know, we have problems just using Zoom for education, because there are many kids who don't have access to broadband. So that brings us up against the questions of rural poverty and the fact that
many people move from rural communities to cities. And yet, if we look at the pandemic, cities tend to be bad for human beings. So, thinking about the innovations that AI will create, which could allow rural areas to be as wealthy as cities,
we should be having really deep structural conversations about what our future looks like. Does it look like Blade Runner cities, or does it look like something else? You were mentioning, I guess I was suggesting, kids as one extreme, and you had been talking about the board level, which seems like another extreme. It seems like there are a lot of other people between those two extremes who would need to learn how to work alongside these machines.
And I guess I'm just looking for something practical: How do businesses get people to be comfortable with a machine as a teammate versus a normal worker as a teammate? For example, we've seen people completely impatient with robots: you know, if it's not perfect right off the bat, then why am I bothering teaching this machine how to do this? You'd never be that impatient with another coworker; you remember when you were first learning to do a job. So how do we get that same sort of, I guess, maybe empathy for the poor little machine? Yeah, well, I think, as I say, I do think it's an education and training piece that the company has to put in place. But also,
it's important because sometimes we over-trust the technology: the computer told us to do it. You know, that's something that we've been noticing, for example, in the criminal sentencing problems that we've been having, where judges have been over-reliant on the fact that the machine's telling them this. And so it's that education to not over-trust the machine, and also to trust that the machine is not going to take your job,
is not going to be spying on you. You know, there are a lot of things that employees are frightened of. And so you've got to make sure that they have some better understanding of what that robot or machine is going to do with them, and that it's a human-machine interaction as opposed to one dominating the other. What's your thinking on this:
to bring about large-scale understanding and change, not just at the board level but in the fabric of the organization, how important is it that companies begin to understand the different modes of interaction between AI and humans and begin to test some of those things?
Obviously, that's really important. We do have a project, actually led by Salesforce, called the Responsible Use of Technology. What we're trying to do in that is to bring together all the different companies, like BCG, who are actually thinking about these issues
and come up with some best practices. So how do you help your employees to really think about this interaction with AI? How do you make sure that the company itself is focused on ethical deployment of technology, and, where your employees are going to be working specifically with the technology, that they don't fear it? I think there's a lot of fear, and that is at the moment probably not useful at all. You clearly can't be friends with somebody if you're afraid of them. Yes. And what we are seeing is that, you know, when I was talking about AI and ethics in 2014, very few people were talking about it. Now everybody does.
Not everybody, but every enlightened person is talking about it. But business is talking about it, and we're talking about business here. Governments are talking about it. If there is something that is unsafe, usually we regulate the unsafe. So I think actually the time is now to be having these conversations:
Do we regulate? Do we depend upon more soft-law approaches? Because what we are setting in place now is the future. And it's not just our terrestrial future: if we're going to go to Mars, we're going to use a lot of AI. We need to be really having these conversations. And one of the things that we have been doing is having a conversation that looks at positive futures.
So you can sort of look across the panoply of sci-fi, and it's almost all dystopian. And so what we wanted to do is say, okay, we have this potential with AI; what do we want to create? And so we brought sci-fi writers and AI scientists and businesspeople and economists together
to really sort of have that conversation. So we're having the conversation about AI ethics, but the next conversation has to be, how do we systematically want to grow and develop AI for the benefit of the world and not just sectors of it? I can recall the flavor of these kinds of conversations I would have five years ago; they were very heavily tech-focused. What does that tell you in terms of the profile of future leaders of AI? What are the right sorts of traits, skills, and profiles, do you think?
I have a humanities background, and I think we will see more humanities people. So, you know, there's the AI piece that the technologists have to work on. But what we do know is that there's a Gartner study that says that by 2022, if we don't deal with bias, 85% of algorithms will be erroneous because of that bias.
If that's anywhere near true, that's really bad for your R&D and your company. So what we know is that we have to create those multi-stakeholder teams. And also, I see the future of AI, this discussion, as part of ESG.
So I see the AI ethics discussion moving into that more social realm of the way that companies think about some of the things that they do.
And that's something that we heard from, for example, Prakhar at Walmart: that they're thinking big picture about how these will connect and remove inefficiencies from the process. And that certainly has ESG implications. What we've seen with some of the other folks we've discussed artificial intelligence in business with is that they've transferred learning from things that they've done in one organization to another. This education component that you've mentioned before hasn't happened only within companies; it's happened across companies, and it's happened across functional areas. How do we encourage that? How do we get people to have those diverse experiences?
Yes, I think that that's A, right, and B, really important that we do. So I was actually talking to somebody yesterday who had set up some really good resources and training around artificial intelligence in a bank, then moved to government, and then moved to yet another private sector job and is doing the same thing.
And many of the trainings that we need to be thinking about with artificial intelligence are cross-sectoral.
So we did an interesting look at all the ethical principles that are out there. There are over 190 now, from the Beijing principles through to the Asilomar ones, et cetera. That's different from 2014. It's very different from 2014. And one of the things that a lot of people have said to me in the past is, well, whose ethics are you talking about, anyway?
And what we found was actually there were about 10 things that were ubiquitous to all of those 190 different ethical principles. So there are 10 things that we care about as human beings wherever we are in the world.
And those 10 things are actually fairly cross-sectoral. They're about safety and robustness. They're about accountability, transparency, explainability. They're about that conversation we had earlier, human-machine interaction. And they're about how AI benefits us as humans. So
I think that that ability to be able to take what you've learned in one sector and move it to another is important and relatively straightforward. And also it seems very human. Yeah. That's something that I think that the machines themselves are going to struggle with and need at least our help for a while. Oh, undoubtedly. Yes. And it probably doesn't need saying to this audience, but it's worth saying that these machines are not really very clever yet.
There's still time. We're still okay. Thank God for that.
Kay, thank you for taking the time to talk to us. We've really enjoyed it. Yeah, thank you so much, Kay. It's been a pleasure hearing your views and your leadership on this topic. Thank you so much to both of you. It's been a pleasure and a privilege to be with you; I could have talked on for hours. But we can't, because that is the end of our episode, and that is the end of our first season. Thank you for joining us on this podcast. Thank you very much.
Thanks for listening to Me, Myself, and AI. If you're enjoying the show, take a minute to write us a review. If you send us a screenshot, we'll send you a collection of MIT SMR's best articles on artificial intelligence, free for a limited time. Send your review screenshot to smrfeedback@mit.edu.