Today we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts. Ethical use of technology is and should be a concern for organizations everywhere,
but it's complicated. Today, we talk with Elizabeth Renieris, founding director of the Notre Dame IBM Technology Ethics Lab, about what organizations can do today without waiting for the perfect answer. Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, professor of information systems at Boston College.
I'm also the guest editor for the AI and Business Strategy Big Idea program at MIT Sloan Management Review.
And I'm Shervin Khodabandeh, senior partner with BCG, and I co-lead BCG's AI practice in North America. And together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities across the organization and really transform the way organizations operate.
Today we're talking with Elizabeth Renieris. Elizabeth is the founding director of the Notre Dame IBM Technology Ethics Lab, as well as founder and CEO of Hackylawyer. Elizabeth, thanks for taking the time to talk with us today. Welcome. Thanks for having me. Let's start with your current role, your new role at the Notre Dame IBM lab. I was going to ask which one. Well, I was thinking about the ethics lab, but you actually can start wherever you like.
Sure. So as you mentioned, I've been recently appointed as the founding director of a new technology ethics lab at the University of Notre Dame.
It's actually called the Notre Dame IBM Technology Ethics Lab, as the generous seed funding is actually from IBM. My appointment is a faculty appointment with the University of Notre Dame. And the intention of the lab is to complement Notre Dame's existing Technology Ethics Center, which is a very traditional academic research center focused on technology ethics.
So you can imagine there are many tenured faculty members affiliated with the center, and they produce sort of traditional academic research: peer-reviewed journal articles. The lab, in contrast to that, is meant to focus on practitioner-oriented artifacts. So the things that we want to produce are for audiences that include companies themselves, but also lawmakers and policymakers, civil society, and other stakeholders.
And we want them to be very tangible and very practical. So we're looking to produce things like open source toolkits and model legislation and explainer videos and model audits and a whole array of things that you wouldn't necessarily find from a traditional academic research center. What we really need in this space is we need centers and institutions that can translate between academia and practice.
The beauty of housing the lab in the university, of course, is having access to the faculty that's generating the scholarship and the theoretical foundations for the work.
Can you comment a bit more on how you guys make that happen? Because I know there's a lot of primary research and then you have the faculty's point of view. And I assume that there's also industry connections and some of these applications in real life come into play, which is really important, as you say. What are some of the ways you guys enable that?
Right now, as we're getting up and running, really what we're focusing on is convening power. So we're looking to convene groups of people who aren't necessarily talking to each other and to do a lot of that translation work. So right now,
The intention is to put out an official call for proposals to the general public and to be sourcing projects from all the different stakeholders that I outlined, consisting of teams of individuals who come from different industries and sectors and represent different sectors of society, and to have them focus on projects that actually try and solve real-world challenges. So, for example, right now, during the pandemic, those challenges might be something like
returning to work or returning to school. And then, of course, what we want to do as the lab is we want to take the brilliant work that the faculty at Notre Dame is doing and eventually elsewhere and leverage that to sort of underpin and to inform the actual projects that we're sourcing. And we can hopefully build some kind of narrative arc around how you start translating that theory into practice.
It seems like you're looking at ethics in AI from two sides, right? One is the ethics of the technology itself, as in, is what the technology is doing ethical, and how do you make sure it is ethical? And the other is, how can technology help the ethics conversation itself?
I think it's absolutely both. In my mind, you cannot separate a conversation about technology ethics from a conversation about values, both individual values and collective and societal values. So what I find really fascinating about this space is that you're right. While we're looking at the ethical challenges presented by specific technologies, we're also then confronted with having to identify and prioritize and reconcile competing values of different people and communities and stakeholders in the conversation. And
you know, when we have a specific challenge or a specific technology, it actually really turns the mirror back on us as a society and forces us to ask the question of what kind of society do we want to be? Or what kind of company do we want to be? Or what kind of, you know, individual or researcher do we want to be? And what are our values? And how do those values align with what it is that we're working on from a technology standpoint? So,
I believe it's absolutely both. And I think that's also been part of the evolution of the ethics conversation in the last couple of years is that while perhaps it started out with the lens very much on the technology, it's been very much turned around and focused on who's building it, who's at the table, what's the conversation, what are the parameters, what do we count, what values matter? And actually, from my standpoint, those are the really important questions that hopefully technology is an entry point for us to discuss.
Sam, I was just going to ask Elizabeth to maybe share with us how she ended up here, like the path that you took. How much time do you have? I'll give you the abbreviated version. So I was classmates with Mark Zuckerberg at Harvard, and I've been thinking about these issues ever since.
But more seriously, my sort of professional trajectory was that after law school, I worked at the Department of Homeland Security for a couple of years in their general counsel's office. And this was a long time after 9/11. And I actually am from New York and have vivid memories of the event. And I was really struck by how much of the emergency infrastructure was still in place more than a decade after it was initially rolled out. And
I subsequently went back to obtain an LL.M. in London and accidentally, having arrived in the year 2012, started working on the first draft of what became the General Data Protection Regulation, or the GDPR. And through that process, I gained a lot of exposure to the ad tech industry, the fintech industry. Somewhere along the way, I read the Bitcoin white paper, came back to the States just before the referendum, and was branded a blockchain lawyer because I had read the Bitcoin white paper. Yeah.
So then I had this interesting dance in trying to be a data protection and privacy lawyer and also split my time with the blockchain and distributed ledger folks. And I quickly picked up on some of the unsavory, unethical behavior that I saw in the space. And I was really bothered by it, and it also sort of triggered these memories of the experience with Mark Zuckerberg scraping the faces of my classmates in university. And it was just an interesting thing that I didn't appreciate at the time, but it sort of bubbled in the background.
And that led me to actually work in-house at a couple of companies, startups based in Silicon Valley and elsewhere. And there was more of this sort of unsavory behavior. And I thought, if only we could talk about technology, engage with technology, and be excited about it without all of these terrible downsides.
I think part of the reason I was observing that is because you didn't have the right people in the room. So you had technologists who were talking past and over lawyers and policymakers. And that was my idea in late 2017 to start my Hackylawyer consultancy. And the idea with that was, you know, I'm fairly technically savvy, but I have this great training, these legal skills, these public policy skills. I'd like to be able to translate across those groups
and bring them together. And I built up a pretty successful consultancy around that for a couple of years thereafter. That's a very inspiring story from the beginning to here. I kind of want to ask about the long version, but I don't know if we have time for that. Let's follow up on a couple of things that you mentioned. One is, and I think you and Shervin both talked about this briefly, but
there's a little bit of excitement about some of the bad things that happen. When we see these cases of AI bias come out and make headlines, there's also a silver lining, and that lining is pretty thick: It's really highlighting some of these things that are already existing or already going on, and these seem like opportunities.
But then at the same time, you also mentioned how when we react to those, we put things in place and more than a decade later, the DHS protections were still in place. So how do we balance between reacting to these things that come up, between addressing biases and not putting in draconian measures that stifle innovation?
You're right that there are opportunities. I think the idea is that it depends on the challenge presented. I don't like the frame of stifling innovation, or the tension between, you know, innovation and other values like security or privacy or safety. I think we're seeing this play out again in the pandemic, right, where we are...
often being pushed a narrative around technologies that we need to deploy and technologies that we need to adopt in order to cope with the pandemic. And so we saw this in the debate over exposure notification and contact tracing apps. We're seeing this right now very prominently in the conversation around things like immunity certificates and vaccine passports.
I think the value of ethics there, again, is that rather than look at the kind of narrow particulars and tweak around the edges of a specific technology or implementation, to step back and have that conversation about values and to have the conversation about what will we think of this in five or 10 years. So the silver lining of what happened after
9/11 was that we've learned a lot of lessons from it. We've seen how, you know, emergency infrastructure often becomes permanent. We've seen how those trade-offs in the moment might not be the right trade-offs in the long run. So I think if we don't take lessons from those... And this is where it's really interesting in technology ethics: there's so much intersection with other fields like STS [science and technology studies], history, and anthropology, and that's why it's so critical to have this really interdisciplinary perspective.
Because all of those things, again, go back to a conversation about values and trade-offs and the prioritization of all of those. And some of that also, of course, has to do with time horizon, going back to your question before. So it's easy to take the short view. It can be hard to take the long view. I think if you have sort of an ethical lens, it's important to balance both. Yeah, and also I think you're raising an interesting point. That is, with AI particularly, the consequences of a...
misstep are very long-term, because the algorithms keep getting embedded and they multiply. And by the time you find out, it might not be as easy as just replacing it with a different one, because it has a cascading effect. On the point about innovation, AI can play a role in helping us be more ethical.
We've seen examples. I think one of our guests talked about Mastercard, right? How they're using AI to understand the unconscious or unintended biases that their employees might have. What are your views on that, on AI specifically as a tool to really give us a better lens into biases that might exist? I think the challenge with AI is that it's so broad and complex.
Definitions of AI really abound, and there's no real consensus around what we're even talking about. And so I think there's the risk that we sort of use this broad brush to characterize things that may or may not be beneficial. And then we run the risk of decontextualizing, so we can say, you know, we have a better outcome, but relative to what? Or, you know, what were the trade-offs involved? And I think it's not just
AI, but it's the combination of a lot of new and advanced technologies that together are more than the sum of their parts, right? So AI plus network technologies plus some of the ones I've mentioned earlier, I think, are that much harder to sort of unwind or course-correct or, you know, remedy when things go wrong. So one of the challenges I see in the space is that, again, we can tweak around the edges and we'll look at a specific implementation or a specific tech stack,
And we won't look at it in the broader context. It's how does that fit into a system? And what are the feedback loops? And what are the implications for the system as a whole? And I think that's one of the areas where...
The technology ethics conversation is really useful, particularly when you look at things like relational ethics and things that are a lot more concerned with systems and relationships and the interdependencies between them. I worry that it's a little too soon to declare victory there, but it's definitely something to keep an eye on.
Yeah, I mean, as you say, the devil's in the details. This is the beginning of having a dialogue and having a conversation on a topic that otherwise, you know, would not even be on the radar of many, many people. What is your advice to executives and technologists who are right now building technology and algorithms? Like, what do they do in these early stages of having this dialogue?
Yeah, that's a tough question. Of course, it depends on their role. So you can see how the incentives are very different for employees versus, you know, executives versus shareholders or board members. So thinking about those incentives is important in terms of framing the way to approach this.
That being said, there are a lot of resources now, and there's a lot available in terms of self-education. And so I don't really think there's an excuse at this point to not really understand the pillars of the conversation, the core texts, the core materials, the core videos, some of the principles that we talked about before. I mean, I think there's so much available by way of research and tools and materials to understand what's at stake that to not think about one's work in that context feels more than negligent at this point. It almost feels reckless in some ways. Nevertheless, I think the important thing is to contextualize your work, to take a step back. This is really hard for corporations, especially ones with shareholders. So we can understand that. We can hold both as true at the same time and think about taking it upon yourself to self-educate.
There are more formal means of education. So one of the things that we are doing at the lab, of course, is trying to develop a very tangible curriculum for exactly the stakeholders that you mentioned, with the specific idea of taking some of the core scholarship and translating it into practice so that it becomes a useful tool as well. But at the end of the day, I think it's a matter of perspective development.
And accepting responsibility for the fact that no one person can solve this. At the same time, we can't solve this unless everyone sort of acknowledges that they play a part. And that ties into the things your lab is doing, because, you know, I think the idea of everybody learning a lot about ethics kind of makes sense at one level. On the other hand, we also know, we've seen with privacy, that people are lazy. We are all somewhat lazy. We'll trade the long term for the short term. And it seems like some of what your lab is trying to set up is making that infrastructure available to reduce the cost, to make it easier for practitioners to get access to those sorts of tools.
Yeah, and I think education is not a substitute for regulation. So I think ultimately it's not on individuals. It's not on consumers. My remarks shouldn't be taken as saying that the responsibility to really reduce and mitigate harms is on individuals entirely. I think the point is that we just have to be careful that we don't wait for regulation. One of the things that I particularly like about the technology ethics
space is that it takes away the excuse to not think about these things before we're forced to, right? So I think in the past, there's sort of been this luxury in tech of waiting to be forced into taking decisions or making trade-offs or confronting issues. Now, I would say with tech ethics,
You know, you can't really do that anymore. I think the zeitgeist has changed. The market has changed. Things are so far from perfect. Things are far from good. But at least in that regard, you can't hide from this. I think in that way, things are at least somewhat better than they were. I also feel like part of that is that many organizations, to Elizabeth's point, not only don't have the dialogue; even if they did, they don't have the necessary infrastructure or investments or incentives to actually have those conversations. And so I go back to your earlier point, Elizabeth, that, you know, we have to have the right incentives, and organizations have to have, with or without the regulation, the investment and the incentives to actually put in place the tools and resources to have these conversations and make an impact.
You have to also align the incentives. Some of these companies, I think, you know, actually want to do the right thing. But again, they're sort of beholden to quarterly reports and shareholders and resolutions. And they need the incentives. They need the backing from the outside to be able to do what it is that is probably in their longer term interests. You mentioned incentives a few times. Can we get some specifics for things that we could do around that to help align those incentives better? What would do it?
I think those sort of process-oriented regulations make sense, right? So what incentive does a company have right now to audit its algorithms and then be transparent about the result? None. They might actually want to know that. They might actually want an independent third-party audit. That might actually be helpful from a risk standpoint. If you have a law that says you have to do it, most companies will probably do it. So I think those types of, you know, they're not even nudges. I mean, they're clear interventions.
They're really useful. I mean, I think the same is true of things like board expertise and composition. We may want to think about: Is it useful to have super-class share structures in Silicon Valley, where basically no one has any control over the company's destiny apart from one or two people? So I think these are all, again, common interventions in other sectors and other industries. And the problem is that this sort of technology exceptionalism
was problematic before, but now, when every company is a tech company, the problem has just metastasized to a completely different scale. The analogy I think about is food. I mean, any business that sells food, now we want them to follow food regulations. That certainly wasn't the case 100 years ago, when...
Upton Sinclair wrote The Jungle. I mean, it took that to bring that sort of transparency and scrutiny to food-related processes. But we don't make exceptions now for, oh, well, you know, you're just feeding 100 people; we're not going to force you to comply with health regulations. Exactly. Yeah, I think that's actually a very good analogy, Sam, because as I was thinking about what Elizabeth was saying earlier,
my mind also went to ignorance. I mean, I think many users of a lot of these technologies, highly, highly, you know, senior people, highly, highly educated people, may not even be aware of what the outputs are, or what the interim outputs are, or how they come about, or what all of the hundreds and thousands of features that give rise to what the algorithm is doing actually are. And so it's a little bit like the ingredients in food, where we had no idea some things were bad for us, and some things would kill us, and some things that we thought were better for us than the other bad thing are actually worse for us. So I think with all of that, it's about bringing some light into it: education as well as regulation and incentives.
The point is, we acted before we had perfect information and knowledge. And I think there's a tendency in this space to say, we can't do anything, we can't intervene, until we know exactly what this tech is, what the innovation looks like. We got food wrong, right? We had the wrong dietary guidelines. We readjusted them. We came back to the drawing board. We recalibrated. The American diet looks different. I mean, it's still atrocious, but we can revisit. But we keep revisiting it, which is your point. And we iterate. And that's exactly what we need to do in this space: to say, based on what we know now, and that's science, right? Fundamentally, science is sort of the consensus we have at a given time. It doesn't mean it's perfect. It doesn't mean it won't change. But it means that we don't get paralyzed; we act with the best knowledge that we have and the humility that we'll probably have to change this or look at it again. So, you know, the same thing happened with the pandemic, where we had the WHO saying that masks weren't effective and then changing course. But we respect that process.
Because there's the humility and the transparency to say that this is how we're going to operate collectively because we can't afford to just do nothing. And I think that's where we are right now. Very well said. Well, I really like how you illustrate all these benefits and how you make that a
concrete thing for people. And I hope that the lab takes off and does well and makes some progress and provides some infrastructure to make it easier for people. Thank you for taking the time to talk with us today. Yeah, thank you so much. Thanks so much for having me. This was great. Shervin, Elizabeth had a lot of good points about getting started now. What struck you as interesting, or...
What struck you as a way that companies could start now? I think the most striking thing she said, I mean, she said a lot of very, very insightful things, but in terms of how to get going, she made it very simple. She said, look, this is ultimately about values. If it's something you care about, and we know many, many organizations and many, many people and many very senior people and powerful people do care about it, then do something about it. But the striking thing she said is that you have to have the right people at the table and you have to start having the conversations. And as you said, Sam...
This is a business problem. That's a very managerial thing. Yeah, it's a managerial thing. It's about allocation of resources to solve a problem. And it is a fact that some organizations do allocate resources to responsible AI, AI governance, and ethical AI, and some organizations don't. And so I think that's the key lesson from here: If you care about it, you don't have to wait for all the regulation to settle down.
I liked her point about revisiting it as well. And that comes with the idea of not starting perfectly. Just plan to come back to it, plan to revisit it. Because these things, even as you said, Shervin, if you got it perfect...
Technology would change on us before then. Exactly. And you would never know you got it perfect, you know, whatever you do. Yeah. The perfection would be lost in immortality. I'm still trying to figure out if coffee is good for your heart or bad for your heart, because it's gone from good to bad many times. Well, I mean, I think that's, you know, some of what people face with a complex problem. I mean, if this was an easy problem, we wouldn't be having this conversation. Yeah.
If there were simple solutions, you know, if people are tuning in to say, all right, here are the four things that I need to do to solve ethical problems with artificial intelligence, you know, we're not going to be able to offer that. We're not quite that BuzzFeed level of being able to say, here's what we can do, because it's hard.
The other thing that struck me is that, you know, she has a lot of education and passion in this space that I think is actually quite contagious because I think that's exactly the mentality and the attitude that many organizations can start to be inspired by and adopt to start moving in the right direction rather than waiting for government or regulation to solve this problem. We can all take a role in becoming
more responsible and more ethical with AI starting now. We already have the right values and we already know what's important. Nothing is really stopping us from having those dialogues and making those changes. Thanks for joining us for season two of Me, Myself and AI. We'll be back in the fall of 2021 with season three. In the meantime, check the show notes for ways to subscribe to updates and stay in touch with us. Thanks for joining us today.
Thanks for listening to Me, Myself, and AI. If you're enjoying the show, take a minute to write us a review. If you send us a screenshot, we'll send you a collection of MIT SMR's best articles on artificial intelligence, free for a limited time. Send your review screenshot to smrfeedback at mit.edu.