Hey, hey, it's Dan Turchin from PeopleRain, host of AI and the Future of Work podcast.
Welcome back to a special edition of the podcast. We've published more than 300 episodes. We get asked all the time to highlight a few of our favorites, which is very tough to do. We listened and we started last month with a special episode featuring some of our top conversations on data privacy in honor of Data Privacy Day, celebrated in January. It was a great episode. Check it out in the link in the show notes.
We're going to keep things going with that series today. We're bringing back short clips from some of the best conversations with female leaders in AI who are defining the future of AI ethics. What makes this episode particularly special is that all the experts featured in this compilation are women.
And the timing of this episode coincides with International Women's Day, when we celebrate the vital contributions that women continue to make. We're joined by the great Navrina Singh, a trailblazer in enterprise AI governance, Meredith Broussard, an advocate for algorithmic accountability, Juliette Powell, who champions principles for responsible innovation, and Merve Hickok, a leader at the intersection of AI ethics and social justice.
Together, they discuss the most pressing challenges we face in building trustworthy AI, things like combating bias, ensuring accountability, and crafting actionable frameworks for responsible AI deployment. Get ready for an inspiring set of conversations about how we're creating a fairer, more equitable future with AI.
And stay tuned because we'll be releasing more clips soon in special episodes of AI and the Future of Work to make it easier to reconnect with amazing past guests.
First up, we have Navrina Singh, founder and CEO of Credo AI, a governance SaaS platform empowering enterprises to deliver responsible AI. As a member of the National AI Advisory Committee and a World Economic Forum global leader, Navrina is driving critical conversations about building trust in AI systems.
In this excerpt, Navrina shares her vision for regulating AI to ensure transparency, accountability, and trust, especially in areas like healthcare, hiring, and education. This truly is one of my favorites. Listen to the end of the full episode to hear Navrina's amazing backstory. I know from following you on social that you're never shy with opinions. So let's jump right in. If Navrina ran the world,
How would you regulate AI? You know, what a great question. I wish I did run the world. But since we don't run the world right now, I think what I'm really excited about is that Credo AI is at all the important tables where the conversation around what the guardrails for AI should look like needs to happen.
And I think one of the things that we've done at Credo AI is really making sure that not only are we at those tables talking about how we should regulate artificial intelligence, but that we're also attracting multiple stakeholders to those tables. So if we were to regulate AI, I think we would just start with the outcome. Why do we want to regulate AI? The reason for that is to build trust, right?
You know, when you have such an important, powerful technology that is showing up in hiring decisions, in healthcare decisions, in, you know, facial recognition systems, in, you know, social media, which my nine-year-old uses day in, day out, you need to start really thinking about how do you build trust with the consumers first?
and whose jobs are at stake, whose healthcare is at stake, whose education is at stake because of this powerful technology. So I think really starting the regulation discussion with how you engender trust is really critical. And in terms of the core tenets of regulation that we have been very actively vocal about, the first and foremost is transparency.
That transparency can show up in different ways throughout the AI value chain. When you think about the vendors, the builders of this technology: are they disclosing not only the sources of their data sets, but did they get consent for those data sets? How were those data sets tested? How were the systems that they're building and putting out in the world tested? Did they have multi-stakeholder perspectives and red teaming associated with those outcomes?
So I think there's a lot that can be done in terms of transparency at the vendor level. And then as you walk downstream to the application developers, especially in the context of the foundation models and frontier technologies that we are living through,
it's really critical to start thinking about how these applications are going to come into being and what kind of transparency and disclosures can be, you know, put on top of them. Because so much of building that trust depends upon context:
context of use, context of deployment, and context of where those applications really came from. So I think the first and foremost thing for us is really focusing on transparency and disclosure reporting across the entire value chain. And I think that's a great first step in terms of educating the regulators on what that initial policymaking could look like.
And then very quickly, Dan, the second area which I believe is really critical in terms of how we can put regulations in place is thinking about impacted communities. Is there a mechanism by which
we can get feedback into our technology and development process from the consumers who are impacted. Just this morning, I was having conversations with a group who are really focusing on disability and accessibility. And as you can imagine,
in scenarios like proctoring, where you're using facial recognition and automated tools to decide whether a student is cheating or not, there are massive unintended consequences for disabled individuals,
you know, in their ability to really show up in the right way in those scenarios. So I think: how can we make sure that not only are we taking those impacted communities front and center as we are building these technologies, but also making sure that when these systems do have unintended consequences, we are able to very quickly course correct for the individuals who are impacted?
Next up is Meredith Broussard, a data scientist, associate professor at NYU, and the acclaimed author of Artificial Unintelligence, a wonderful read. Meredith specializes in algorithmic accountability; in fact, she's something of a pioneer in the field. She uses her expertise to bridge the gap between technology and social justice. In this excerpt, Meredith shares her journey into the intersection of AI and social justice,
and why she believes biases in AI systems aren't glitches, but reflections of deeper societal issues. I got into this space mostly through my interest in social justice and also technology. I am a professor at NYU. I teach something called data journalism, which is the practice of finding stories in numbers and using numbers to tell stories.
And I was doing artificial intelligence for investigative reporting, but I would go to parties and say, I do AI for investigative reporting. And people would say, you mean you'd build robot reporters? And I would say, no, that sounds cool, but that's not exactly what I do. And they would say, all right, well, you mean you build a machine that spits out story ideas? I would say, no, that sounds really cool, but that's not what I do.
And eventually I realized that even though we talk a lot about artificial intelligence, it wasn't really clear what we were talking about when we talked about artificial intelligence. So I started moving my work toward explanatory reporting about artificial intelligence. And as I got more fluent with explaining the technical side of it,
I realized that we also needed to have a conversation about the social side of artificial intelligence. So that's where the idea for my new book, More Than a Glitch, came from. It came from conversations that I had with people kind of trying to think through the social implications of AI and
looking at why every single AI advance is accompanied by some sort of horrific story about race, gender, or ability bias.
So what I'm arguing in the book is that we shouldn't think about these things as glitches, as temporary blips. We should think about them as reflections of larger problems in society that are simply manifesting inside AI systems. I've heard you talk about the really scary, almost dystopian example of...
someone with pale skin putting their hands under a soap dispenser and being allowed to wash their hands,
and someone with darker skin doing the same thing, and the system doesn't allow them to wash their hands. That is a stark reminder of the unfairness that's built into some of these societal norms. What has changed since you published the first book that maybe made you feel there's a need to go beyond Artificial Unintelligence and focus on the second book?
I think that this next book grew out of some of the most intense conversations I had around the first book. One of the things that I've been doing is going around and talking to audiences, talking to different communities about AI in different contexts. And I
kept getting the same kinds of questions, and people in different industries, from education to professional sports, were all kind of grappling with the same kinds of social issues. And I was curious about these things too. So I kind of pulled them all together into a book.
So I do talk a little bit more in this book about the racist soap dispenser, about the viral video you just mentioned. Listeners, if you haven't seen the viral video of the racist soap dispenser, you absolutely should look it up because it's just a really good reminder of why
we can't assume that because technology works for you, it will work for everyone. Now we'll hear from Juliette Powell, founder and managing partner of Kleiner Powell International,
faculty member at NYU's Interactive Telecommunications Program, and co-author with Art Kleiner of The AI Dilemma: Seven Principles for Responsible Technology. Such a great episode. Check out the full thing. In this excerpt, Juliette reflects on the risks and rapid evolution of generative AI, including the potential for self-aware systems and their implications as collaborators, competitors, and maybe even your boss in the future.
Juliette, we talk about various forms of AI dilemmas every week on this podcast. The two of you are experts. I'd love to know something that you learned in the course of doing research for the book, maybe something that didn't end up getting published, something that got left on the cutting room floor that surprised you. When I started having conversations with people within big tech about this calculus of intentional risk,
So why is it that, you know, a company decides to launch as opposed to wait? There are many internal factors around that. And I was very curious to find out how this calculus actually played out in the face of competition, in the face of, you know, potential regulation coming down the pipeline. This idea that, you know, it's the wild west and it's time to make money now before either the competition comes in or that regulation comes in. So all of those different tensions
were really, really interesting to me from both an organizational perspective as well as from a technological perspective. But the thing that is not in the book at all, and that I am keenly aware of, is generative AI specifically. So we do touch upon generative AI, but we don't talk about self-awareness. We don't talk about the day that's coming down the pipeline very, very quickly, probably within 2024 or 2025,
when an LLM will know that it's an LLM.
And at that point, that's a complete game changer. So we've already seen the step change with these technologies now in the hands of the everyday person, and it hasn't even been a year, right? That happened last November with OpenAI. It has been exactly six months since the grandfathers of artificial intelligence and a bunch of other practitioners around the world jumped in and said, hey, we need to take a pause on the research around advanced artificial intelligence, specifically because of this and so many other factors.
And we didn't pause, of course, we accelerated. So again, this idea of self-awareness of these systems and their ability to manipulate us, I think is really important. And it's something that I think we need to talk more about, not necessarily to frighten people, but for our own awareness. If we are to work with these systems hand in hand, in some cases,
they're collaborators or co-creators. In other cases, they're competitors. And in yet other cases, they're bosses. Anyway, there's a lot to unpack there, but these are the things that I think about.
We're going to wrap up today's special episode with Merve Hickok, founder of AIethicist.org and Lighthouse Career Consulting and one of the top 100 most brilliant women in AI ethics. At the forefront of this critical field, Merve is helping shape how AI will impact our lives and work in the decades to come.
In this great excerpt, she explains why organizations must adopt deliberate and thoughtful practices to ensure AI serves everyone fairly and equitably. One of the challenges when we talk about exercising responsible AI with various guests on this podcast, and I mentioned a few of them in the opening, is that everybody who ultimately
is responsible for introducing bias and, for lack of a better word, bad automated decisions can credibly deny culpability. If you ask the developer, do you practice responsible AI? They'd say, well, I just write the algorithm. I don't make the decisions, right? The decisions are made based on the data. And then you go to whoever's responsible for collecting the data and they say, I don't manipulate the data. It's just the data.
And then you go to the organizations that are purchasing these AI-based systems from the vendors and they say, hey, we trust that the vendor is going to be exercising responsible AI. And so everybody's pointing a finger. And ultimately, the one who's harmed is the one whose credit is being denied by an AI-based decision or they're not being hired because of some bias in the data. This is a tricky one, but...
Who is responsible for these poor automated decisions, and how do we address the systematic bias that creeps into this whole, we'll call it the AI value chain? I think you hit the nail on the head
calling it the value chain. I think that's real. We need to look at the whole life cycle of AI development and data, starting from objective or problem definition all the way to retiring a system: who is involved in that whole process, who gets to make decisions,
how do they get to make decisions, and who gets to be involved and have any say in these conversations? You're also very right that there's a lot of finger pointing in this, but also I think organizations are coming to
an understanding that some sort of governance is required, and how that governance looks, I think, very much depends on the maturity of the organization and how much they understand the risks that come with AI, but also the benefits that come from the responsible
governance of AI. A lot of the time there's this myth, unfortunately, about innovation versus regulation or innovation versus governance or innovation versus responsible AI, which for me is a very, very dangerous
and false dichotomy that some people are creating. And we need to question who benefits from that framing. To me, innovation goes hand in hand with responsible AI as well as governance, because otherwise you're really doing lazy development, lazy governance. You're just putting products and services out there without thinking through
deliberate considerations or doing that in-depth thinking. Hey, if you liked this episode, check out the show notes for the full versions of each of these great conversations. And if someone you know would appreciate these insights, go ahead and share this episode with them. Who knows, it might spark a great discussion and lead to you becoming one of the next great leaders defining the future of work with AI.
Until next time, I'm your host, Dan Turchin from AI and the Future of Work. We are back next week with another fascinating guest.