
Technology as a Force for Good: Salesforce’s Paula Goldman

2021/11/30

Me, Myself, and AI

People
Paula Goldman
Topics
Paula Goldman: My role at Salesforce is Chief Ethical and Humane Use Officer. I am responsible for considering the societal impact of our technology as it is developed and deployed, avoiding unintended consequences, and setting policies that prevent misuse of the technology. My background is a long-standing commitment to using technology to improve people's lives, with a particular focus on how technology can serve underserved groups. I firmly believe technology can be a force for good, but it also needs corresponding guardrails and ethical frameworks to ensure it is used responsibly. In AI, I pay particular attention to algorithmic bias and data bias, and I work on technology and processes that minimize these problems. I see AI ethics as an evolving field that requires the combined efforts of technology companies, governments, and civil society organizations.
Sam Ransbotham and Shervin Khodabandeh: Our conversation with Paula Goldman centers on AI ethics, technological responsibility, and responsible innovation. We explore the positive potential and the risks of AI, and how ethical AI can be implemented at scale through combined efforts across technology, process, people, and governance. We also discuss how to cultivate attention to technology ethics in education and hiring, and how to build a company culture that encourages employees to care about these questions.
Sam Ransbotham: As a professor, I focus on how to integrate technology-ethics education into machine learning and AI courses and help students understand technology's impact on society.
Shervin Khodabandeh: I focus on how to embed AI ethics into enterprise practice and ensure AI is used responsibly for the benefit of society.


Chapters
Introduction to the podcast and the importance of moving from AI ethics theory to practice with guest Paula Goldman.

Transcript


Today we're airing an episode produced by our friends at the Modern CTO Podcast, who were kind enough to have me on recently as a guest. We talked about the rise of generative AI, what it means to be successful with technology, and some considerations for leaders to think about as they shepherd technology implementation efforts. Find the Modern CTO Podcast on Apple Podcasts, Spotify, or wherever you get your podcasts. AI ethics are easy to espouse, but hard to do.

How do we move from education and theory into practice? Find out today when we talk with Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce. Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, Professor of Information Systems at Boston College. I'm also the guest editor for the AI and Business Strategy Big Idea Program at MIT Sloan Management Review.

And I'm Shervin Khodabandeh, senior partner with BCG, and I co-lead BCG's AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities across the organization and really transform the way organizations operate.

Today, we're talking with Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce. Paula, thanks for taking the time to talk with us today. Welcome. Thank you. I'm really excited to have this conversation. Hi, Paula. Hi. Our podcast is Me, Myself, and AI. Let's start with a little about your current role at Salesforce. First, what does Salesforce do and what is your role?

Salesforce is a large enterprise technology company. If I had to really summarize what we do, we put out a lot of products that help our customers, which tend to be companies or sometimes nonprofit organizations or whatnot, connect better with their customers, their stakeholders, whether that's from a sales process, service, marketing, you name it.

And data plays a really, really important role, like giving them the tools to understand their customers and what they need and serve them better. Within that, my role, I'm Chief Ethical and Humane Use Officer, which I know is a bit of a mouthful. It's a first-of-its-kind position for Salesforce.

And I work with our technology teams and more broadly across the organization on two things. One is, as we're building technology, thinking about the impact of that technology at scale out in the world, trying to avoid some unintended consequences, trying to tweak things as we're building them to make sure that they have maximum positive impact.

And then secondly, we work on policies that are really about the use of our technology and making sure that we are putting sufficient guardrails in place so that our technology is not abused as it is used out in the world. Tell us about your background. How did you end up in that role? Pretend you're a superhero. What's your origin story here? Well, I guess, you know, I have the short story and the long story. The short story is essentially, I am super passionate about how technology can improve people's lives.

And I spent a long time thinking about the tech for good side of that. Like I worked for a while under Pierre Omidyar, the founder of eBay, working to build an early stage investing practice that was investing in startups that use technology to serve underserved populations, whether that meant like getting, you know, financial services to people that otherwise would be excluded in emerging markets, alternative sources of energy or whatnot.

Having done that for a long time, I think we started to see this shift in the role of technology in society.

For a long time, I think the technology industry viewed itself as a bit of an underdog, a disruptor. And then all of a sudden you could sort of look and see the writing on the wall and technology companies were not only the biggest companies in whatever financial index you want to name, but also technology was so pervasive in every aspect of all of our lives and even more so because of COVID.

And I think we just sort of saw the writing on the wall and saw that the sort of famous adage with great

Oh, I'm going to mess this up. With great power comes great responsibility. Perfect tie-in to my superhero origin question, because that's the Spider-Man quote. Go ahead. It's time to think about guardrails, particularly for emerging technologies like AI, but kind of across the board, how to think about what do these technologies do at scale? And like any industry that goes through a period of maturation, that's where I think tech is.

That's kind of like my motivation around it. And as part of that role, I was leading a tech ethics practice. I was asked to be on Salesforce's Ethical Use Advisory Board. And through that, they asked me to come lead this practice. Paula, your background's generally been quite focused on this topic, right? I mean, even before Salesforce.

That's right. You know, that's a bit of what I was referring to. I spent a lot of time, I mean, even just straight from college, I spent a lot of time on mission-driven startups, often very technology-driven, that were meant to, again, open up opportunity. For example, I spent a lot of time in India working with an affordable private school, but thinking again about how technology opens up opportunity for these students.

Many years later, I won't tell you how many years later, a lot of those students are actually working in technology in the country. Essentially, it's been a through line in my work: the role of technology and markets as a force for good. How do we also implement appropriate guardrails and think about the power of trust in technology, which is ultimately so essential for any company that's putting out a product in the world these days?

You've been in this field for some time as one of the pioneers in this area. How do you think the nature of the dialogue has changed, let's say, from 10 years ago, you know, guardrails and ethical use, versus now? Yeah.

I would say 10 years ago, and let's start with giving credit where credit is due: 10 years ago, certainly there was a ton of leadership in academia thinking about these types of questions. And I think if you would go to a campus like MIT, you'd find a lot of professors teaching classes on this and doing research on this. It has been a longstanding field in

society and technology, science and technology, call it what you will, and many other disciplines. But I don't think it was as widespread of a topic of public conversation. And today you could hardly pick up a newspaper without seeing a headline about some sort of technology implication, whether it's AI or a privacy story or a social media story or whatnot. And certainly it was fairly rare 10 years ago to think about companies hiring people with titles like mine.

When I think about that history, I actually think a lot about the analogy of what the technology industry went through in the '90s with security. That too was a fairly immature field. And you might well, as an observer at that point, have looked at the viruses that were attacking technology previously.

And thought, how could you possibly predict every risk and get a framework for getting ahead of it? But fast-forward to where we are now, and it's a mature discipline. Most companies have teams around this and sets of protocols to make sure that their products don't have these vulnerabilities. And I think we're kind of in the early stages of a similar evolution, especially with AI and AI ethics, where these sorts of norms will become standard. And this is a specialized profession that's developing.

Maybe let's get a little bit more specific here. We've talked about needing guardrails. What kinds of things do people need to be worried about? If we zoom in on AI specifically here, the positive potential impact, and the real impact right now, of that technology is immense, right? So think about

healthcare. One of our research teams is working on AI to spot breast cancer with machine learning. It's called ReceptorNet. That's the tip of the spear. There's so much stuff going on in healthcare that can improve outcomes and save lives. We have a project called

which is using computer vision to look at images of beaches and see if there's a shark there, for a safety warning, right? There are many, many, many applications of AI like that that have huge benefits for humanity. At the same time, with technologies, there are often unintended consequences that come.

AI is really an automation of human intelligence, and it's only as good as the data that it gets fed. And that data is the result of human decisions, which makes it imperfect. That's really important for us to look out for. It's very, very important for companies that are using AI to automate processes or, especially, make decisions that could impact human outcomes, whether that's a

loan or, you know, access to a job or whatnot. I'm sure by now you've heard many times about the research on facial recognition that was done, I think actually partly at MIT, by folks like Joy Buolamwini and Timnit Gebru, showing that facial recognition is more accurate on lighter-skinned people versus darker-skinned people, which can have catastrophic impacts if it's in a criminal justice setting.
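The disparity Paula describes is something you can check for directly by evaluating a model separately on each demographic group instead of reporting a single aggregate number. Below is a minimal sketch of that kind of disaggregated evaluation; the data, the `skin_type` grouping, and the function name are hypothetical and are not drawn from any system discussed in the episode.

```python
# A minimal sketch of disaggregated evaluation: compute accuracy per
# demographic group so that gaps hidden by an aggregate metric become visible.
# All names here (accuracy_by_group, skin_type) are illustrative assumptions.
import pandas as pd

def accuracy_by_group(y_true, y_pred, groups) -> pd.Series:
    """Accuracy computed separately within each demographic group."""
    df = pd.DataFrame({"true": list(y_true), "pred": list(y_pred), "group": list(groups)})
    df["correct"] = df["true"] == df["pred"]
    return df.groupby("group")["correct"].mean()

# Hypothetical usage with a face-analysis model's outputs:
# per_group = accuracy_by_group(labels, predictions, skin_type)
# print(per_group)
# print("accuracy gap:", per_group.max() - per_group.min())
```

The number to watch is usually the gap between the best-served and worst-served group, which a single overall accuracy figure will not show.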

There's a lot of stuff to look out for to make sure, in particular, that the questions of bias are appropriately safeguarded when developing this technology. On that point, clearly there is a strong case to be made to make sure that, as you said, the unintended consequences are understood and mitigated and managed.

And that is only going to get more complex as AI gets smarter and there's more data. Do you think that there is a possibility of AI itself driving or being a contributor to more ethical outcomes or to more equity in certain processes? I mean, there's clearly a case for making sure AI doesn't do something crazy,

And then is it also possible for AI to be used to make sure we humans don't do something crazy? Of course. I think that's the flip side that maybe doesn't get talked about as much. But humans making decisions about who gets a loan or who gets a job are also very subject to bias. So I think there is the potential, done right,

when AI is used in those circumstances in combination with human judgment and appropriate guardrails, for the three of those things together to actually open up more opportunities. And I'm just giving examples of use cases, but I think that's probably across the board. Going back to the healthcare example.

A doctor could be tired the day he's looking at a scan for cancer. And that's why, you know, sometimes we get into these polarized discussions of AI versus humans. And it's not an either or. It's an and. And it's with a set of guardrails and responsibilities.

I wanted to do a follow up on the guardrails and responsibilities. And as you're thinking about ethical AI, either for Salesforce or more broadly for any organization, how much of the effort to do this at scale do you think is about guardrails and education and governance and more visibility and appreciation of the potential risks?

So it's process and people kind of stuff versus technology itself.

Absolutely. And when you think about it, part of the answer to these guardrails is the technology itself. How do you build tools at scale to watch for risk factors? And this is actually something we try to do with our customers, right? So we have integrated AI into a number of applications. For example, we have chatbots for customer service, and we have AI that helps salespeople

with lead and opportunity scoring so that they know which prospect to go after and can spend most of their time going after that prospect. And within that, we've built technological safeguards and prods

Some of it is just the way we build the technology itself, but some of it is actually having the technology prompt the human and say, for example, hey, you're building a model. You have identified that maybe you don't want race as a variable in this model because it can introduce bias. But we see here you have a field that is zip code and zip code can be highly correlated with race. I think the answer to your question is yes.

There's a very human element to this question, but to address it at scale, you actually need to automate the solution as well. And again, it kind of comes back to what we were just saying of combining the technology, the human, the process, the judgment all together to solve the problem.
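The zip-code prompt Paula mentions can be approximated by measuring, before training, how strongly each candidate feature is associated with a protected attribute and warning the model builder about strong proxies. The sketch below uses Cramér's V for categorical association; the function names, the 0.5 threshold, and the `race` and `zip_code` columns are illustrative assumptions rather than Salesforce's actual tooling.

```python
# A minimal sketch of a proxy-variable warning: flag features that are strongly
# associated with a protected attribute, even when that attribute itself is
# excluded from the model. Names and threshold are illustrative assumptions.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Association strength between two categorical columns (0 = none, 1 = perfect)."""
    table = pd.crosstab(x, y)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    r, k = table.shape
    return float(np.sqrt((chi2 / n) / (min(r - 1, k - 1) or 1)))

def flag_proxy_features(df: pd.DataFrame, protected: str, threshold: float = 0.5) -> list[str]:
    """Return names of features that look like proxies for the protected attribute."""
    return [col for col in df.columns
            if col != protected
            and cramers_v(df[col].astype(str), df[protected].astype(str)) >= threshold]

# Hypothetical usage: 'zip_code' may be flagged as strongly associated with 'race'.
# for col in flag_proxy_features(training_df, protected="race"):
#     print(f"Warning: '{col}' is strongly associated with the protected attribute.")
```

As in the example Paula gives, the point of a flag like this is not to drop the field automatically but to prompt the person building the model to make a deliberate decision.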

At Salesforce, we do something called consequence scanning, building off of a methodology put out by a nonprofit in the UK called Doteveryone. And we've kind of customized it for our own use. We're about to put a toolkit out around it. But we work with scrum teams at the beginning of a process. And we say, it's actually kind of simple at the end of the day: hey, what's the best way to do this?

What are you intending to build here? And what might some of the unintended consequences be, positive and negative? And from that, we generate ideas that actually go on the backlog for that team. And that's how you sort of influence the product roadmap.
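As a rough illustration of how a consequence-scanning session's output could feed a team's backlog, here is a hypothetical sketch. The structure, an intended feature plus positive and negative consequences whose mitigations become backlog items, follows the shape Paula describes, but it is not Salesforce's toolkit or Doteveryone's published format.

```python
# A hypothetical sketch of capturing a consequence-scanning session so that
# mitigation ideas land on the team's backlog. Illustrative structure only.
from dataclasses import dataclass, field

@dataclass
class Consequence:
    description: str
    kind: str              # "positive" or "negative"
    mitigation: str = ""   # for negative consequences, the idea that becomes a backlog item

@dataclass
class ConsequenceScan:
    feature: str                                    # what the team intends to build
    consequences: list = field(default_factory=list)

    def backlog_items(self):
        """Turn negative consequences that have mitigations into backlog entries."""
        return [f"{self.feature}: {c.mitigation}"
                for c in self.consequences
                if c.kind == "negative" and c.mitigation]

# Hypothetical usage:
# scan = ConsequenceScan("lead scoring model")
# scan.consequences.append(Consequence("scores may disadvantage newer market segments",
#                                      "negative", "add per-segment score monitoring"))
# print(scan.backlog_items())
```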

It's not foolproof, for sure. And you're not going to predict everything. But it's getting better. It's getting better, as you might expect it to, as you start to see some of these consequences at scale and you start to see the sort of pushback and critique from society. And it's always more complicated than, you know, a sort of black-and-white picture of, you know,

how simple it might be to fix things. It's not simple to fix AI bias, but there's also no excuse for not paying attention to it now because it's such a known problem. Paula, I want to ask you a hard, maybe an unfair question. You commented on the past in this field and how the evolution and strength of technology and tech firms has shifted the dialogue. And you've talked a lot about the present issue.

How do you think the future will be 10 years from now? How do you think this conversation we're having now, which 10 years ago wasn't commonplace, that, hey, let's look at the algorithms, let's look at unfair treatments, let's look at zip codes, whether it's a correlation to poverty levels or race. How do you think the future will be different?

Well, you're catching me on a good day, so I'm optimistic about this. But really, I don't think these are easy things to scale. Not in the beginning, as in any sort of big change in industry. I think all the signs do point to the maturation of this type of work. I will say, especially with more obvious places like AI or technology,

crypto or, you know, things that are kind of in the news because of the questions that come up around them. I will also say there's a lot of regulatory pressure, and some of that, how that will play out, will depend on a lot of very complicated politics. But you see it not only in the US; you see it in Europe, which just released a draft AI law a couple of months ago, and in other jurisdictions as well. Also privacy legislation, social media legislation, you name it.

And those debates are bringing to the forefront what is the responsibility of technology companies versus governments and the role that civil society can play. And I think...

You can pick apart any particular proposal with pros and cons, but the debate itself is very healthy in large part. And I take a lot of hope from that because if you were to ask me what's going to happen with technology companies on their own, I actually don't think it would scale on its own, right? It's part of an ecosystem where...

Companies play a role. Civil society groups play a role. That voice is very, very important. And it's the combination of all those things, which I will say actually has surprised me in terms of how robust it's remained. And the conversation keeps deepening and widening.

Well, okay, this is great. Enough about you; let's focus on me for a minute. I teach a bunch of college students. Let's talk about you, Sam. I teach a bunch of college students. That seems like a way to influence the future. What should we be telling people? I mean, Monday morning, I have a class on machine learning and artificial intelligence.

what can you vicariously tell me to tell them? What should we be talking about? Well, what have you told them? I'm curious. Has it come up? Oh, it comes up all the time. Actually, I feel a little bit negative because I always start, you know, every class, even though it's a technical class, we start with what's happening in the news. And I feel evil because

the news is lots of times either a glorious example of something that could happen in a lab in 30 years or something terrible that's happening right now. Right. And I feel like I come across as the dark side. So what should we be telling college students or people heading into this field?

I'm going to go on a little bit of a tangent and then come back to your question. I think your commentary on the news is really apt, and it does relate to the conversation about technology, because it is very extreme right now. It's all the things that are wrong, which we should talk about for sure, and then all these kind of overstated claims about what will happen in 30 years, while ignoring that there's a lot of just extraordinary benefit right now also happening from technology. And I could go on and on about this, you know, thinking about

What technology did for society during COVID, how it helped businesses stay open and people stay safe and on and on. I think there is a more nuanced conversation that balances the real need for caution and societal engagement. You know, at the end of the day...

When you get this right, technology is an extraordinary force for opening up opportunity for people. And I think that kind of basic thinking about that balance in how we teach up and coming people who may indeed work in technology is very important, too.

I will also say I've been super heartened over the last few years to see the blossoming of curricula around tech ethics, especially in a university setting. And I remember a few years ago when I was at Omidyar Network, we sponsored a sort of challenge for educators and professors to work this into the CS curriculum. We see that blossoming all over the place.

Exactly.

Exactly. You want technologists to see this as part of their job. It's so cool that we're seeing more and more ways that professors are just integrating this into the standard curricula of name your technological discipline.

One of the things that I like to do in class is give people data sets and then say, hey, you know that column? I think I told you it was X. It actually is race. How does that change what you feel about the analysis you just did and how proud you were of the significant results you got based on using that variable? And does it matter, and why? Does it work? Yeah.
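Sam's exercise can be run as a small experiment: fit the same model with and without the column that turns out to be race and see how much the result depended on it. A minimal sketch, assuming a hypothetical dataset and column names, might look like this.

```python
# A minimal classroom-style sketch: compare model accuracy with and without a
# column students were told was "X" but that actually encodes race.
# Dataset and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def compare_with_and_without(df: pd.DataFrame, target: str, revealed: str) -> dict:
    """Accuracy of the same model trained with and without the revealed column."""
    y = df[target]
    results = {}
    for label, cols in [("with_revealed", df.columns.drop(target)),
                        ("without_revealed", df.columns.drop([target, revealed]))]:
        X = pd.get_dummies(df[cols])                       # one-hot encode categoricals
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        results[label] = model.score(X_te, y_te)
    return results

# Hypothetical usage:
# print(compare_with_and_without(students_df, target="approved", revealed="X"))
```

If accuracy barely changes, students still have to ask whether other columns are acting as proxies; if it changes a lot, the "significant result" was leaning directly on the sensitive variable.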

That's a great educational tool. Yeah, I mean, I think it does work because it says, well, hey, you're just treating this as data, but this data actually represents something in the real world that is an attribute of a real human versus an abstract number in row seven and column three. That's another huge topic, right? And another really important thing to set when we're teaching is that data is

most likely a person. And keeping that in mind as we're thinking about what data we're collecting, what our intended use is, whether we really need that data, is super, super important. It's something that we also spend a lot of time thinking about within our overall tech ethics work at Salesforce. So when you're hiring people, what do you want them to know? What kind of skills are people needing? How can you tell that someone's going to use their power with great responsibility, to come back to your prior quote?

This is a great one. It's something that DJ Patil, the former Chief Data Scientist of the United States, talks about a lot too in the context of data science. In the hiring process, so often there will be an exercise that they'll ask someone to do.

Throw in an ethics question, see how they deal with it, see how coherent the answer is. I think those types of little cues, they not only help you evaluate how sophisticated someone's thinking is about those questions, but those cultural cues cannot be underestimated. And we think about that a lot as well. How do we create a culture in which everyone feels like it's their responsibility to think about these questions and

And part of that is about giving them the tools that they need to do it and the incentives that they need to do it, which includes having this be echoed all around by leadership. It's like, how often are you running across these questions, whether it's in an interview process or in an all-hands or in, you know, your one-on-one with your manager? Yeah.

It matters a lot, which brings me back to why I'm so excited that this is starting to really flourish in university settings, in the teaching itself, you know, in the core curricula. Those cues matter a lot.

Paula, this is all quite fascinating. I think, you know, if we come back to what Shervin kind of instigated at the beginning, it was how technology can affect society and the organization as a whole, not so much how we use technology, but how these technologies affect us and affect society. And it's notable that those effects can be positive and value-building

or negative. And it just depends on how we make those choices to use it. Thank you for taking the time to talk with us. I've really enjoyed it. Yeah, thank you so much. Very insightful. It was super fun. Thank you for having me. Next time, we speak with Ranjeet Banerjee, the CEO of Cold Chain Technologies. Please tune in.

Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for listeners like you. It's called AI for Leaders. And if you join us, you can chat with show creators and hosts, ask your own questions, share your insights,

and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We'll put that link in the show notes, and we hope to see you there.