
Latanya Sweeney on AI, trust, and privacy

2025/2/26

Possible

Topics
Latanya Sweeney: I have devoted my life to bringing the benefits of technology to society while avoiding its harms. In the era of generative AI, the question of trust is paramount. We need to rethink our trust in online content, in data sharing, and in algorithms. From my great-grandfather's experience surviving the segregated South, I learned the importance of anonymity, which matters even more in today's era of data transparency. My Weld experiment showed that seemingly anonymous data can be re-identified through simple combinations of information, and it changed the laws and regulations around data sharing. We are living through the third industrial revolution: AI is rapidly changing our lives, but we lack the time to adapt. We live in a technocracy, where the design of technology dictates how we live and where laws and regulations can only be enforced as well as technology allows. We need to balance democracy, the republic, and capitalism to meet the challenges AI brings. Building trust in AI requires deep work, not just surface measures. AI will always find ways to manipulate us, and we need to re-evaluate our trust in online content. Content moderation is a computer science problem that needs more effective solutions, not abandonment. For problems like content moderation, we should set metrics rather than prescriptive rules, which encourages companies to innovate and take social responsibility. Holding tech companies accountable to society takes a mix of measures, including legal liability, metrics, and public shaming. Platforms like LinkedIn need to avoid algorithmic discrimination and ensure fairness when recommending candidates. AI can transform education, but it also forces us to rethink how students learn and master skills. My students give me a great deal of hope; they see the challenges of the AI era with much clearer eyes.

Reid Hoffman: We explore the enormous potential of generative AI and the challenges it brings, especially around trust and social responsibility. We look at how to use AI to shape a better future, how to uphold democratic values through technology, and how to address the ethical and social questions AI raises.

Aria Finger: We talk with Professor Latanya Sweeney about AI's impact on trust and privacy and how to balance AI's benefits against its potential harms. We also discuss technology literacy, content moderation, and AI's impact on education.


I go online to Amazon and I want to buy something and I'm looking at the reviews. Did humans write those reviews? Or did generative AI write those reviews? That's a new kind of vulnerability we have. And the issue of trust in this rise of generative AI is absolutely huge. We have to redo everything.

Hi, I'm Reid Hoffman. And I'm Aria Finger. We want to know how together we can use technology like AI to help us shape the best possible future. We ask technologists, ambitious builders, and deep thinkers to help us sketch out the brightest version of the future, and we learn what it'll take to get there. This is Possible.

Imagine a world where nearly every footprint, every click, and every data point collected about us is protected and harnessed exclusively for the common good. As we know, that's far from the reality we live in right now. But it doesn't have to be.

Today's digital landscape is filled with unprecedented opportunities and profound challenges, from discrimination in algorithms to threats against democracy itself. And with AI evolving at an extraordinary pace, it's more important than ever to keep asking ourselves, how can we steer this powerful technology towards serving the public interest? How do we create frameworks that protect individuals' privacy while fostering innovation?

Our guest today is someone who's been working towards these answers for decades. Latanya Sweeney is a professor of government and technology at Harvard, the former chief technology officer at the U.S. Federal Trade Commission, and founder of Harvard's Public Interest Tech Lab and Data Privacy Lab. Her pioneering work is cited in major regulations, including HIPAA. She was also the first Black woman to earn a computer science PhD from MIT.

Latanya's research doesn't just ask hard questions. It presents solutions. She focuses on shaping a world where technology serves rather than exploits society. And her insights on AI's ethical challenges, including privacy, discrimination, and the future of democracy, couldn't be more timely. Here's our conversation with Latanya Sweeney.

So, Latanya, you talked to my friend Krista Tippett on the On Being podcast about your experience being raised by your great-grandparents in Nashville and how, as a young girl, you found solace in math and finding the right answer. What's something you learned from your great-grandparents, who were born in the late 19th century, that you still find helpful today?

Oh my gosh, where would I possibly start? I think in my own work, I was called back to think about my great-grandfather many times in the early years of doing work because, in fact, he lived most of his life in the Jim Crow South. And as a Black man at that time, he had a lot of principles about how do you survive. And when you look at his principles of survival, they all came down to ways of having anonymity.

and how well it had served him. And, you know, there's the reality that, as technology was changing our lives, it was making us all live these sort of transparent lives, with every minute of our lives captured in data somewhere. I often think about the inability to have that kind of anonymity, and how, if things change culturally around you, it can be turned against you if you don't have it.

I mean, that is so fascinating. And I mean, one thing I just love about, truly about your whole being is that you are such a positive sort of light and yet grappling with these tough issues. And certainly I'm sure you made your great grandparents proud with, you know, your many, many honors. And so back when you were a grad student at MIT, you were studying technology and you overheard someone say computers are evil. Can you say more about that experience?

Well, I mean, to appreciate that experience, you'd have to roll your mind back to a time where, you know, as a graduate student in computer science, you sort of saw this technology revolution coming. You knew it was going to change everything in our lives. And of course, we believed it had nothing but upside.

You know, computers were cheaper. You know, it would not have its own bias. It would actually just lead us to a better democracy and a better tomorrow. It's sort of believing it was going to right all the wrongs of society. And so when somebody comes in and says computers were evil, I mean, clearly she did not actually understand the beautiful utopia awaiting us. And I had to definitely stop and take some time so she could understand better what was going on.

Well, I think pretty quickly you realized that while, you know, on the Possible podcast, we're typically techno-optimists, not everything is 100% perfect. Similarly, can you tell us how that encounter led to your Weld experiment, which I read about and is pretty impressive? Yeah.

Yeah. So she was an ethicist and she and I are talking back and forth high in the sky. But, you know, there's a part of me that's still kind of the engineer. Let's get pragmatic. Let's get a concrete example. And so when we had a concrete example, she focused on a data sharing that had happened here in Massachusetts with the Group Insurance Commission. This is the group responsible for

for the healthcare of state employees and their families, as well as retirees. And she had said, look, they've gotten this data, they've given this data to researchers and they've sold a copy to industry. And I talked about all the amazing things that could come from such a data sharing, that we could do retrospective research. We might find ways to cut costs and that actually being able to share health data like that is important and incredibly useful. And most of the data that we've

Much of her conversation had been about how technology was changing the social contracts, that the expectations we have and our laws have certain assumptions built into them that aren't explicit. And so in this particular data sharing example, the question was, this is all great if it's anonymous.

We have that rule that they can sell it, they can share it. It had no names, it had no addresses, it had no social security numbers. Like, it's fine to share it if it's anonymous. But that data did have basic demographics, month, day, and year of birth, gender, and five-digit zip code.

And so I'm sitting, I'm doing a quick calculation. You know, there's 365 days in a year. Let's say people live 100 years. There are two genders in the data set. When you multiply that out, it's 73,000 possible combinations. But I happened to know from earlier work that the typical five-digit zip code in Massachusetts only had about 25,000 people. That meant that that data would tend to be unique for most people in Massachusetts.
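
A quick way to sanity-check the arithmetic she describes, as a hypothetical back-of-the-envelope sketch (the figures are the ones quoted above, not a reconstruction of her actual analysis):

```python
# Rough uniqueness argument: possible (date of birth, gender) profiles
# versus the number of people sharing a typical Massachusetts zip code.
profiles = 365 * 100 * 2        # ~73,000 possible combinations
people_per_zip = 25_000         # approximate population of a typical 5-digit zip

print(profiles)                     # 73000
print(people_per_zip / profiles)    # ~0.34 people per profile on average
# With far more possible profiles than residents, most (DOB, gender, zip)
# combinations belong to at most one person, so "anonymous" records tend to be unique.
```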

And so my argument is disappearing right before my face. I'm trying to convince her. And I'm like, wait a second, that's not right. That's not helpful. So I wanted to see if it was true. William Weld was the governor of Massachusetts at that time. And he had collapsed in public. And not a lot was known about why he had collapsed. But information about that visit to the hospital was, in fact, in that data.

So I went up to the city hall. He lives here in Cambridge, not very far from where I am now, actually. And I bought the Cambridge voter list. It came on two floppy diskettes. I just have to tell you, my students have no idea what a floppy diskette is. But anyway, I got the voter data on two five-and-a-quarter-inch floppy disks. And it had the same demographics as the health data: month, day, and year of birth. And it had gender. It had zip code. And of course, it had various voter information.

I got it because William Weld lived in Cambridge, and we assumed he voted, that he was on the voter roll. And sure enough, he was. In fact, only six people had his date of birth. Only three of them were men, and he was the only one in his five-digit zip code.

That meant that that combination of date of birth, gender, and zip code was unique for him in the voter data and unique for him in the health data, and that I could use those fields to link his name and identity to his voter record.
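
The re-identification she describes is essentially a database join on the shared demographic fields. A minimal illustrative sketch with made-up records (the field names and values below are hypothetical stand-ins, not the actual GIC or Cambridge files):

```python
import pandas as pd

# Hypothetical "anonymous" health records: no names, but quasi-identifiers remain.
health = pd.DataFrame([
    {"dob": "1950-06-15", "gender": "M", "zip": "02138", "diagnosis": "..."},
    {"dob": "1961-02-03", "gender": "F", "zip": "02139", "diagnosis": "..."},
])

# Hypothetical voter list: the same quasi-identifiers, plus names.
voters = pd.DataFrame([
    {"dob": "1950-06-15", "gender": "M", "zip": "02138", "name": "John Q. Voter"},
    {"dob": "1978-11-30", "gender": "F", "zip": "02138", "name": "Jane Q. Voter"},
])

# Linking on date of birth, gender, and zip code re-attaches a name to the
# "anonymous" record whenever that combination is unique in both files.
linked = health.merge(voters, on=["dob", "gender", "zip"])
print(linked[["name", "diagnosis"]])
```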

And it was a very anecdotal experiment. But how did it generalize? I mean, Cambridge, Massachusetts is home to MIT and Harvard. It's only six square miles. So if he had been 20 years old, it seemed like it would have been a lot harder. And so the question is, like, how does this scale? And so, using 1990 census population data, I

was able to model and show that 87% of the U.S. population was unique by date of birth, gender, and zip code. I go from being a graduate student to, a month and a half later, being down in D.C. testifying, because it wasn't one piece of data that was shared that way. That was the best practice around the world. And around the world, the laws changed, many of them citing the Weld experiment, as it became known,

because they had to change the way they shared data. Here in the United States, that regulation was HIPAA, and in the original preamble to the HIPAA privacy rule, it cites that experiment as well.

And what I love about that story is that not only did you change your mind, which is so rare, you were arguing against yourself, but then, you know, you actually used data to prove what you were trying to prove and it actually changed the law. So that is awesome. Yeah. And who would have thought that one of the places where Governor Weld would be

enshrined in history is a data experiment, but it is awesome. So let's now even go further back because we share an interest in the history of technology, the industrial revolutions, printing press, semiconductors. Could you share your three-part framework to explain the historical arc of AI and bring us up to speed on where we are now?

Whoa. In five minutes. In five minutes, right. Or less. You know, I tend to think of the times we live in right now. I share this view with many historians that we are living through the third industrial revolution. And, you know, you were talking about the second industrial revolution, how powerful that revolution really was. It gave us electricity and cars and Kleenex and pretty much everything that we take just for granted.

But it literally moved our society from an agricultural one to one of cities and so forth. And it required us to change everything, how we think and how we operate as a society.

As for the third industrial revolution, many people put its start date in the 1950s with semiconductors. And then from there it goes: mainframes to minicomputers, personal computers. Then we get the Internet of Things. And all of these are revolutions within the revolution, including AI now, which is certainly a new revolution. We have no idea when this is going to end, but it has already transformed everything in our lives: how we communicate, how we get our news, how we work, how we play, everything.

And so it's been quite altering. It's happening so much faster than the second industrial revolution that we don't have time as a society to sort of regroup and figure out how do we keep the goods of the society and still maintain our societal norms. I think you covered exactly the sort of thing, but it's also like, what are some of the key considerations we should be thinking about as we navigate into the future?

I mean, this is an amazing transformative moment on so many levels.

You know, I often tell my students we live in a technocracy, that we don't live in a democracy anymore. Technocracy is another word that came out of the second industrial revolution. What it meant back then was, hey, we've got this new kind of society. We need people who are experts in economics to run our economy. We need experts in law to be lawyers in certain positions. And so this idea that you needed a skilled expert in certain places in government in order to navigate us forward

is where technocracy gets its meaning. The technocracy we live in right now is one where the design of the technology dictates how we live.

Our laws and regulations are only as good as technology allows them to be implemented and enforced. And the fact that so many of our laws and regulations can't be enforced online is a real problem. And as more of our lives are spent online, it comes down to arbitrary design decisions made by people we don't even know. So right now, it's already changed significantly. I mean, even something as simple as free speech

I have a teenage son. His notion of free speech has everything to do with what most people who spend a lot of time online think free speech is, and it has very little in common with the American jurisprudence notion of free speech. And the idea that free speech is supposed to protect the voice of the underdog, to let the person who can't otherwise be heard be listened to,

is not something that, that's not how free speech works online. It's much more, I can say whatever I want in your face. And if you don't like it, hey, that's free speech, just stomach it, right? And so those are radically different. If you ask my class or you ask a 20-year-old, what is free speech? It's scary because more likely what I found here on campus is more often they'll give you the definition that's aligned with the online rule.

You know, we've all seen those videos of Mark Zuckerberg talking to Congress or senators and them not fundamentally understanding our world of technology, Facebook, social media. And one of the hopes that so many of us talk about is to have people in government who really do understand technology to, you know, create the laws that we need to govern this new online society.

Some might argue that those are the people that we have in charge today. I might argue something different. But you're saying the government is not governing this online space the way we need it to. What would you do if you were in charge? What are the laws that we need to govern this new world?

Well, I think the way to think about it, the way I often talk to my students about it, you know, how do we decide our rules? How do we decide? Forget technology for a moment. How do we decide how we're going to live? And so eventually the students will say, or if you ask a middle schooler what kind of country are we, they'll say we're a democracy, you know, and we elect people, we make decisions by our vote.

And so that's true. And then if you ask high schoolers and coming into college, what else are we? Someone's going to stick out their chest and say, we're a republic. We don't actually make the decisions ourselves. We vote for people who they make decisions on our behalf because we're a democratic republic.

And then I would argue that we're also a capitalist society. And having run a computer company for 10 years, there are many amazing things that are possible on that capitalist side. But it's also important to see that how the design of the technology makes new rules that we will live by is a third factor. And what historically has been an American strength is the checks and balances of these three arms.

Any one of those on their own will take us to a place that we probably don't want to go. And when they're out of sync, when so much of our decision making about how we're going to live our lives is determined by technology design, and the regulators and others who would normally provide that check and balance are unable to do so,

then we are sort of in a kind of free fall. Part of it is knowledge of the technology, having those who would be regulators and lawmakers better understand the technology, but also understanding what its relationship is to governance, what its relationship is to our society. It's been going on so long. I mean, in so many ways, privacy was sort of the first big technology society clash ever.

and sort of security, and then followed by algorithmic fairness. And then come, you know, these issues around democracy and so forth, or content moderation as well. And none of these clashes have ever been resolved, not a single one of them. Like, it just keeps building and building. And so we're in a dangerous situation. What may have worked 10 years ago when I was at the Federal Trade Commission is not a formula for success today.

No, it's exactly right. Let's go a little bit into the staffing. There's a whole wide variety of questions. And I agree with you that there's technology as it forms, there's the interest of capitalism, there's the interest of democracy, and there are actually probably even others. And one of the most central things is how do we get, you know, kind of the instruments of government to be at least in the category of understanding. And one of the

problems, of course, is I think there are very natural and essential reasons why the generative AI revolution is being driven by, you know, corporations, both the hyperscalers and the other ones. But, you know, that's where the most fundamental knowledge of how this technology is evolving, and what's being built into it, is. And I find that once you get out of that group, it drops substantially. Not even when it gets to...

Right. So do you have thoughts on kind of what do we do? So generative AI is a huge leap forward. And I do think many people in government realize they sort of missed the boat in this revolution and they want to get in front of generative AI.

But the problem is they can't get completely in front of it. If they really did get in front of it, they'd slow it all down. That would have other ramifications. I mean, the goal is how do we get the benefits of new technologies without the harms? So if you rush to pass laws, you're likely to pass laws that could, in fact, prohibit the best ways it could grow forward. On the other hand,

Many companies want you desperately to pass laws. Why? Because they want a get-out-of-jail-free card. They don't know the answer either. They know, they can see that it's clashing. They don't know what the right answer is. But if they got a law that says, well, whatever it does, it's OK, or we don't have to be held responsible, that

gives them a get-out-of-jail-free card. So one has to shore up the government side. In particular, what are the societal norms that have to be maintained? Which of them is this technology, or this particular manifestation of the technology in a particular product, challenging or sending off the rails? What needs to be there to contain it? We want the benefits of it, but we just don't want these harms. So identifying the harms

And then addressing the harms. The other way policy goes wrong is it's too prescriptive. It says, oh, it must do it this way or it has to do this. It has to be the opposite. I think it's about setting goals. You know, one of the beautiful things about technology and innovation is, if you set the goals and the guardrails, let brilliance take its form.

If you try to hold brilliance and say, I'm going to hold the pen and you have to do it this way, we're never going to get to the best places.

So in 2003, you wrote a paper, "That's AI?: A History and Critique of the Field." And in that paper, you described a fundamental division in AI between those who prioritize ideal behavior based on logic and probability and those who see human behavior and psychology as crucial for sound reasoning. Do you think that division still exists? Does it inform the research in the field? Or have you changed your opinion?

Well, there you go. So go back to when I was a graduate student. So I was a graduate student in AI.

And back then, the belief was the way AI would work is us as humans had to figure out what was an intelligent thing, action, behavior. And then us as humans design a way for this stupid machine to do this intelligent thing. And our brilliance was our ability to translate the intelligent action into this machine. Yeah.

And so that was expert systems. Today, the best model of that is TurboTax. You know, no matter who you are, the machine will guide you through completing your tax return and so forth. Right.

And then there was a bunch of, just a couple of graduate students over in a corner talking about statistics and doing these neural net things. And the best they could do is, if I gave them a lot of data, they could draw one line of discrimination, right? So the data had to bifurcate some field. It had to bifurcate into two groups, and then they would be able to

have it plot a line for us that says, look what it can do. If the data were more complicated than that, it couldn't go any further. This was not good. On the other hand, I spent a month of my life trying to get a computer to know the difference between a picture of a dog and a picture of a cat.

So back then, that notion of building intelligence into computers, something humans would have to translate into machine, was really different than this idea of just having statistics that would figure things out just based on properties of the data itself.

And by the time I'm writing that paper, there are two major camps happening, and they're sort of feuding with each other over which of these camps is going to win. And what was fueling that feud wasn't that the techniques had gotten any better on the statistical side as much as that computers had gotten so much faster.

And data, the ability to store data was so much larger. And who knew we were going to be capturing so much data on our daily lives? Who knew we were going to be spending so much time on a keyboard writing email, writing documents, doing all this writing that a computer, that an algorithm could then begin to apply statistics to?

And I mean, not to take away from Transformers, there were certainly very significant advances along the way. But the biggest change was that it's still very statistical in nature. And it's mind blowing how good it is. Mind blowing. I mean, it's just really, yeah, it blows my mind. I'm like, I'm constantly amazed, even very early forms of LLMs.

I teach students how to spot these clashes, do experiments to shed light on them, kind of like the Weld experiment. We just do it at scale with students. So one of my students was interested in federal comment servers. You know, one of the ways we change regulations is an agency will announce,

We're getting ready to change a regulation. We're opening up for public comment. In the old days, you would just go down to D.C. and just start yelling in Congress and so forth inside of the Senate halls. And then they moved it online. So you provide your written comment. So there had been an example where it had gone haywire, when someone took a bot and it would just sort of mass-produce the same content, just changing it up by randomly choosing between certain sentences.

And afterwards, people figured that out. But even in the babiest, earliest versions of LLMs, what Max did was he had it learn on some old content of what people had said about this topic. And it was writing original responses. And each one was original. And any way that you think you could statistically identify it from what a human wrote wasn't really feasible.

So to prove our point, we ran tests on one of the online survey tools and people couldn't do better than guessing which one was from it. And then we submitted a thousand made up comments into the federal comment server. We let it percolate through and then we notified the federal government that, by the way, we just put these one thousand. So please take those out when you're assessing. But by the way, did you figure it out? And they were like, no, we had no idea.
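
"Couldn't do better than guessing" is itself a testable claim. Here is a hedged sketch of the kind of check one might run, with invented numbers rather than the study's actual results:

```python
from scipy.stats import binomtest

# Hypothetical results: raters labeled 200 comments as human- or AI-written
# and got 104 correct. Pure guessing would average about 50% accuracy.
correct, total = 104, 200
test = binomtest(correct, total, p=0.5, alternative="greater")
print(correct / total, test.pvalue)
# A large p-value means the observed accuracy is consistent with chance,
# i.e., raters could not reliably tell the AI-written comments apart.
```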

So with AI, there's a funny edge to it. There are things it does really well and things it still is really poor at. Well, this federal comment server actually gives a very good lead-in to one of the questions I want to ask you, which is diving into a more philosophical thread about AI mimicking human behavior.

You've written that the desire to construct a machine in the image of a human will not die, comparing this pursuit to artistic expressions like in films and paintings of humanity's search for immortality and self-reflection. What do you think is the deeper human yearning behind what you call the cultural dream of AI?

That will never die. I mean, we as humans, we just always, whether it's an art or whether it's a poem, we're looking for other manifestations of ourselves.

And that is also true in machines. And it just, it's never going to go away. AI will definitely be a part of it. The current trend in generative AI is certainly going to help move us in that direction more significantly. I mean, even down to just robots. You know, I have a Spot around here somewhere, the robotic dog, because that is just a magnificent piece of technology. And, yeah,

You know, we personify them. Almost any human will start saying "he" right when they see Spot. And we have Pepper downstairs, and people will say "she." You know, you just can't help it. It's a way of recreating ourselves.

On this podcast, we like to focus on what's possible with AI because we know it's the key to the next era of growth. A truth well understood by Stripe, makers of Stripe Billing, the go-to monetization solution for AI companies. Stripe knows that when launching a new product, your revenue model can be just as important as the product itself.

In fact, every single one of the Forbes top 50 AI companies that has a product on the market today uses Stripe to monetize it. See what Stripe can do for your business at stripe.com.

And so we've talked about the great promise of generative AI. And like you just said, it's impossible not to treat it as a human because it's so good and gets so close to it. But you've also said that a big challenge is answering the question around truth. Like, how do we build trust at scale? And so I would love to hear from you. Like, what do you think it will take? And how do innovators look for these solutions around trust in AI?

So people are already trying to answer that question, right? You know, the filters on these LLMs, I'll just use them as an example. People are coming up with all kinds of band-aid approaches to try to build trust. So people will probably still keep trying to come up with band-aids. To Reid's point, people who actually know how this technology works will do something deeper than a filter on the outside in order to try to help us better understand trust.

There's a part of generative AI that will always be able to manipulate us. And I'll give you an example as to why I say that. So I was the chief technology officer at the Federal Trade Commission. And one of the things that I learned there is what will make people turn over their life savings to a stranger or get their parents to not only they turn over their life savings, but their parents' life savings and their children's life savings too.

And it has to do online when you are in community with a small group that shares a lot of the intersections with you. The more intersections that group shares with you, the more you trust them. Generative AI has the ability to build trust just between the AI and myself, right? That it understands me. And as a human, we can...

We can keep it at a distance by saying, these are the only tasks I'm using you for, dude. You stay over there. Listen, I will just ask you for these things. But the minute you find yourself in conversation with it, and you may not even always know that you are, it's going to be huge. How do you build trust? Because you will definitely trust it in a way that you might not trust many humans.

Right. That's the flip side. Too much trust. Yeah, that's a new kind of vulnerability we have.

And the issue of trust in this rise of generative AI is absolutely huge. It is the right question because there are just so many aspects of that trust. We have to redo everything now. I go online to Amazon and I want to buy something and I'm looking at the reviews. Now, did humans write those reviews or did a generative AI write those reviews? Or by the way, did humans who are paid by a corporation write those reviews? I mean, there's a whole stack here. Right, right.

With an AI. Yes. Right. A human says, write 10 versions of this. So all of a sudden, you know, things that I trusted before, like we can start making a list of what are the things I can't trust anymore. Right. And then online. So we got these amazing LLMs because in fact it reviewed all the stuff we put online that's publicly available and even semi-publicly available.

Most of that was human generated. Almost all of it was human generated. But because it's so good at regurgitating and remixing and regurgitating back the remix version and so forth, you know, in a couple of years, most of the online content is going to be AI generated. And then all of a sudden it changes our notion of trust of online content.

Well, speaking on the trust side, this actually gets to another of the questions that I was looking to ask you. And it's funny, because I asked Pi, the chatbot from Inflection, the AI company that I co-founded:

What key element should be incorporated into a new social contract that ensures technology upholds democratic values? Because I knew we were talking to you, so I started with Pi and got there. And Pi said... Transparency. Clear and open communication about how technology is being used, who is using it, and for what purposes is essential. Accountability. Those who develop and use technology should be held responsible for its impacts on society.

Equity. Technology should be accessible to all members of society, regardless of their social or economic status. Privacy. Individuals should have control over their personal data and how it is used. Human-centered design. Technology should be designed with the needs and values of humans in mind. I love that answer. I wonder who it stole it from. Yes, exactly. Might have trained on someone that we're talking to, possibly.

So I love that answer. You know, the thing that I find funny is, and then if I ask Pi, all right, Pi, exactly how did you come to know this? It will probably answer with, well, I know. It'll give you a nice full-throated answer. And of course, it's not being transparent or honest. It's doing the best it can. As you know, these LLMs, the basic thing is they're generative. Yeah. Which you could say is great at

essentially imagining or hallucinating, but they're trying to be constrained to answers that it thinks you want to hear, which we're trying to train it in ways of saying, yes, truthful, helpful, right, is how we're trying to constrain that set of capabilities. You know, truthful doesn't mean the same as what we mean, to the point that I'm continuing to personify now Pi.

There's a kind of communication that's missing each other. What we're after for truth is debatable sometimes. Its notion of truth is totally different. You know, it's like, I'm giving you the statistical relationship between these words. What more can I do for you? You want more statistical relationships? I can take that word out and I will compute something else for you.

Yep.

And I am so grateful and thankful that Lyft exists and that technology helps me ride a bike instead of drive a car or whatever it might be. And you are one of the, you know, the pioneers of public interest technology. Can you talk about, you know, your favorite real world examples of where you saw technology successfully implemented, you know, to address a particular social change or where you think public interest technology sort of did it best?

So in the spirit of the Weld experiment, I teach students how to do this, and then they do this at scale. And the work that they've done has gone on to change laws and regulation and business practices. And one of my favorite examples of that is, in fact, Airbnb. A professor here at Harvard had shown that for hosts on Airbnb in New York City, if a Black host and a white host were offering comparable properties, the Black host made about 12% less,

because that was what the market would bear, if you will. And so students wanted to do a more rigorous example. They chose Oakland and Berkeley, California, and they were able to show that Asian hosts made 20% less than white hosts for comparable properties. So Airbnb, of course, sends their attorneys forward and so forth, and then they changed their platform. So if you're on Airbnb today, they set the price.

And part of setting the price is to make sure that that side effect doesn't happen. So many kudos for them. We've done a lot of these kinds of experiments where we've been able to find places where a societal norm is conflicting with technology. How do we keep the technology? How do we fix the norm? That would be an example.
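
The comparison the students describe can be illustrated with a simple grouped summary: average price by host group within otherwise comparable listings. A minimal sketch with invented data (the column names, groups, and prices are hypothetical, and this is not their actual methodology):

```python
import pandas as pd

# Hypothetical listings: nightly price plus a feature used to judge comparability.
listings = pd.DataFrame([
    {"host_group": "white", "bedrooms": 1, "price": 120},
    {"host_group": "asian", "bedrooms": 1, "price": 96},
    {"host_group": "white", "bedrooms": 2, "price": 180},
    {"host_group": "asian", "bedrooms": 2, "price": 144},
])

# Average price by host group within comparable properties (same bedroom count),
# then the percentage gap between groups.
table = listings.groupby(["bedrooms", "host_group"])["price"].mean().unstack()
table["pct_gap"] = (table["white"] - table["asian"]) / table["white"]
print(table)
```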

You mentioned bikes. I just want to say one of my students. So bicycle sharing had the problem that when you would go to a station, there might not be any bicycles there.

Or when you go to a station, there's no place to put your bicycle. So one of my students, Dhruv, came up with a fantastic algorithm to solve that problem. And it's been used in cities around the country. So anyway, I'm just telling you, there's a zillion low-hanging problems to which students can address. And that's part of what we do in public interest tech as well.

I love it. And I think it's so important because it's something I struggle with because, you know, the underlying technology might not be racist or sexist or whatever. And there's no ill intentions from the technologist. But when it bumps up against real world discrimination, it creates this outcome. And so I think we need to get to a place where we're not accusing anyone of anything. We're just saying, like, now that we know that

we have a duty to fix it. Now that we know we can actually use technology to solve the problem and like how magical that is because you can study it and because you have the data, you can see, oh, well, Airbnb made this simple change and all of a sudden black and white hosts are actually getting the same amount of money. What a beautiful outcome. So I love that in particular. Thank you.

And this takes us back to the earlier part of our discussion around these three pillars of democracy, republic, and capitalism. So when they're out of sync. We got the leaked Facebook documents that Frances Haugen produced, and we made them public on fbarchive.org. But one of the things that jumped out at me right away from them was how poorly, how badly they do content moderation.

The second thing that jumped out at me is, and we don't know how to do it any better. In other words, this is clearly a computer science problem that no computer science school or group of thinking has really started to address. Why? Because it's inside of the silo of a capitalist company who has a fiduciary responsibility to their shareholders. And so they don't want their bottom line affected.

But somehow we've got to find a way that it's okay to say, I don't know how to do this. Here's money or whatever. Can we get a thousand great minds to work on how do we do content moderation better, for example? Instead, we're moving towards, we're not going to do content moderation. That's not okay. One of the things that I've been trying to kind of get to is, we'd like you to be solving this problem better

And here is like what the metrics might be if you were iterating to it from within, you know, your company. So, for example, because your attention algorithms are increasing agitation, you know, hatred, other kinds of things,

We'd like you to baseline kind of like what is kind of a reasonable agitation metric. And then I want you to be measuring your dashboards of how your algorithms are working and promoting and making sure you're not overly increasing it. I mean, obviously, some is part of the natural human condition. That's one. How would you think about for content moderation, misinformation, fake news, other kinds of things? Like, how would you think about

trying to kind of facilitate the, hey, if you were doing X, Y, and Z, inventing technology for it, because, you know, to have it be an invention loop, a reasonable economics loop, what would be the gestures you'd make?

So, Reid, the word I love the most of all the words you said was metric. Set a bar and let's see how you can get to this bar. Because on the one hand, if you're the company, you have a fiduciary responsibility. You're trying hard not to screw up your money machine. And I mean that in an affectionate way, not as some kind of hypercritical complaint.

But I mean, that's what they are supposed to be doing. Right. So they don't want to mess that up. So they're not quick to want to come forward with how the sausage is made and where the sausage problems might be. But on the other hand, when the sausage is having a problem, that's where that problem is causing a societal harm.

I don't want to come in and say you have to do X, Y, Z. When government does that, it's not actually working at its best and it's not actually good because they can miss it either way and society doesn't get the best benefit of the technology or the society may get even worse problems.

And the company has a sort of carbage to do so. So instead, it's much better to set metrics. Metrics that say, oh, wait, we didn't realize teenage girls were having this problem. Now that we know this, this is what you've got to guarantee. Or this is the promise that you have to make. Or this is the minimum. And you need to show us that you're doing that.

The companies, though, unfortunately, they don't like metrics, because they'd much rather have a rule, right? They'll tell you, well, if we just put a notice, is that good enough? So they want an out-of-bound solution because that's easier for them. So a metric is painful for them, too. Yeah, I think that may be foolish, because the problem is, if we're giving them rules, what we really need is metric outcomes, right?

the rules are going to be so crude as to be very damaging to all kinds of outcomes. It's much better to say, apply your innovation. For example, you say, look,

content moderation, a publishing-house approach would say, you must hire tons and tons of editors to do all this content moderation. That's just like, oh my God, that's not going to work. And that's part of the reason why they fight so hard on this topic. Where you say, well, actually, these are the metrics that we're trying to look for in content moderation. Can you innovate on your technology

to be within these metrics. And obviously we might move and modify them, as long as you're dynamic over time and kind of improving them, which would be part of good governance on this. And they don't have to be necessarily even publicly revealed. They could be measured through auditors and discussions with governmental agencies. This would be a far healthier dynamic position to be in, which is one of the reasons why this is exactly what I try to tell government regulatory people. This is the kind of thing that we should be talking about.

Yeah, but think about it this way. We've been talking about generative AI. If content moderation is just a set of practices, I have a checklist of things I do. I make sure somebody saw this. I make sure my counts are this or that.

then we still get bad content moderation. If content moderation is a metric and it's cheaper for me to do it by training an AI to do it, all of a sudden we get innovation right at the place where we need it the most. Not innovation for content moderation coming from the outside. As you point out, they know their technology and their company functioning better than anyone else, but to incentivize them to spend a resource on that.

is really important. You know, I agree that if we have the government making these regulations, then the key is to have that metric that a company can strive for however they like. And to your point, maybe they come up with a new, you know, a new chatbot that does it for them. We want it to be as cheap as possible to comply with these regulations.
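
One concrete way to read "set a metric" from the exchange above: regulators agree on a measurable bar, and the platform demonstrates, however it likes, that it stays under it. A hypothetical sketch (the metric, threshold, and sampling scheme below are invented for illustration, not a real regulatory standard):

```python
# Hypothetical compliance check: auditors label a random sample of content
# and the platform must keep the estimated prevalence of harm under a ceiling.
def prevalence_of_harm(sample_labels: list[bool]) -> float:
    """Fraction of an audited random sample judged harmful."""
    return sum(sample_labels) / len(sample_labels)

AGREED_CEILING = 0.005  # e.g., at most 0.5% of sampled impressions judged harmful

sample = [False] * 995 + [True] * 5   # stand-in for one audited sample
rate = prevalence_of_harm(sample)
print(rate, "within the bar" if rate <= AGREED_CEILING else "above the bar")
```

How the platform gets under the bar, whether with human review, better ranking, or a trained model, is left to it; that is the innovation incentive being described.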

But in the absence of government regulation, how do we foster a greater sense of social responsibility within the tech sector? Like Airbnb, you said you gave them the findings and they changed. There was no law. What do you think the levers are if there is no, in the absence of government regulation, to get some of this, you know, sort of more positive technology into the world?

It is a series of things that have to be put in place. Some of it is a sense of responsibility. I mean, if social media was financially responsible for some of the harms that people are experiencing, that's one way to incentivize them quickly to find solutions. Sometimes the answer might be a legal answer like that. It might be setting a metric with fines or embarrassment.

I think all of these things play a role. But the most important thing is the point that you're making is it's all about how do I get them to be more responsible to society for the harms that their technology may be producing?

So I asked this question with a little bit of nervousness, but I think it's good in this context. I usually take a lot of pride in LinkedIn being much better on all these issues than other social networks. Any commentary on LinkedIn, how we should improve other things? It's always important to do. And here I am talking to an expert, so I figured might as well ask. So Reid, I just have to tell you straight up, you know, I admire you that you would ask me that question in a recording. Yeah.

Truth and improvement really matter. In my class, I have this chart of these long lists of laws and regulations. And I say, look, people died and fought for these laws. And let's go visit them and see how they're no longer possible to be enforced. So one of the ones where LinkedIn shows up on the list

has to do with employment. So when recommendations are made to employers, I'm just going to say, because it's not just a LinkedIn thing, it's any group who has a large set of resumes, there's a business opportunity to recommend to an employer who might be a good candidate. And so I say, yeah, I want to hire some people. So the first group you give me, I say, oh, I like these. I don't like these.

And the system is going to learn my preferences over time. But my preferences might be that I don't want women or my preferences might be a discrimination against age. And all of the ways that we would have detected that before happening at scale, we can't do in that closed conversation, the closed loop.

And so this is a place where that's, we need the technology to make that assurance to us that, you know, well, yeah, I did learn their preference, but their preference is going too far one way. So I'm just going to automatically keep pushing in some of these others. Well, one of the things that may be good news for you is I actually do know that people at LinkedIn do make an effort on this topic. We actually care about it. I don't know if we're doing it well enough. That's a different question, but yeah.

The other thing that, you know, both Aria and I actually engage in is this thing called Opportunity@Work, which is to try to make sure that simply having degree certifications is not overly weighted, so that you actually can have access to different kinds of job opportunities, especially within the tech industry, by talent, not just by, oh, I have a degree from Stanford. Yeah. You know, kind of as a way. So it's stuff that we completely agree with, the vector you're talking about.

And it's not trivial to do. I mean, I don't want to make it sound like, oh, and you can just open up your box, you know, because it's a learning system and we want it to do its thing. I mean, it's very much the generative AI kind of issue we were talking about earlier. If you make it an appendage after the fact, it has other consequences that may make your service not as good.

Yep. And the precise way that we update these laws, because the technology does change the landscape in which we all operate, is one of the things that's way behind. So completely agree with all your points on that.

Well, one of the things that Reid and I have been talking a lot about lately is agency. And so you might know Reid recently wrote this new book called Super Agency. And the idea of super agency is when like millions of people get simultaneous access to new technological breakthroughs. And so it has those network effects. So it doesn't just benefit you. It benefits everyone around you and everyone around you having that additional agency benefits you even more.

I love how you are both a techno-optimist, but you very much see the harms and want to make sure that they're equally applied to everyone so that everyone can benefit. When you think of agency and super agency, perhaps in particular in the age of AI, how do you think it could be applied to education? How could you use it as a professor in the classroom? How can Harvard use it? You've spent a long time in academia. What does AI mean for the next wave of education?

What a fantastic question. Oh, my God. So I had the opportunity to lead the college in trying to get its head around what to do with generative AI. And of course, it means rethinking our classes and how we teach and how we learn. So in some classes, the ability to have to interact with the AI, to interact with an LLM and so forth, as a perspective, I mean, in a philosophy class, being able to say,

Driverless cars. What would Immanuel Kant think about driverless cars? How would you apply dialectical logic to this? And then the students can argue whether or not they agree with that interpretation or not. The ways in which this can be done is amazing. So the opportunities are there. It requires a total rethinking. How do students learn to write or program? Because I can ask an LLM, could you write me a program that does

my assignment. And so it makes us also have to think about there are times where we want these students to develop a particular skill, but also what is the future like when they go to a job? Are they going to, if they get hired as a programmer, does their boss want them sitting there writing the code from scratch when they could build on the shoulder of an LLM to give them the first draft? Helping my colleagues begin to navigate what is this and what does it mean was a huge honor actually. And

And it's been exciting to see the new uses that it's been put to. Of course, the students are always one step ahead. Well, speaking of students, any recent insights that you may have gained from your students? You know, it's really interesting. I live with 400 20-year-olds. So I also have a role as a faculty dean. This is my ninth year doing this. And they give me so much hope for the future.

So much hope. They're amazing people. They have, you know, you forget what it was like to be 20 and where you're trying to figure out what you're going to keep and what you're going to throw away and who you are. But also just the energy and boldness that many of them exhibit is quite refreshing.

It's fascinating to me when I get discouraged and so forth sometimes to just have a conversation with them and them see a different kind of light or a different kind of way forward.

Do they see problems? There's a long list. In fact, there's a group of 30 students who are organized who don't even take my class. They just want to meet and talk about my issues. So it's very much on their mind. If I were to go back 20 years ago, students were interested in my privacy class too, but they didn't feel the same kind of urgency and they didn't feel the same kind of passion and they didn't feel the same kind of need.

I do think the students today, their eyes are much wider open. I mean, you have students who want to take a class with you. It's not even a class. I think that's a good sign. I think you're doing something right. I think we're good to move to rapid fire. So I will start. Is there a movie, song or book that fills you with optimism for the future?

Well, I just got through reading a book called Queen Bess. It excites me for the future because it's a bit political in a time where at least 40 percent of our country is caught up in political concern. And it asks the question, who would you bring from the past to help you navigate the future? And in this particular author, her answer was Queen Elizabeth.

Right? Elizabeth the First, who, like, reigned through all these rough times. And then you put her in today's setting. What would she say? What would she do? So I find it very delightful. That is actually a great rapid-fire question. I feel like I want to ask so many people that. Like, that's a great cocktail party question. Who would you bring from the past to help navigate the future? Absolutely. So where do you see progress or momentum outside of your industry that inspires you?

Well, I actually am inspired in my industry by AI, frankly.

Yes, it's scary, a bold new future at a time when we're not ready for it. But on the other hand, what it might be capable of and where it might take us is exciting. Whether or not we actually get to that vision of utopia. I mean, in many ways, I'm still the graduate student having that conversation in the lounge

with the ethicist, still trying to make good on that vision of technology. I love it. And that brings us right to the final question perfectly. Can you leave us with a final thought on what you think is possible to achieve if everything breaks humanity's way in the next 15 years? And what's the first step to get there?

Oh, then I will have succeeded in my life mission of delivering to society the benefits of technology without the harms. Amen. That's awesome. Latanya, great pleasure. I look forward to talking to you again. Oh, thank you guys. I really appreciate it. Possible is produced by Wonder Media Network. It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Sean Young.

Possible is produced by Katie Sanders, Edie Allard, Sarah Schleid, Vanessa Handy, Aliyah Yates, Paloma Moreno-Jimenez, and Malia Agudelo. Jenny Kaplan is our executive producer and editor. Special thanks to Surya Yalamanchili, Saida Sepiyeva, Vinasi Delos, Ian Alice, Greg Beato, Parth Patil, and Ben Relis. And a big thanks to Max Boland, Joshua Shank, and Little Monster Media Company.