
At The Money: Algorithmic Harm with Professor Cass Sunstein

2025/6/4

Masters in Business

People
Cass Sunstein
Topics
Cass Sunstein: Algorithms can provide customized services based on personal preferences; for example, the Jedi Knights use algorithms to deliver information and products that match users' interests. But the Sith exploit consumers' lack of information and behavioral biases. For instance, algorithms may push false advertising at consumers who know little about health products, or prey on the optimism bias of overly optimistic consumers. I think it is somewhat reasonable for algorithms to adjust prices to economic circumstances, charging the wealthy more, because that improves the efficiency of the system. But if an algorithm recognizes that consumers lack information, things go badly, whether in pricing or in quality. Algorithms are getting better and better at identifying and exploiting consumers' weaknesses. In short, while algorithms provide convenience, they also risk being abused to harm consumers' interests, and they require vigilance and regulation.

Deep Dive

Chapters
This chapter defines algorithmic harm using examples from Uber pricing, Amazon book recommendations, social media feeds, and music streaming services. It also introduces the concept of how algorithms can create cultural balkanization by reinforcing existing tastes and preferences, limiting exposure to diverse perspectives.
  • Algorithms determine prices (Uber), content (TikTok, Instagram), and even supermarket prices.
  • Algorithmic harm is defined as exploitation of consumers' lack of information or behavioral biases.
  • Algorithms can reinforce existing tastes, leading to cultural balkanization and hindering individual preference development.

Shownotes Transcript


This is an iHeart Podcast.

In business, plans change fast and your brand has to keep up. That's why teams rely on 4imprint for promotional products that deliver. 4imprint offers thousands of options including apparel, drinkware, tech, and trade show gear, many available with 24-hour turnaround, helping you move quickly and never compromise on quality. You'll enjoy free samples, expert support, and every order backed by their 360-degree guarantee, so it arrives right and on time. Explore more at 4imprint.com. 4imprint.

Bloomberg Audio Studios. Podcasts, radio, news. Algorithms are everywhere. They determine the price you pay for your Uber, what gets fed to you on TikTok and Instagram, and even the prices you pay in the supermarket.

Is all of this algorithmic impact helping or harming people? To answer that question, let's bring in Cass Sunstein. He is the author of a new book, Algorithmic Harm: Protecting People in the Age of Artificial Intelligence, co-written with Oren Bar-Gill. Cass is also a professor at Harvard Law School and is perhaps best known for his book The World According to Star Wars and for co-authoring Nudge with Nobel laureate Dick Thaler. So Cass, let's just jump right into this and start by defining: what is algorithmic harm?

Okay, so let's use Star Wars. Let's say the Jedi Knights use algorithms, and they give people things that fit with their tastes and interests and information. If people are interested in books on behavioral economics, that's what they get, at a price that suits them. If they're interested in a book on Star Wars, that's what they get, at a price that suits them.

The Sith, by contrast, take advantage with algorithms of the fact that some consumers lack information and some consumers suffer from behavioral biases. So we're going to focus on consumers first. If people don't know much, let's say, about health care products, an algorithm might know that, that they're likely not to know much.

and might say, we have a fantastic baldness cure for you. Here it goes. And people will be duped and exploited. So that's exploitation of absence of information. That's algorithmic harm. If people are super optimistic and they think that some new product is going to last forever when it tends to break on first usage, then the algorithm can know those are unrealistically optimistic people and exploit their behavioral bias. Okay.

So I referenced a few obvious areas where algorithms are taking place.

Uber pricing is one. The books you see on Amazon is algorithmically driven. Clearly, a lot of social media, for better or worse, is algorithmically driven. And even things like the sort of music you like on Pandora. What are some of the less obvious examples of how algorithms are affecting consumers and regular people every day?

Okay, so let's start with the straightforward ones and then we'll get a little subtle. So straightforwardly, it might be that people are being asked to pay a price that suits their economic situation.

So if you have a lot of money, the algorithm knows that. Maybe the price will be twice as much as it would be if you were less wealthy. That, I think, is basically OK. It leads to greater efficiency in the system. It's like rich people will pay more for the same product than poor people. And the algorithm is aware of that. So that's not that subtle, but it's important. Also not that subtle is targeting people directly.

based on what's known about their particular tastes and preferences. Let's put wealth to one side. And so it's known that certain people are super interested in dogs, other people are interested in cats, and there we go. And all that is very straightforward happening. If consumers are sophisticated and knowledgeable, that can be a great thing to make markets work better. If they aren't, it can be a terrible thing to make consumers get manipulated and hurt.

Here's something a little more subtle. If an algorithm knows, for example, that you like Olivia Rodrigo, and I hope you do because she's really good, then there are going to be a lot of Olivia Rodrigo songs that are going to be put into your system. And let's say no one's really like Olivia Rodrigo, but let's suppose there are others who are vaguely like her, and you're going to hear a lot of that.

Now, that might seem not like algorithmic harm. That might seem like a triumph of freedom and markets. But it might mean that people's tastes will calcify and we're going to get very balkanized culturally with respect to what people see and hear. So they're going to be Olivia Rodrigo people and then they're going to be Led Zeppelin people and they're going to be Frank Sinatra people. And there was another singer called Bach, I guess. I don't know much about him, but there's Bach and there would be Bach people.

And that's culturally damaging, and it's also damaging for the development of individual tastes and preferences.

So let's put this into a little broader context than simply musical tastes. And I like all of those, so I haven't become balkanized yet. But when we look at consumption of news media, when we look at consumption of information, it really seems like the country has divided itself into these happy little media bubbles

that are either far left-leaning or far right-leaning, which is kind of weird, because as I've always understood it, the bulk of the country fits the traditional bell curve: most people are somewhere in the middle. Maybe they're center-right or center-left, but they're not out on the tails. How do these algorithms affect our consumption of news and information? Right.

About 15, 20 years ago, there was a lot of concern that through individual choices, people would create echo chambers in which they would live. And that's a fair concern. And it has created a number of, let's say, challenges for self-government and learning. What you're pointing to is also emphasized in the book, which is that algorithms can echo chamber you.

An algorithm might say, you know, you're keenly interested in immigration and you have this point of view. So, boy, are we going to funnel to you lots of information because clicks are money and you're going to be clicking, clicking, clicking, clicking. And that might be a very good thing from the standpoint of the seller, so to speak, or the user of the algorithm. But from the standpoint of you, it's not so fantastic anymore.

And from the standpoint of our society, it's less than not so fantastic because people will be living in algorithm driven universes that are very separate from one another. And they can end up not liking each other very much. But even worse than not liking each other, their view of the world changes.

aren't based on the same facts or the same reality. Everybody knows about Facebook and to a lesser degree, TikTok and Instagram and how it very much balkanized people into things. And we've seen that in the world of media. You have Fox News over here and MSNBC over there. How significant of a threat

does algorithmic news feeds present to the country as a democracy, self-regulating, self-determined democracy? Really significant. And there's algorithms and then there are large language models, and they can both be used to create situations in which, let's say, the people in

Some city, let's call it Los Angeles, are seeing stuff that creates a reality that's very different from the reality that people are seeing in, let's say, Boise, Idaho. And that can be a real problem for understanding one another and also for mutual problem solving.

So let's apply this a little bit more to consumers and markets. You described two specific types of algorithmic discrimination. One is price discrimination and the other is quality discrimination. Why should we be aware of this distinction? Do they both deserve regulatory attention?

So if there is price discrimination through algorithms, in which different people get different offers depending on what the algorithm knows about their wealth and tastes, that's one thing. And it might be OK. People don't stand up and cheer and say hooray. But if people who have a lot of resources are given an offer that's not as, let's say,

seductive as one that is given to people who don't have a lot of resources just because the price is higher for the rich than the poor, that's okay. There's something efficient and market-friendly about that. If it's the case that people who are, let's say,

not caring much about whether a tennis racket is going to break after multiple uses and other people who think the tennis racket really has to be solid because I play every day and I'm going to play for the next five years, then some people are given the, let's say,

Immortal Tennis Racket and other people are given the one that's more fragile. That's also OK, so long as we're dealing with people who have a level of sophistication, they know what they're getting and they know what they need. If it's the case that for either pricing or for quality, the algorithm is aware of the fact that certain consumers are particularly likely not to have relevant information, then everything goes haywire.

And if this isn't frightening enough, note that algorithms are in an increasingly excellent position to know: this person with whom I'm dealing doesn't know a lot about whether products are going to last, and I can exploit that. Or: this person is very focused on today and tomorrow, and next year doesn't really matter. The person's present-biased, and I can exploit that.

And that's something that can damage vulnerable consumers a lot, either with respect to quality or with respect to pricing. So let's flesh that out a little more. I'm very much aware that when Facebook sells ads, because I've been pitched these from Facebook,

They could target an audience based on not just their likes and dislikes, but their geography, their search history, their credit score, their purchase history. Like they know more about you than you know about yourself. It seems like we've created an opportunity for some potentially abusive behavior. Where is the line crossed here?

From, hey, we know that you like dogs and so we're going to market dog food to you to we know everything there is about you and we're going to exploit your behavioral biases and some of your emotional weaknesses. OK, so suppose there's a population of Facebook users who are super well informed about food and really rational about food.

So they particularly happen to be fond of sushi and Facebook is going hard at them with respect to offers for sushi and so forth. Now, let's suppose there's another population, which is they know what they like about food, but they have kind of hopes and false beliefs both about the effect of food on health.

then you can really market to them things that will lead to poor choices. And I've made a stark distinction between fully rational, which is kind of economic speak, and, you know, imperfectly informed and behaviorally biased people, also economic speak, but it's really intuitive. There's a radio show, maybe this will bring it home, that I listen to when I drive into work. And there's a lot of marketing about a product that is supposed to relieve pain.

And I don't want to criticize any producer of any product, but I have reason to believe that the relevant product doesn't help much. But the station that is marketing this product to people, this pain relief product, must know that the audience is vulnerable to it.

And they must know exactly how to get at them. And that's not in the interest of, that's not going to make America great again. To say the very least. So we've been talking about algorithms, but obviously the subtext is artificial intelligence, which seems to be the natural extension and further development of ALGOS.

Tell us how, as AI becomes more sophisticated and pervasive, how is this going to impact our lives as employees, as consumers, as citizens? ChatGPT, chances are, knows a lot about everyone who uses it. So I actually asked ChatGPT recently. I use it some, not hugely. I asked it to say some things about myself.

And it said a few things that were kind of scarily precise about me based on some number of dozens, not hundreds, I don't think, of engagements with ChatGPT.

So large language models that track your prompts can know a lot about you. And if they're able also to know your name, they can instantly basically learn a ton about you online. And we need to have privacy protections that are working there. Still, it's the case that AI broadly is able to use algorithms and generative AI can go well beyond the algorithms we've gotten familiar with.

Both to make the beauty of algorithmic engagement, that is, here's what you like, here's what you want, we're going to help you. And the ugliness of algorithms, here's how we can exploit you.

to get you to buy things. And of course, I'm thinking of investments too. So in your neck of the woods, it would be child's play to get people super excited about investments, which AI knows the people with whom it's engaging are particularly susceptible to, even though they're really dumb engagements. Really, really interesting. So since we're talking about investing, I can't help but bring up

both AI and algorithms trying to increase so-called market efficiency. And I always go back to Uber's surge pricing. As soon as it starts to rain, the prices go up in the city.

It's obviously not an emergency. It's just an annoyance. However, we do see situations of price gouging after a storm, after a hurricane. People only have so many batteries and so much plywood and they kind of crank up prices. How do we determine what is the line between something like surge pricing and something like, you know, abusive price gouging? Okay. So you're in a terrific area of...

behavioral economics. So we know that in circumstances in which, let's say, demand goes up high because everyone needs a shovel and it's a snowstorm, people are really mad if the prices go up, though it might be just a sensible market adjustment.

So as a first approximation, if there's a spectacular need for something, let's say shovels or umbrellas, the market inflation of the cost, while it's morally abhorrent to many, and maybe even in principle, is OK from the standpoint of standard economics.

Now, if it's the case that people under short-term pressure from the fact that there's a lot of rain are especially vulnerable, they're in some kind of emotionally intense state, so they'll pay kind of anything for an umbrella, then there's a behavioral bias which is motivating people's willingness to pay a lot more than the product is worth.

So let's talk a little bit about disclosures and the sort of mandates that are required. When we look across the pond, when we look at Europe, they're much more aggressive about protecting privacy and making sure big tech companies are disclosing all the things they have to disclose. How far behind is the U.S. in that generally? And are we behind when it comes to disclosures about algorithms or A.I.?

I think we're behind them in the sense that we're less privacy focused, but it's not clear that that's bad. And even if it isn't good, it's not clear that it's terrible.

I think neither Europe nor the US has put their regulatory finger on the actual problem. So let's take the problem of algorithms not figuring out what people want, but algorithms exploiting a lack of information or a behavioral bias to get people to buy things at prices that aren't good for them.

That's a problem. It's in the same universe as fraud and deception. And the question is, what are we going to do about it? A first line of defense is to try to ensure consumer protection, not through heavy-handed regulation. I'm a longtime University of Chicago person. I have in my DNA not liking heavy-handed regulation, but through helping people to know what they're buying.

and helping people not to suffer from a behavioral bias, such as, let's say, incomplete attention or unrealistic optimism when they're buying things. So these are standard consumer protection things, which many of our agencies in the U.S., homegrown, made in America, have done. And that's good, and we need more of that. So that's the first line of defense. The second line of defense isn't to say, you know, privacy, privacy, privacy, though maybe that's a good song to sing. It's to assert a right to algorithmic transparency.

So this is something which neither the U.S. nor Europe nor Asia nor South America nor Africa has been very advanced on. So this is a coming thing where we need to know what the algorithms are doing. So it's public. What's Amazon's algorithm doing? That would be good to know. And it shouldn't be the case that some efforts to ensure transparency invade Amazon's legitimate rights.

Really, really fascinating. Thanks, Cass. Anybody who is participating in the American economy and society, consumers, investors, even just regular readers of news,

needs to be aware of how algorithms are affecting what they see, the prices they pay, and the sort of information they're getting. So with a little bit of forethought and the book Algorithmic Harm, you can protect yourself from the worst aspects of algorithms and AI. I'm Barry Ritholtz. You're listening to Bloomberg's At The Money. At The Money.

The data that matters for your investments. The entire auto sector is higher today. And analysis on the companies making news on Wall Street. Tesla's been a stock that's been in focus. Shares have really been all over the map this morning. Listen to the Stock Movers Report from Bloomberg. Let's talk.
