Under Biden, Americans' cost of living skyrocketed. Food, housing, auto insurance. Lawsuit abuse is a big reason everything's more expensive today. Frivolous lawsuits cost working Americans over $4,000 a year in hidden taxes. President Trump understands the problem. That's why he supports loser pays legislation to stop lawsuit abuse and put thousands back in the pockets of hardworking Americans.
It's time to make America affordable again. It's time to support the President's plan. You can go to kitted.shop and use the code SMART50, S-M-A-R-T-5-0 at checkout, and you will get half off a set of thinking superpowers in a box. If you want to know more about what I'm talking about, check it out, middle of the show.
Welcome to the You Are Not So Smart Podcast, episode 309.
In 1974, two psychologists, Daniel Kahneman and Amos Tversky, worked together to, as the New Yorker once put it, forever change the way we think about the way we think.
The prevailing wisdom before their landmark research went viral in the way that things went viral in the 1970s was that human beings are, for the most part, rational optimizers who are always making the kinds of decisions and judgments that best maximize the potential of the outcomes under their control.
This was especially true in economics, where they were trying to understand the behavior of marketplaces. And the assumption, the wisdom of the day was that, yeah, there's some irrationality here and there, but it's random. They didn't see human error as something that could be predicted, something that was systematic, but that was all about to change.
Around 1974, it did. And the story of how this happened, how Amos Tversky and Daniel Kahneman created a paradigm shift so powerful that it reached far outside economics and psychology to change the way all of us see ourselves, is a very, very fascinating story. One that required the invention of something called the psychology of single questions. We're going to talk about that in this episode.
And let's start with one of those questions. Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with the issue of discrimination and social justice.
and also participated in anti-nuclear demonstrations. Which of the following is more probable? 1. Linda is a bank teller. Or 2. Linda is a bank teller and is active in the feminist movement. Okay, so which of these two feels right to you? Feels right. In their research, when Kahneman and Tversky asked this question,
more than 80% of people arrived at the wrong answer. Their guts told them, their intuitions told them that one of these two answers was clearly more likely than the other, more probable than the other. And more often than not, it was the wrong answer. Now, they asked this in the 1970s, at a time when the feminist movement of that era was very top of mind. It was in the news. It was everywhere. Yet today, most people still tend to arrive at the wrong answer, especially if not given much time to contemplate. So why is that? Well, here's social psychologist Andy Luttrell to help us understand and also to tell us which of these answers is the right answer.
So I think our intuitions are not statisticians. That's the fundamental breakdown. Statistically, there's no way that it's more probable that Linda is both a bank teller and active in the feminist movements than just the sheer probability of her being a bank teller. Because to be a bank teller and a feminist is like, it's more specific than being a bank teller. But Andy, I just, Linda is 31 years old, single, outspoken, very bright,
She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice and also participated in anti-nuclear demonstrations. It seems like, I mean, it has to be more likely that she is... Like, if I had described a different Linda than that, then I wouldn't feel this way. So how can it be true that...
It's not more like, how can it be true? When we ordinarily move through the world, we're not thinking about those probabilities. We're not actually computing the actual probabilities of things being the case. We just are looking at what we see and filtering that through our experience. And so we go, well, what I see looks a lot like feminist activities. Like this just looks like someone who would be involved in some sort of social justice kind of initiative.
But that wasn't the question you were asked. The question isn't how similar is this description to a feminist? The question is a question about probability. And so the way these, what we call heuristics work is that people end up answering a question that's not the question they were asked. They answer a question about similarity, thinking they're answering a question about probability. So the answer is that it is more probable that Linda is a bank teller
than Linda is a bank teller and anything else, anything else, in addition to being a bank teller. That's just math. That's just statistics, probabilities. But despite this, it doesn't feel right. And this has become known in psychology and statistics and economics and all sorts of domains as the Linda problem.
Because it's an example of the representativeness heuristic. Two very weird words. One sort of made up, the other very old. Representativeness is the made-up one. Heuristic comes from the Greek heuriskein, which means to find or to discover. It was popularized scientifically in computer science back in the mid-20th century, when researchers were experimenting with algorithms, the early AI research of the 1950s and 60s.
It was soon claimed by psychology, and it means in that context a shortcut, an easier path, cognitively speaking. And the concept of a heuristic in psychology emerged along with the concept of bounded rationality. We have limited brain power, limited time, limited patience, so we are satisficers, not maximizers. We will settle for good enough in all sorts of situations. For instance,
When deciding where to eat, we usually go with the easiest option, the nearest place. Not because that's our favorite, not because it is particularly good, but because it would be a hassle to open up your phone, search for the best food in town, talk to the people around you, ask what they think is the best place, what they recommend, and then take the time and effort to travel to wherever everyone says you should go, where your phone says you should go, where the search engines say you should go, where the maps or the Uber or whatever say you should go. It's easier to just go to the closest place, even if that's the cafeteria. Herbert Simon, the computer scientist, used the analogy of an ant on a beach. It can seem like the ant's behavior is complex and deliberate, but it's actually just using heuristics to overcome each second-by-second obstacle that it encounters.
And we are also doing something similar constantly. All of our little decisions all day long are often driven by heuristics. But when you look at the entirety of your decision-making over the course of a day or a week or a month or a year, it can seem like you've been making decisions thoughtfully, carefully, maximizing, doing the best thing you could possibly do in that situation. But we're rarely doing that.
So yes, the Linda problem. You are bypassing the laborious task of calculating probabilities, and your brain is looking for clues that fit a stereotype instead, like a lazy detective jumping to conclusions because it's easier. It literally takes less effort.
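If it helps to see the underlying arithmetic, here is a minimal sketch of the conjunction rule the Linda question trades on. The specific probabilities are invented purely for illustration; the only point is that the probability of two things both being true can never exceed the probability of either one alone.

```python
# A toy illustration of the conjunction rule: P(A and B) can never exceed P(A).
# The numbers here are invented for illustration; they are not from the study.

p_teller = 0.05                  # assumed probability that Linda is a bank teller
p_feminist_given_teller = 0.95   # assume nearly every bank teller like Linda is a feminist

# The probability of both being true is the product of the two,
# so it can only be smaller than (or equal to) p_teller on its own.
p_teller_and_feminist = p_teller * p_feminist_given_teller

print(p_teller)                # 0.05
print(p_teller_and_feminist)   # 0.0475, still less than 0.05
```

No matter how feminist-sounding the description gets, multiplying by a probability that can be at most 1 only keeps the joint probability the same or shrinks it.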
And to be clear, that's why there's a stereotype in this. Andy and I are both very supportive of feminism, and so were Kahneman and Tversky back in the 1970s when they came up with this, but it's the stereotype that makes it work. They worded the question that way because feminism was in the news a lot when they wrote this particular question. People were talking about it, they had all these preconceptions about it.
And all the representativeness questions are like this. Things that were in the news, things that people were discussing, things that either had established stereotypes or had stereotypes forming around them. They had people read descriptions that fit the current stereotype of engineers and lawyers and computer scientists. And just as with the Linda problem, people tended to say it was more likely the person in each story was two things instead of one thing,
which is not how probability works. But it feels true, and in their research, people rarely introspected beyond that feeling. They rarely did the math that would prove their guts to be incorrect. And you can reliably produce this effect if you word a question in just the right way. And this discovery, this approach, this entire domain, uncovered and popularized by Kahneman and Tversky,
is sometimes called the psychology of single questions. When asked in just the right way, certain questions will reliably produce errors in judgment and decision-making, which provided incredible insights into human behavior. And at the time Daniel Kahneman and Amos Tversky figured all this out,
these insights upended many of the common assumptions in the field of economics of that era. So much so that they ended up upending economics itself. So much so that it all led to a whole new field, behavioral economics. And for all this work, Kahneman received the Nobel Prize in economics in 2002. Unfortunately, Tversky had passed away in 1996, so he could not share in it.
And this is just some of the story of how Kahneman and Tversky changed the way we think about how we think, and how all this came to be. The grand story: how they met, how they did this work, how their predecessors in both psychology and economics cleared the path for them.
It's a fascinating tale with many, many examples like this. Insights into how we think and feel and behave and decide and choose and plan. And this whole thing is something that Andy Luttrell, who you heard at the beginning, recently made a podcast about. A whole podcast series telling the whole story.
full of lots of examples and lots of history. And it features Daniel Kahneman himself telling you about it. And not just Kahneman, the series features many other scientists who were part of this paradigm shift, all telling their stories. Several who also earned Nobel Prizes. The series is called They Thought We Were Ridiculous, The Unlikely Story of Behavioral Economics. Where did that title come from?
When we talked to Danny Kahneman, he constantly talked about, he kept using this word ridiculous. And he said, economists thought we were ridiculous. They thought heuristics and biases were ridiculous. And I think he even at one point says, like, I couldn't believe, like, to any psychologist that,
economists' beliefs are ridiculous. He just kept using that word ridiculous, and that sort of filtered into how we framed the show. And it's not just you. You've got some illustrious co-hosts. Who's joining you in this podcast? So this is a co-production between my show, Opinion Science, and this other podcast, Behavioral Grooves, which is operated by Tim Houlihan and Kurt Nelson. And we sort of met up years ago. Early in my podcast journey, I sort of connected with them
And then they pitched me on this idea. They heard this story about where the name behavioral economics came from. And I had just done an episode on cognitive dissonance where I like told this big story about this idea in psychology. And they were like, can we do that? But with behavioral economics. And I said, sure. How hard could it be? And it took forever.
So, in this episode of the You Are Not So Smart podcast, I'm going to play the entirety of episode two of They Thought We Were Ridiculous, which is all about the psychology of single questions and how that came to be. But first, before I play that entire episode, I want to play you three clips from episode one of that series.
That's an episode that sets the stage for the whole story by spending time with some of the people who started to notice anomalies all throughout economics before Kahneman and Tversky. Situations and scenarios that seem to suggest people are not, as much of economics believed at the time, purely rational creatures who make optimal decisions whenever possible. The first clip involves a blizzard, the second concerns cashews, and the third is best explained through billiards.
Here are the blizzard and cashew stories, back to back, and both feature Nobel Prize-winning economist Richard Thaler. These days, he's a distinguished professor at the University of Chicago Booth School of Business. But in the early 70s, he was an economics grad student at the University of Rochester. Thaler was being trained as an economist. He knew how people were supposed to make decisions, and he was a good student.
But he kept noticing that people in his life were acting weird, at least weird according to an economist. Like one time he and a friend were living in Rochester, New York, and they got free tickets to see a basketball game in Buffalo. And there was a big blizzard and we decided not to risk driving to Buffalo in a blizzard.
Which makes sense. It would be dangerous to make a long drive in a blizzard. But what was weird is that his friend said... Well, we had been given these tickets. He said if we had paid for those tickets, we would have gone.
And this didn't make sense. If it's dangerous to drive, it's dangerous to drive. What difference does it make whether you spent money on the tickets? Also, it was funny. When we talked to Thaler, he had just gotten a text from his daughter who was living in California. There was a heat wave there. Where the high temperature was going to be 110. But she and her friends had planned this camping trip. And rather than just cancel it, they were all trying to figure out how best to cope with the heat.
And my daughter, Jessie, said, Dad, that Blizzard story is still working here in Northern California. So that's the Blizzard story. Thaler collected anomalies, interesting stories and examples that seem to run counter to the common wisdom in economics. And he wrote them on a blackboard, and they talk all about that.
And as he was discussing this, while they were interviewing him, Thaler paused to show them a model of a bowl of cashews he has sitting on a shelf in his office. Here's Andy, after that moment, explaining why. One of the stories that made it to the blackboard is about a time in grad school when he had friends over for dinner.
They were all waiting for dinner to be ready, so Thaler put out a big bowl of cashews that people would snack on. But a lot of the cashews got eaten very quickly, so he took the cashews away and hid them in the kitchen. So we wouldn't ruin our appetites. And everyone at the party seemed happy that the cashews got hidden too. So in his Nobel lecture, he actually opens with this story, and he explains why the whole thing was weird from an economist's point of view.
Since it was a group of economics graduate students, we began to analyze it, which shows you the danger of going to a dinner party with a group of economists. And so the analysis was, A, that we were happy, and B, that we were not allowed to be happy.
because it's a basic axiom of economics that more choices are always preferred to fewer. Before, we had the choice to eat the nuts or not, and now we didn't. So what were we doing being happy? Eventually, his office blackboard was full of all these observations.
You know, I had this long list of funny things people did, but it wasn't clear what to do with it because they were just stories. And here's the story about billiards, which comes near the end of the first episode. And it really sums up what this whole series is about.
Lots of anomalies that raise major questions about basing the field of economics on this assumption that people are rational and out to maximize their own gains. And in some ways, maybe it's because I'm a psychologist, but it just feels like, yeah, why were we ever assuming that people were rational to begin with? Like, I've seen the decisions I've made. They're not all good. So I kept wondering whether these rational economists ever really believed this.
When we talked to Richard Thaler, I asked him this, and he told a story about himself and another brilliant thinker, Amos Tversky, who we're going to get to know better in the next episode. So in the 80s, there was a dinner in which I sat at the same table with Amos Tversky and an antagonistic economist.
And the antagonistic economist was regaling us with stories about what idiots people are when it comes to economic decision. And these included his wife, the dean, the president of the country, whoever it was at the time, most business operators,
And, you know, Amos kept egging him on. And then, you know, at the end he says to him, so, you know, I don't get it, because in your models, you assume everyone is rational. But over dinner, you tell us everybody's an idiot. Which is it?
So economists may secretly understand that people are irrational, that they're constrained by mental limits and a lack of self-control. But in traditional economics, it doesn't matter. And it's captured in a famous line by the economist Milton Friedman. Which is the as-if line.
In other words, people don't need to actually be hyper-rational. They just need to make decisions as if they were. And Friedman had a famous analogy of a billiards player who plays as if he knew trigonometry and physics. And my response to that is...
That might be true for an expert billiards player. But what about a typical guy at a bar who's aiming at whatever ball is closest to a pocket and often misses and makes predictable misses? Some shots, you can be pretty sure he's going to miss this way. You know, economic theory isn't supposed to be a theory of experts.
There are a zillion of these stories in the entire five-part series, detailing all sorts of psychological studies, and not just how they rattle economists, but how they changed psychology and many other domains, how they really, honestly changed our understanding of ourselves. So, after this commercial break, you'll hear the entirety of episode two of They Thought We Were Ridiculous, which features Daniel Kahneman explaining the psychology of single questions.
Then, after that episode, we will briefly return to the host of the podcast Opinion Science, social psychologist Andy Luttrell, to hear some of his thoughts about the series. All of that, after this break.
Okay, here's the thing I was talking about at the very beginning of the show, before the show started. The School of Thought. I love this place. I've been a fan of the School of Thought for years. It's a non-profit organization. They provide free Creative Commons critical thinking resources to more than 30 million people worldwide. And their mission is to help popularize critical thinking, reason, media literacy, scientific literacy, and a desire to understand things deeply via intellectual humility.
So you can see why I would totally be into something like this. The founders of the School of Thought have just launched something new called Kitted Thinking Tools, K-I-T-T-E-D Thinking Tools. And the way this works is you go to the website, you pick out the kit that you want, and
There's tons of them. And the School of Thought will send you a kit of very nice, beautifully designed, well-curated, high-quality prompt cards, each one about double the size of a playing card, printed on matte cello 400 GSM stock,
and a nice magnetically latching box that you can use to facilitate workshops, level up brainstorming and creative thinking sessions, optimize user and customer experience and design, elevate strategic planning and decision-making, mitigate risks and liabilities, and much, much more. And each kit can, if you want to use it this way, interact with this crazy cool app.
Each card has a corresponding digital version with examples and templates and videos and step-by-step instructions and more. You even get PowerPoint and Keynote templates,
There's so many ways you could use this. Here's some ideas. If you're a venture capital investor, you could get the Investor's Critical Thinking Kit and use it to stress test and evaluate different startups for Series A funding. If you're a user experience designer, you could get the User Design Kit to put together a workshop with internal stakeholders for a software product. Or if you're an HR professional, you could mix and match these kits to create a complete professional development learning program tailored specifically for your team over the course of the next decade.
So if you're the kind of person who is fascinated with critical thinking and motivated reasoning and intellectual humility and biases, fallacies, and heuristics, you know, the sort of person who listens to podcasts like You Are Not So Smart, you're probably the kind of person who would love these decks. If you're curious, you can get a special 50% off offer today.
That's right, half off offer right here. You can get half off of one of these kits by heading to kitted.shop.
K-I-T-T-E-D dot shop and using the code SMART50 at checkout. That's SMART50 at checkout. 5% of the profits will go back to the School of Thought. So you're supporting a good cause that distributes free critical thinking tools all over the world, on top of receiving a set of thinking superpowers in a box. Check all of this out at kitted.shop or just click the link in the show notes.
Under Biden, Americans' cost of living skyrocketed. Food, housing, auto insurance. Lawsuit abuse is a big reason everything's more expensive today. Frivolous lawsuits cost working Americans over $4,000 a year in hidden taxes. President Trump understands the problem. That's why he supports loser pays legislation to stop lawsuit abuse and put thousands back in the pockets of hardworking Americans.
It's time to make America affordable again. It's time to support the president's plan. I know for sure that most studies show that new habits fail simply because you don't have a plan. And without a plan, it's really hard to stick with a new habit or routine. That's why Prolon's five-day program is better than any trend out there. It's a real, actionable plan
for real results. Prolon. It was researched and developed for decades at USC's Longevity Institute. It's backed by leading U.S. medical experts. Prolon by L-Nutra is the only patented fasting-mimicking diet. Yes, fasting mimicking.
You're going to trick your body and your brain into believing that you are fasting. And when combined with proper diet and exercise, it works on a cellular level to deliver potential benefits like targeted fat loss, radiant skin, sustained weight loss, and cellular rejuvenation.
So I received a five-day kit from Prolon. And let me tell you, right away, I knew this was going to be fun and fascinating. The kit comes in this very nice box with this very satisfying Velcro latch. They did a great job with this
presentation packaging. And inside, on the underside of the lid, you get this message often attributed to Hippocrates about letting food be thy medicine. And below that, a QR code that takes you to your personal page for guidance, tips, and tracking. And then under all that, each day's food, snacks, vitamins, and supplements packaged within its own separate box. Each day has its own box. And
When you get in there, right away, it looks like it's going to be easy and fun. And it was, both of those things. I was totally willing for all this food to taste bland and boring in service of the concept, in service of the mimicking of a fasting experience. But it turns out, it all tasted great. And day one,
is all set up to prepare your body for the program. And they give you everything you need to stick to it. It's very clearly designed by people who know what they're doing. And Prolon tricks your body into thinking you aren't eating. But here's the thing. It works without being painful because you do get to eat.
You eat all sorts of little bits and bobbles that are like minestrone soup and these crunch bars and olives. And there's so much stuff in each day's kit. I've got one right here. Almond and kale crackers and a Choco Crisp bar, an intermittent fasting bar, minestrone soup, algal oil. I love how right away I was like, oh, I can't wait to try this stuff out. And yes, you do get hungry, but not nearly as hungry as I thought you would get.
And most importantly, by day three, I noticeably felt great. I had this, oh, I'm in on a secret feeling. And when it comes to hunger, by day five, I didn't feel like I was missing out on anything at all. The hunger was very minimal. And by the end of all this, my skin looked noticeably glowy and healthy.
And yeah, I lost six pounds. Six. It was easy to follow and I felt reset afterwards, like I had rebooted my system. To help you kickstart a health plan that truly works, Prolon is offering You Are Not So Smart listeners 15% off site-wide, plus a $40 bonus gift when you subscribe to their five-day nutrition program. Just visit
ProlonLife.com slash Y-A-N-S-S. That's P-R-O-L-O-N-L-I-F-E dot com slash Y-A-N-S-S to claim your 15% discount and your bonus gift. ProlonLife.com slash Y-A-N-S-S.
And now we return to our program. I remember discovering that
economists actually believed that stuff. I mean, I remember that as, you know, because they were in the building next door. And it seemed, you know, that something that to a psychologist would look ridiculous was doctrine. And, you know, that's what their theory was based on. But we were not thinking of changing economic theory.
Welcome back to They Thought We Were Ridiculous, a podcast series about young social scientists who dared to challenge the most basic assumptions of their field and won.
I'm Andy Luttrell. I'm Kurt Nelson. And I'm Tim Houlihan. And this time, this scrappy field of behavioral economics picks up steam with a few critical insights and collaborations. In the last episode, we met Richard Thaler, the guy who kept noticing that his beloved field of economics couldn't actually explain what he saw his friends and family doing.
And we saw how the stage was set for him by the economist Herb Simon, who as far back as the 1940s was writing about how people don't always make choices in the strictly rational way economists say they do. But why didn't his ideas catch on back then? Here's Richard Thaler. You know, Herb Simon...
came along well before me and was talking about bounded rationality. But he kind of gave up talking to economists. He found them too annoying. And the reason that he didn't make any headway is he didn't have the idea of systematic bias.
Systematic bias. He's saying that it wasn't enough to just claim that people stray from being rational. What was missing was a keen sense of how people were irrational. Sure, people make choices that aren't optimal, but do they consistently make the same kind of mistakes over and over again? If we could crack that nut, we'd be in business. And luckily, two young psychologists had some bright ideas at exactly the right time.
Hi, my name is Amos Tversky from Stanford University. Amos Tversky grew up in Israel, went to college at Hebrew University of Jerusalem, and finished a PhD in psychology at the University of Michigan. Unfortunately, he died in 1996 when he was just 59. We would have loved to talk to him.
But what he was almost certainly best known for was the work he did with his friend and longtime collaborator, who we did get to talk to, Daniel Kahneman. By the way, you should call me Danny because that's what everybody calls me.
Danny was also from Israel and spent much of his childhood in Paris, where he and his family survived the Nazi occupation during World War II. Fast forward to 1969, and he was teaching a psychology class at the Hebrew University of Jerusalem. He'd heard that Amos Tversky was a rising star in the field. They actually overlapped for six months at the University of Michigan before this, but never had the occasion to get to know each other. They were just swimming in their own lanes.
So now they were both at Hebrew University and still not having much to do with each other. By the way, there was no competition here, whatever some students may have been guessing at the time. It really was just two brilliant guys doing their own thing on separate tracks. That is, until one day, Kahneman invites Tversky to present to his students in this seminar.
Kahneman said he could present on whatever he wanted, so Tversky decided to talk about some work by his old colleagues at Michigan. Research looking into whether people's intuitions match the laws of probability and statistics. Spoiler alert, they don't, but we'll come back to that. Kahneman was intrigued, and the two met for lunch later that week to keep the conversation going.
One lunch turned into another, and by the end of the year, you could usually find them together, talking and laughing and debating. We did everything together. I mean, for the first few years, we actually were not working on the problem when we were alone. We only worked together.
But from the outside, it was hard to see where this chemistry came from. In his biography of the two, Michael Lewis writes, "Danny was always sure he was wrong. Amos was always sure he was right. Amos was the life of every party. Danny didn't go to parties. Amos was loose and informal, but Danny had an air of formality. Danny was a pessimist. Amos was not merely optimistic. He willed himself to be optimistic."
Danny took everything seriously. Amos turned much of life into a joke. And yet, it just worked. We wrote every word of every paper together. We would go to a particular place in Jerusalem and sit together for hours and, you know, do a few sentences a day. It's called the Van Leer Institute. It's a very nice place. And at the time, you know, there was coffee ad lib and cookies.
And we were great consumers of coffee and cookies. And it was a lovely place. How many coffee and cookies would you estimate went into that paper? A lot. A lot. But what was it that this inseparable duo spent all the time working on? What was their big breakthrough? Well, let's start with a question for you. What's more common, death by homicide or death by stomach cancer?
The right answer is death by stomach cancer. But a lot of people wrongly think that homicide is a more common cause of death. And why is that? Because we tell those stories more. We remember those stories more. So when we're cornered and we're asked to guess what's more common, we quickly get the sense that we've heard plenty of news reports about shootings and other violent events. So that feels like it's more common.
Even though deaths from stomach cancer are more common, they don't come to mind as easily. Kahneman and Tversky called this the availability heuristic. A heuristic is like a mental shortcut people take when they make estimates or judgments and they don't already know the right answer.
What you do instead of computing probability the way it ought to be done, what do you do instead? Or, in other words… When you answer an easy question instead of a hard one. So in this case, the hard question is which cause of death is actually more common –
But the easier question is which of these evokes a more vivid memory? And it's not like this is a terrible strategy. Most of the time, things come to mind easily because they're actually more common. But we apply this mental trick a little too exuberantly, and we find ourselves making mistakes when the trick breaks down.
Like the research on asking people about the cause of death statistics. In the grand scheme, the more common some ailment really is, the more common people think it is. Cancer, car accidents, and heart disease are tragically common, and intuitively we generally recognize that that's the case. We also know that smallpox, lightning, and botulism are uncommon because we reasonably don't have that many examples of them to turn to. But tornadoes, drowning, homicide—
those don't actually happen that often. But those cases are quick to make the news and to spread through stories. So this mental trick, if you remember something easily, it probably happens a lot. It's generally a smart way to save mental effort, but at the end of the day, it's still just a trick.
So that's the availability heuristic, and they told the world about it in 1974 in a research article in the journal Science, which is a huge, important journal across all the sciences. But they actually reported their discovery of three heuristics, not just availability. Yep, and to appreciate the second heuristic, let's do another question.
I want you to estimate the total number of babies born in the United States each year. I can tell you it's more than 100, right? Have a guess. Number of babies born in the U.S. every year. Do you have a number in mind? Okay, well, I actually don't know the real answer, but that's not really the point. Anyway, the point is, how did you come up with your answer?
Was it just a pure, unbiased estimate based on the things you know to be true about babies in the United States? Probably not. Because I gave you some leading information. I said it was more than 100. Now, that's objectively not helpful or relevant, really. Like, of course it's more than 100.
You should just dismiss that bit of information completely because it's not useful. But that's not what people do. And we know this because sometimes researchers will ask people the same question, but a little differently. Like, how many babies are born in the United States each year? I can tell you it's less than 50,000.
Okay, less than 50,000? Also not helpful. If we were being completely rational, we would just make our guess the same way we would if you said it was more than 100.
But in a recent study that asked those questions, people's guesses depended a lot on the question itself. If the researchers noted that the answer is more than 100, people on average guessed that about 3,000 babies were born in the US each year. But if the researcher noted that the answer is less than 50,000,
Now people's average guess was that almost 27,000 babies were born in the U.S. each year. To swing from 3,000 to 27,000 as your guess based on some meaningless information? Now that's a bias.
Kahneman and Tversky called this the anchoring heuristic. When we estimate a number that we don't already know, we will anchor on a starting point and move in the direction of the right answer until we feel like we've gotten there.
This is reasonable enough, but the problem is that people usually don't adjust far enough away from their starting point. If I start at 100 babies, I'll mentally increase that number 200, 300, 1,000 until hitting 3,000 babies born each year. And that seems about right.
But if I start at 50,000 babies, I'll mentally decrease that number down 40,000, 35,000, until I hit 27,000 babies born. That seems right too. But it's clear from comparing these different ways of framing the question that people are overly influenced by their starting points. Just like an actual anchor keeps a boat from straying too far in the ocean, a mental anchor keeps our estimates from straying too far from the number we started with.
Hey there, this is David McRaney just dropping in for a second to say that the average number of babies born in the United States every year, at least right about now, is 3.6 million. It's just worded strangely in the study to produce the effect. Okay, back to the show.
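A quick back-of-the-envelope comparison of those numbers shows how strong the pull of the anchor is. This is just a sketch using the figures quoted above, not anything from the original paper:

```python
import math

# Figures quoted above: average guesses under two different anchors,
# plus the rough true number of U.S. births per year.
true_births = 3_600_000
average_guesses = {"anchored at 100": 3_000, "anchored at 50,000": 27_000}

for label, guess in average_guesses.items():
    # How many orders of magnitude short of the truth does each guess land?
    gap = math.log10(true_births / guess)
    print(f"{label}: guess {guess:,}, about {gap:.1f} orders of magnitude short")
```

Both groups end up orders of magnitude below the real number, and each lands in the neighborhood of its own starting point, which is the signature of anchoring with insufficient adjustment.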
Okay, we've seen availability, we've seen anchoring. Their third key heuristic was representativeness. For me, representativeness was always the more interesting one. This is just something that happens to you, that instead you are asked a question about probability and you answer a question about similarity, but you think you have answered the correct question.
Here's another classic research problem to consider. This one is about Linda's buddy, Tom W., who is also a popular hypothetical character. Here's how Kahneman and Tversky described Tom to research participants in the early 70s. Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity and for the neat and tidy systems in which every detail finds its appropriate place.
His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to have little feel and sympathy for other people, and does not enjoy interacting with others. He nonetheless has a deep moral sense.
They said that this was a personality portrait of Tom W. in high school, but now he's in grad school. How likely is it that he's studying social work or medicine or computer science? They gave a list of nine areas and had people rank them from what Tom was most likely to be doing to what he was least likely to be doing. At the top of people's list, computer science. And I mean, come on, mechanical writing, sci-fi, a loner. This guy's a computer scientist for sure. But remember...
They were asking people this question in the early 70s. There were not a lot of computer scientists, and people knew it. But introduce them to Tom W., and they think, statistics be damned, this guy's a computer scientist if I've ever seen one. Substituting plausibility for probability. And it's not just these research participants who are biased.
Kahneman knew he was onto something from the moment he let Tom W. loose in the world. I pulled an all-nighter at the Oregon Research Institute, and my task was to come up with... I mean, I came up with Tom W. I wrote Tom W. that night. And then the first person...
to arrive was Robin Dawes. Robin Dawes was another psychologist who studied human judgment, and he was a sophisticated statistician. No way he'd make an error in reasoning. And I asked him, well, Robin, answer that question. And then he read carefully, and then he had a slight smile, like somebody who solved the problem. And he said, computer scientist? Yeah.
Ah, so even the smartest rational thinkers fall prey to the representativeness heuristic. We all ignore base rates. Of course, Robin was an expert on base rates, so clearly he was not using base rates even though he was completely aware of them.
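To see why base rates matter here, consider a minimal Bayesian sketch. The numbers below are hypothetical, chosen only to make the point and not taken from the original study: even if Tom's description fits computer science students far better than everyone else, a small enough base rate can still leave "computer scientist" as the less likely answer.

```python
# Hypothetical numbers, for illustration only (not from the original study).
# Suppose only 3% of grad students in the early 70s were in computer science,
# and Tom's description fits 80% of CS students but only 10% of everyone else.
base_rate_cs = 0.03
p_description_given_cs = 0.80
p_description_given_other = 0.10

# Bayes' rule: P(computer science | description)
numerator = base_rate_cs * p_description_given_cs
denominator = numerator + (1 - base_rate_cs) * p_description_given_other
p_cs_given_description = numerator / denominator

print(round(p_cs_given_description, 2))  # ~0.2, so roughly an 80% chance Tom is NOT in CS
```

The description pushes the odds toward computer science, but with a rare enough field the odds can still sit against it. That is the arithmetic Robin Dawes knew cold and still skipped.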
So we've got three heuristics, or mental shortcuts, that people use to make judgments when they aren't in the mood to be perfectly rational. Availability, anchoring, and representativeness. All of these make at least some sense, but they often result in wrong answers, biased judgments. But we should highlight something important about how Kahneman and Tversky told the world about these biases.
There are a lot of big, important ideas in the social sciences that do not catch on. What made these heuristics different? It's that instead of devising complicated, intricate experiments, they developed the psychology of single questions. Single questions. How common is homicide?
How many babies are born in the U.S.? Is Linda more likely to be a bank teller and an active feminist? That's the whole study. One question asked in a slightly different way to different people, and people's answers to a single question can tell the whole story.
It turns out that was essential to the success of the enterprise, which was a bit surprising. And that's because when we wrote the paper in Science, people outside the profession
reading those questions could feel that this was working on them. They would not have believed it if we had described it in the language of experiments. But we were using these questions as demonstrations. I mean, you could sense what is going on immediately. And that's really...
I think that's the story of why this particular work had so much impact. It is an accident of the medium that we chose. And this might make it seem like the research was easy. Oh, you just ask people one question and you're done? No. It had to be the perfect question. A question that lives right in the pocket of bias. That seems to make sense to people, but holds a secret, hiding in plain sight.
some feature that tickles our irrational brains so precisely that we're compelled to get the answer wrong and get it wrong in a particular way every time. That's what Kahneman and Tversky were doing with all of that time they spent hanging out together. You know, we spent the day, our days talking and finding things very amusing.
and looking for irony in the way our own thinking went. So we were looking for questions that we would find it tempting to answer wrongly. And that was our heuristic for searching for heuristics. We had that general idea, and then we were looking for examples. And the examples turned out to be quite neat.
Okay, so Danny Kahneman and Amos Tversky rocked the social sciences with their landmark 1974 paper on heuristics and biases. It's difficult to convey the incredible impact that paper had on the field. Analysts put it in the 10 most cited social science papers ever. That means other scientists are constantly referring back to it in their own work.
But wouldn't you know it, Kahneman and Tversky have another paper in the top 10. They were only just getting started. Prospect theory basically is an attempt to describe realistically the main elements of people's choices under risk. Your brain is already soaked with a lot of ideas, and we don't want to give you any more homework.
But to appreciate the gist of prospect theory, think about whether you would prefer it if I just gave you $20 or if I gave you a lottery ticket with a 20% chance of winning $100 and an 80% chance of winning nothing. So either $20 for sure or a 20% chance of winning $100. In cases like this, people tend to choose the sure deal. I'd rather take less of a sure thing than let the gods decide.
But here's another question. Would you rather take a sure loss of $75? You just have to give me $75 of your money. Or would you rather place a bet? Take a lottery ticket with a 25% chance that you don't lose anything, but a 75% chance that you'll actually have to give up $100. So either give me $75 for sure, or a 25% chance of giving me nothing, but a 75% chance you'll have to give me $100 instead.
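Before the answer, here is a minimal sketch of the arithmetic behind both questions, using the dollar amounts just described. In pure expected-value terms, the sure option and the gamble come out identical in each case.

```python
# Expected value of each option, using the amounts described above.

# Gain framing: $20 for sure vs. a 20% chance of $100 (and an 80% chance of nothing).
sure_gain = 20
risky_gain = 0.20 * 100 + 0.80 * 0    # = 20.0

# Loss framing: lose $75 for sure vs. a 75% chance of losing $100 (and a 25% chance of nothing).
sure_loss = -75
risky_loss = 0.75 * -100 + 0.25 * 0   # = -75.0

print(sure_gain, risky_gain)   # 20 20.0
print(sure_loss, risky_loss)   # -75 -75.0
```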
In this case, people tend to take the bet. But of course, rationally, there's no real difference between the two questions. And yet, when it's about getting money, people want the sure deal. And when it's about losing money, people will try their luck to get out of it. So prospect theory came about as a way to make sense of these funny ways people grapple with risk and uncertainty. You know, this could be an important paper.
And if it turns out to be an important paper, we want a distinctive name for it. And so prospect theory was just a distinctive name for a theory. But that really came because just in case it turns out to be important, we want it to be distinctive. Economists thought that the whole thing was ridiculous. And they also thought that the work on heuristics was ridiculous because...
They didn't think that everybody does everything right, but they thought that errors were random. That is, it's the idea that errors are systematic that violated their view of the world, because their view of the world was that people are rational plus random perturbation. And that, by the way, was the key insight for
Dick Thaler. That's what struck Dick Thaler when he read our paper. Oh, it was systematic. Ah, Dick Thaler, the economist we met in the last episode, the economist who said... My biggest discovery was discovering Kahneman and Tversky.
Thaler had been keeping notes on how people in the real world didn't make decisions like the people in economic models. The models say people are rational, but people in the real world aren't rational, at least not always.
And for a time, this was a curiosity, a set of anomalies. The main thing he was interested in was simply... These departures from rationality. But these departures might just mean that people make random errors. They're not thinking clearly, and so they're throwing darts at the board of economic decision-making. But what if it's not random? What if, as Kahneman just emphasized... Errors are systematic. That was a big light bulb going on.
And if you could predict when they were going to happen, then you were in business. So Thaler was obviously excited when he was reading the early work on heuristics and prospect theory. It's one of those perfect moments where lightning struck twice at the same time. Thaler and the other young economists were pushing back on the assumption that people are rational, and Kahneman and Tversky were studying the psychology of cognitive biases. It was the perfect peanut butter and jelly moment.
It was good because they didn't know any economics and I didn't know any psychology. So, as the economists say, there were gains from trade. The foothills of the Santa Cruz Mountains around Stanford University in California are chock-full of beautiful views and rolling hills.
And this is where we find the Center for Advanced Study in the Behavioral Sciences, CASBS, usually just pronounced "caz-bus." Since 1954, behavioral scientists have gathered for extended stays at CASBS to develop big ideas. And in 1977, one of those researchers was Danny Kahneman. The same year, Amos Tversky was visiting the psychology department at Stanford.
And it was at CASBS in 1977 that they finished writing their paper on prospect theory, walking for hours and sitting down together to perfect every sentence.
The summer before, Dick Thaler was visiting a colleague at Stanford and heard that his new idols were visiting the United States. They were going to be staying at Stanford for a year. So he hit the pavement, trying to cobble together any formal reason to keep him in Stanford for a while longer, just hoping to spend some time with Kahneman and Tversky. He lucked out and landed a visiting position at the National Bureau of Economic Research. It was a very complicated year,
Danny was at the center. Amos and Barbara... Barbara was Amos' wife. ...were visiting the psychology department. And Anne Treisman, who was not yet married to Danny, she and Danny were at the center. Her husband was at Berkeley. So they had five psychologists converging on the Bay Area for a year.
I would guess that Amos Tversky would have either walked or ridden a bike up to the CASBS hill several times a week. Thaler would have had like a two-minute walk from the NBER satellite office at that time to CASBS. So there was physical proximity. That's Mike Gattani, communications director for CASBS. They have lots of beautiful rolling hills around
behind us that you can walk through. Both Thaler and Kahneman separately described walking through the hills together, having conversations, one a psychologist, one an economist. I was about 100 meters down the hill from where Danny was, and Amos used to come visit often. And Danny and I would walk around, there's just wilderness up there, and we would
take long walks and think big thoughts. And, you know, he was, he's much younger, but he was just super smart and very funny. And he and I were neighbors and we would spend a lot of time walking together. There was also, it was very funny because Dick is very funny. He always was. And so it was a joy. And we learned economics.
Richard Thaler, Danny Kahneman, and Amos Tversky forged a unique and lasting bond that year as they took walks, exchanged ideas, and became close friends. You can't miss their respect and affection for each other all these years later. We asked Thaler to describe Danny Kahneman and Amos Tversky. Danny was the worrier of the two. He always expects the worst.
Amos was more confident and brilliantly analytic. His talks were stunning. At his desk, there was a legal pad, a pencil, nothing else. And we asked Danny Kahneman to describe Richard Thaler. I think of him as a genius. I mean, I think he's just extraordinary.
And he has a flair and a sense for what's important. We have that running joke, that he's lazy. And he doesn't spend any time on things that do not matter. And he's very wise. He has both irony and wisdom. And, you know, that's a very powerful combination. So let's just recap. The setting was CASBS in 1977.
Thaler is regularly talking to Kahneman and Tversky and seeing them work on their landmark theory while thinking hard about economics and psychology. Richard Thaler describes it as what pushed him to go all-in on his heretical perspective. He later called it, quote, the most important year of his life, end quote. But he was only getting started.
Kahneman, Tversky, and Thaler were exchanging ideas in the late 70s, and they continued to meet and talk and think big thoughts. But the engine needed a little more steam before it could become the kind of movement with enough force to really push back against the old guard. They needed someone who could provide more formal support for the growing revolution. Luckily, there was such a guy, Eric Wanner.
Wanner did a PhD in psychology at Harvard in the 1960s, started life as a professor, but gradually transitioned away from doing the science himself.
By the early 80s, he joined the Alfred P. Sloan Foundation, a nonprofit that makes grants for research and education. And in 1983, he suggested that the program consider an initiative to support what might be called the Psychological Foundations of Economic Behavior. Like so many others, he was enamored with Kahneman and Tversky's work and had gotten to know them a bit when he was an editor at the Harvard University Press. He even pitched the idea to them early on.
We met him at the bar and we had a drink together. It's true that we met in a bar. I mean, don't let it sound like we were just drinking and we thought this up in our alcoholic haze. And he said he wanted to put some money into bringing psychology and economics together. I said, well, how about this little program? I have this idea. We'll try to get
psychologists and economists together and we'll call it behavioral economics so we can think about that. And I remember what the answer was. There were two. One, that this is not a project on which you can spend a lot of money honestly. I remember that phrase.
And the other one was that you should not give that money to psychologists who want to reform economics. You should give that money, spend that money, on economists who want to learn psychology. I had to convince him that money could be spent responsibly and that responsible science could be done between economics and psychology, just because it would fail so often. I mean, the psychologists had been yelling, right?
rational economic man is a myth, forever and ever. And just yelling is not enough. And more yelling would not really help. So what they said to me was, well, okay, you know, we won't say no, we'll join up. But we don't think a lot of money could be spent very responsibly on this. So it was kind of a very cautious yes.
They ended up setting up a small fund and calling the program Behavioral Economics. And I think the very first grant was for Dick Thaler to spend a year with me. That was one of the first grants, certainly the first grant in Behavioral Economics. And that was an important year. Eventually, Wanner moved to the Russell Sage Foundation, where he became its president. I was going to get to run a foundation the way I wanted to run it.
You know, give me a little freedom and I can, you know, I can make a mess of a lot of things. The Behavioral Economics Grant Program came with him. The people who would become some of the movers and shakers in this world assured us that this was a crucial moment. It really played an essential role. I'm not sure if behavioral economics would even exist without the Russell Sage Foundation. The Russell Sage Foundation was very important in the history of behavioral economics.
That was George Loewenstein and Drazen Prelec. Each of them spent time in workshops and summer camps sponsored by the Russell Sage Foundation. What they all point to is the importance of bringing together smart people with different backgrounds to solve big problems. As head of Russell Sage, Wanner wasn't so much of a matchmaker as he was a party host, so to speak. Russell Sage provided the funding for creative researchers to explore their work.
This funding allowed relationships to bloom, time to think, and new ideas to bubble up and be refined. It fueled a train that was starting to pick up speed. My board at Russell Sage said, well, you did behavioral economics. What's the next thing you're going to do? And you know, ha ha, exactly. It's worth a big laugh because you have to be extremely lucky. You can't just upset science any old time you want to and make a go of it.
The movement was growing. Emerging research was pushing back at the assumption of rationality that classical economists were still clinging to. More and more people were doing important work on the psychological and biased side of economics.
By the 1990s, it was time for a meeting of the minds. And what better place than CASBS, that spot in the foothills surrounding Stanford University, where Thaler had finagled his way into face time with Danny Kahneman years before. Here's Mike Gattani again, communications director for CASBS. For years running in the 1990s, they tried to get all these guys in together. But it's just so hard to get people's schedules to align. Anyway, things just lined up for the '97-'98 year.
Five rising stars in behavioral economics spent that year together, most of whom you'll hear from in this series. Richard Thaler, Colin Camerer, George Loewenstein, Drazen Prelec, and Matthew Rabin. Danny Kahneman even popped by to give a talk that year.
That year was super generative. People worked on books, they finished papers that would go on to make a big impact, and they discussed the seeds of things that would grow into major developments, like how neuroscience can explain human decision-making, how behavioral economics could inform public policy, and the ethical dilemma of nudging people to make more optimal choices for themselves. One of the attendees, Drazen Prelec, pinpoints exactly what made the year so magical.
I think it was really the fact that we had endless hours of unstructured time together. You really need time without agendas, without structure. It's the kind of conversation that is really priceless that these centers are built to sustain. The other thing that was interesting was the only thing you had to do at that center was to go to lunch.
That's Colin Camerer.
And when someone would say something we thought was really wrong in terms of basic economics, we would all raise our hands. And suddenly we were like ambassadors from rational choice economics. Right. Even though our presence at the center was to do exactly the opposite.
And again, I think that's another way you can recognize a behavioral economist is that there's certain – like if somebody says money incentives don't really work very reliably to change behavior, it's like, no, no, no, no, no. That's the one thing that's reliable. It's just that it costs money. So it was really interesting. There was this almost like a common enemy effect. When somebody said something that we thought was really against basic economics, which we all kind of believe in, we just think the enhancement is even better. It kind of came together. Yeah.
So behavioral economists understood classic economics inside and out. And their movement was growing. What started as a frustration with the idea that people are rational became psychologically savvy with the help of Kahneman and Tversky. This ushered in an era of collaboration, new research, and mathematical models that showed classic assumptions were wrong because people play by different rules. They were all in, drinking their own Kool-Aid.
But what did everyone else think? Were psychologists okay with economists edging onto their turf? And did traditional economists take the criticism and update their views no problem? And at one point, Merton Miller gave an interview, I think at the Chicago Tribune or something, and said, you know, what do you think of Dick Thaler's work in behavioral finance? He said, well, you know, I don't think it's a serious theory. It hasn't helped us explain anything. You know, but every generation has to make its own mistakes.
That's next time on They Thought We Were Ridiculous. And instead of outro music, this is me again, David McRaney. This is the You Are Not So Smart podcast. And do please check out the other episodes in They Thought We Were Ridiculous. I'm going to sit down with Andy now for a brief conversation about some of the things I found fascinating in these first two episodes.
Starting with this whole concept of the psychology of single questions. Here's social psychologist Andy Luttrell.
Yeah, I'm glad you brought that up. There are a couple of moments in the series where I'm particularly proud of the writing, and the psychology of single questions is one of those little scenes. I feel very good about how we were able to construct that idea and convey how powerful it was. And the thing is, it's not just any question, right? When you say the psychology of single questions, it sounds like, oh, you just ask people a question, how hard could that be? But these were razor-precise questions that cut the fat off the edges and said, in effect: you're going to be wrong, and I know you're going to be wrong, and you're going to be wrong in such a particular way that the one answer you give me gives me an insight into your brain.
Yeah, that's what got me into this, and it still excites me in the same way. I love psychology, and like a lot of undergrads, I found psychology fun for a million different reasons, but
it was stumbling into these awesome questions. It's like a piece of powerful code, language as powerful code that, if I can present it to another human being, is going to produce behaviors and reactions, and I can sit back and feel superior to them. So that was very appealing. That was very appealing to me at that period of time in my life.
Somehow I was able to transmogrify that into, oh wait, this is humility; feeling this humility is helping me be a better human being. And I started to dig on that aspect of it. You have to ask it in just the right way, and that is killer to me.
I'm not really saying anything except I'm high-fiving you and then low-fiving you, because it's really funny. For what it's worth, I don't think this analogy made it into the episode, but it makes me think of optical illusions too, where
all of these pixels have aligned in just the perfect way that people will see things that aren't there, or they'll experience motion when there isn't any motion. And it's so simple. It's one little picture, a black and white illustration, and I can't help but see something that's not there. I don't really care that that's what you see, but the fact that you're seeing it tells me a lot about how your brain normally interprets reality.
And that's what these questions do.
Did you learn anything from this that you didn't already know? Is there anything that surprised you to the point that you were like, if I hadn't made this podcast, I would not, even though I am a professional psychologist, know X?
Yeah. I was exposed to behavioral economics in grad school, but it's definitely not my personal area of expertise. So I know about heuristics and biases, and I've heard about some of the ideas, but this was really a chance to explore what has turned out to be this really influential
perspective in social science. I also think there's a moment in the third episode where we talk about how psychologists reacted to this, and there's a real feeling of resentment, like, hey, guys, you're doing what we do, but you won't call it psychology; you have to call it behavioral economics. So I think there was a giant chip on social psychologists' shoulders, which I felt
too. And I think I became more appreciative of the fact that this really is an innovation within economics, and it really does kind of belong to economics. It was an opportunity for economists
to think about psychology, which they're not always wont to do. And so I'll let them have it. For what it's worth, too, one of the interesting things is that I, at least, was really focused on making a behavioral economics series and telling that story. But people kept wanting to slip into behavioral science, and I think that's kind of where this all went. Behavioral economics was kind of a way to prove that psychology could be
practically useful in businesses and governments, but very quickly it spun out into, well, this is just the importance of psychology in these domains. And at one point Kahneman gives a little bit of a rib jab to Thaler about that book Nudge, where he's like, yeah, you know, Nudge is just a book about social psychology. It's owned by the behavioral economics people, but most of what they talk about is basic social psychology.
And so I do think that's interesting. Behavioral economics was helpful in showcasing the potential, especially the policy potential, of psychology. And now behavioral science has become the moniker, and it's become more expansive than just economic decision-making.
After a while, if you know enough about psychology, especially this side of psychology, biases, fallacies, heuristics, everything that builds up into those things and everything those things fractalize out into, you're supposed to have an almost Buddhist sense of truth and humility, like, okay, I'm aware of how bounded our rationality is. I'm aware of how I'm constructing a subjective reality that is not a one-to-one representation of what's going on out there. All of these things, including all of these very reliable ways that we will make errors in judgment, reasoning, and decision-making. How do you feel knowing all that? How do you walk around the world feeling this way? What does it do to you?
I've always thought that knowing a lot about social psychology has made me just a more understanding, patient person. I'm so much more ready to
I don't want to say tolerate, but I'm so much more understanding when people do things that don't make sense, because, yeah, of course: we've got these weird brains that were never designed to be rational optimizing machines. We've got brains that are just kind of clunking along, trying to make the most out of the world we live in. And, you know, these biases, I think it's important to note that we focus on how they lead us astray, but by and large, they serve us, right? By and large, these are just tools that our brain is using to simplify the process of living a complicated life, and we wouldn't have them if they didn't serve us at least some of the time. So sure, we're going to make mistakes. We're going to see the world in faulty ways. But how else were we going to do this? How else were we going to survive on this planet
when we're just sort of these hunks of meat that have been assembled in a kind of haphazard way?
That is it for this episode of the You Are Not So Smart podcast. For links to everything that we talked about, head to youarenotsosmart.com or check the show notes right there inside your podcast player. The podcast featured in this episode is They Thought We Were Ridiculous. You can find its website at ridiculous-podcast.com.
Or you can go to Andy Luttrell's homepage for his podcast, OpinionSciencePodcast.com, and find links; there are links back and forth on both websites. You can find my book, How Minds Change, wherever they put books on shelves and ship them in trucks. Details are at DavidMcRaney.com, and I'll have all of that in the show notes as well, right there in your podcast player. On my homepage, you can find a roundtable video with a group of persuasion experts featured in the book.
Read a sample chapter, download a discussion guide, sign up for the newsletter, read reviews, and more. For past episodes of this podcast, go to Apple Podcasts, Amazon Music, Audible, Spotify, or youarenotsosmart.com. Follow me on Twitter at David McRaney. Follow the show at NotSmartBlog. We're also on Facebook slash youarenotsosmart. And if you'd like to support this operation, go to patreon.com slash youarenotsosmart.
Pitching in at any amount gets you the show ad-free, but the higher amounts get you posters, t-shirts, signed books, and other stuff. The opening music is Clash by Caravan Palace. If you really, really, really, really, really, really want to support the show, the easiest thing, the best thing, actually the most helpful thing, is just to tell people about it. Share episodes that meant something to you. Put it on LinkedIn. Put it on Instagram.
Well, you know, all the social networks. And check back in about two weeks for a fresh new episode that you can also share. Thank you very much for all of the support, all the messages that I get, all the emails. I respond to everything, and it's been really great, especially these last seven or eight episodes. There's been a real, noticeable amount of positive feedback, and I appreciate that.
A lot more cool stuff is coming soon. I have more than 20 episodes' worth of material, and I have a plan to do a series about cognitive biases soon. There's just all sorts of stuff coming. So thank you very much. I love making this show. And do check out any of Andy Luttrell's stuff over at OpinionSciencePodcast.com. Thank you.