
315 - May Contain Lies - Alex Edmans

2025/6/9

You Are Not So Smart

People
Alex Edmans
David McRaney
Topics
David McRaney: Inference is the foundation of cognition. Our brains, working from sensory input and mental models and drawing on prior experience and knowledge, are constantly guessing and predicting. Those guesses are not always accurate, but they help us make sense of and respond to a complex world quickly. The brain uses prior experience to resolve ambiguity and relies on the "makes sense stopping rule" to simplify decisions. That simplification, however, can also lead us to stop seeking further information, producing mistaken perceptions and judgments. We therefore need to recognize the limits of inference and keep reflecting on and revising our assumptions.


Transcript


Comcast is committed to bringing access to the internet to all Americans, including rural communities across the country, like Sussex County, Delaware. We were being left behind. Everybody around us seemed to have internet, but we did not. High-speed internet is one of those good things that we needed to help us move our farming, our small businesses, our recreation forward. Learn more about how we're bringing our next generation network to more people across the country at comcastcorporation.com slash investment in America.

You can go to kitted.shop and use the code SMART50, S-M-A-R-T-5-0 at checkout, and you will get half off a set of thinking superpowers in a box. If you want to know more about what I'm talking about, check it out, middle of the show.

Welcome to the You Are Not So Smart Podcast, episode 315.

So when Alfred Sloan was CEO of GM, he closed the meeting by asking, are there any objections to my decision?

There were no objections. So he said, well, then I propose that we postpone the decision until the next meeting so that you have opportunities to come up with concerns.

My name is David McRaney. This is the You Are Not So Smart podcast. That was economist Alex Edmans. And Alex Edmans is a professor of finance at London Business School. He serves on the World Economic Forum. He is a fellow of the British Academy and the author of several books. And he once delivered a TED Talk that has been viewed more than two million times titled,

What to Trust in a Post-Truth World. He has a new book out titled May Contain Lies. And in it, he outlines something called the ladder of misinference, which is why he is joining us on this episode. I wanted to ask him, Alex Edmans, about this ladder. And I will do that. I will do that very thing in

just a moment, because first I feel like this is a great opportunity to lay some psychological foundations, not only for his ladder of misinference, but for something I've wanted to talk about on this show forever: what this whole inference thing is all about. Music

Inference in psychology and neuroscience and all the sciences that study the brain and mind refers to the guessing game played by your brain in the presence of every single thing you think, feel, perceive, and do.

I think a few years ago, this would be more difficult to explain, but with AI being so intensely hyped these days and part of our lives everywhere we turn, the concept of a large language model predicting the most likely next word in a sentence at scale is

a pretty good entry point to making sense of inference in the brain. Brains do this. They predict the most likely outcome, the most likely conclusion, the most likely result. All brains, from snakes to mountain lions to people. But unlike large language models, people can reason and love and experience emotions and contemplate meaning itself. But we do so with a lot of help from the portions of our brains that handle all of this predicting.
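
To make that next-word idea concrete, here is a minimal sketch in Python of prediction from prior experience: a toy bigram counter, nothing like a real large language model or anything used on the show. The tiny corpus and the predict_next helper are invented purely for illustration.

```python
# A toy illustration of "predict the most likely next thing" from prior
# experience. Real language models are vastly more sophisticated; this just
# counts which word most often followed each word in a tiny corpus.
from collections import Counter, defaultdict

corpus = (
    "the brain makes a guess . the brain makes a prediction . "
    "the guess feels like knowledge ."
).split()

# "Priors": how often each word has followed each other word in past experience.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if it was never seen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # -> 'brain' ("the brain" seen twice, "the guess" once)
print(predict_next("makes"))  # -> 'a'
print(predict_next("zebra"))  # -> None (no prior experience, no prediction)
```

The point is only that the guess comes from tallied experience, not from the world directly, which is the sense in which a brain's inference is a bet built on priors.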

Okay, so what is this thing? Inference is a cognitive process by which the brain, in the presence of sensory inputs and/or mental models, draws upon prior experiences, existing knowledge, and contextual clues to generate assumptions

and expectations and predictions about what has not yet happened or what has not yet been presented as evidence. We take all that and then we add our biases and goals and identities and concerns about our reputations and relationships and well-being and our desires and fears and traumas and all the rest to make a guess.

Psychologically, that guess feels like knowledge, but it's not. It's just a guess. Neurologically, those guesses can come across as straight-up reality. For instance, there's a portion of your retina where the optic nerve exits on its way to the brain.

And that results in a blind spot in your vision. But as a seeing person, when you look around and don't see a blind spot in your vision, even though there definitely is one there, that's because your brain fills it in by inferring what ought to be in that missing portion of your visual field. It's doing a little bit of that in your periphery and it's doing a little bit of that all the time. And in that blind spot portion of your vision,

you don't see what's there. You see what your brain thinks ought to be there. In a way, it's a guess based on context, based on experience, based on interpolation of visual data, but it's not real. And of course, nothing is real when you get down to it. When it comes to vision, it's all a simulation. But in this portion of the simulation,

you're not seeing something based off of actual inputs from the electromagnetic spectrum hitting the back of your eyeball. It's all coming from inside, like a dream. In moments of intense ambiguity, the brain will do this sort of thing outside of the blind spot. We've discussed this a few times on the show via the example of the dress. It's the one that some people see as black and blue and others see as white and gold.

What's happening there is that, in that famous photo, the dress seems overexposed. When your brain assumes something is a bit overexposed, it will reduce that overexposure before you experience it in consciousness. If it's too blue, your resulting experience will be a little bit less blue. If it's too yellow, your resulting experience will be a little bit less yellow.

With the dress, the nature of the overexposure is almost perfectly ambiguous. It's unclear whether it is exposed more so in natural light or more so in artificial light. Natural light tends to be a bit more blue. Artificial light tends to be a bit more yellow. So the more experiences you've had over your lifetime in which you've seen objects overexposed in natural light,

the more likely your brain will assume the dress is overexposed in natural light and thus will remove the blue tint, resulting in white and gold. The more experiences you've had over your lifetime in which you've seen objects overexposed in artificial incandescent light, the more likely your brain will remove the yellow tint, resulting in black and blue. In both cases, in a moment of ambiguity,

Your brain disambiguates the ambiguous before it reaches your conscious experience by making a guess based on your prior experiences or, as they say in psychology and neuroscience, your priors. An enormous amount of our day-to-day lives are built on disambiguation via our priors. If you hear a knock at the door, you will infer the origin of those sounds based on context, on expectations, on prior experiences.

You will then infer what will happen if you take all manner of possible actions based on your inference. And then you'll produce a complex output of inferences, of inferences, of inferences, of inferences, all the way down. That, in the end, will not feel complex when you choose how to react.

As you grow from baby to child to wherever you are now, all your billions of experiences, all the causes that regularly have led to effects, they all become a foundation of priors you use mostly unconsciously to generate the pattern recognition that you use mostly unconsciously to produce probabilistic predictions, expectations, and explanations, mostly unconsciously. Most of our cognition is inferential in this way.

The world is too complex, too fast, too ambiguous for the brain to process it in real time. Instead, it constantly generates hypotheses, tests those against inputs, and then updates them so that we can make more and more useful assumptions.
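
One way to picture that hypothesis-testing-and-updating loop is Bayes' rule, where a prior belief gets combined with how well each hypothesis explains the input. Here is a minimal sketch using the dress example from above; the probability numbers are invented for illustration, not taken from any study.

```python
# A minimal sketch of prior-driven disambiguation via Bayes' rule, using the
# dress as the running example. The numbers are invented; the point is that
# when the input is perfectly ambiguous, the prior decides the interpretation.

def p_natural_light(prior_natural, like_natural=0.5, like_artificial=0.5):
    """Posterior probability that the photo is overexposed in natural light,
    given an image that is equally consistent with either kind of light."""
    numerator = prior_natural * like_natural
    denominator = numerator + (1 - prior_natural) * like_artificial
    return numerator / denominator

# A viewer whose lifetime of experience is mostly daylight overexposure:
print(p_natural_light(prior_natural=0.8))  # 0.8 -> assume a blue cast, strip it, see white and gold

# A viewer whose experience is mostly artificial, incandescent light:
print(p_natural_light(prior_natural=0.3))  # 0.3 -> assume a yellow cast, strip it, see black and blue
```

Because the likelihoods are equal, the posterior simply equals the prior, which is the whole trick of the dress: the image itself doesn't settle the question, so your accumulated priors do.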

This all leads to a term in psychology that isn't really a term in psychology, but is sometimes used by psychologists and that I love, called the makes sense stopping rule. The makes sense stopping rule is a cognitive shortcut, a heuristic by which your brain stops seeking more evidence, more sensory input, more data, more contemplation,

once it reaches a conclusion, an inference, that it feels is plausible. And as a social primate, plausible often means that which you feel other people will consider to be reasonable, in the sense that if you have to defend yourself, it will be supported by good reasons. Not that any of that is usually conscious, but it is how the processing seems to flow.

The problem is, thanks to the makes sense stopping rule, you will stop seeking more information when something makes sense to you, regardless of whether it's actually true. And that can lead to a false sense of confidence in your decisions and or a false sense of understanding when it comes to a topic or a situation.

The more formal term for all these mental phenomena coming together in sort of a makes sense and stop looking for more stuff kind of way is satisficing. When it comes to satisficing, we tend to take on a thinking strategy that aims for a satisfactory outcome instead of a perfectly optimal one.

Whether shopping for socks or making plans for dinner or making plans for your future career or home or relationship, we tend to commit to a decision when we meet our individual threshold of acceptability.
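
As a sketch of the difference, here is a hedged, made-up example in Python: a satisficer takes the first option that clears a personal threshold and stops looking, while an optimizer inspects everything. The sock scores and function names are invented for illustration only.

```python
# Satisficing versus exhaustive optimizing. The "score" stands in for how
# acceptable an option feels; the numbers are made up.

def satisfice(options, threshold):
    """Return the first option that clears the threshold, then stop looking
    (the makes sense stopping rule in miniature)."""
    for name, score in options:
        if score >= threshold:
            return name
    return None  # nothing was good enough

def optimize(options):
    """Examine every option and return the best one: slower, but exhaustive."""
    best_name, _best_score = max(options, key=lambda item: item[1])
    return best_name

socks = [("plain gray pair", 6), ("wool blend pair", 7), ("perfect pair", 9)]

print(satisfice(socks, threshold=7))  # -> 'wool blend pair' (good enough, search ends here)
print(optimize(socks))                # -> 'perfect pair' (best, but costs a full search)
```

Speed over accuracy, exactly as described above: the satisficer never even looks at the third pair.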

The same is true of conclusions. The same is true of interpretations of news stories. The same is true of deciding whether or not we're going to share a bit of content in our information streams. And this is most likely because for most of our evolutionary history, we had neither the time, resources, nor information necessary to take an exhaustive,

extra super rational, unbelievably scientifically sound path toward plausibility. And so today we are biased toward speed, not accuracy. So yeah, many systems, biological in nature, combine efforts inside your brain to produce what we might call the architecture of assumption.

Its output is not perfect, and it doesn't aim to be perfect. It aims to be good enough. And most of the time, it is. Most of our inferences are good enough. But here's the thing. When they aren't good enough, we often don't know they're not good enough. And not only do we not know they're not good enough, we believe, we infer that they are good enough. As Mark Twain once said, it ain't what you don't know that gets you into trouble.

It's what you know for sure that just ain't so. Which is a quote I used to use many times and cite Mark Twain as the origin, but when I fact-checked it, it turned out that, yeah, he never said that. It just made sense to me that he did, and then I shared it believing he had.

After the break, we will discuss how this all contributes to the ladder of misinference, and what we can do to avoid the mistakes it produces, with economist Alex Edmans, author of May Contain Lies, a book about that ladder and all manner of ways we get fooled by misinformation. Music

Discover a spectacular oceanfront destination with crystal blue seas, 360 days of sunshine, and the cool Bahamian breeze. Bahamar, located in Nassau, Bahamas, offers your choice of three luxury hotels and over 45 fine dining and nightlife options. You'll find a 15-acre tropical water park, John McEnroe Tennis Center, Jack Nicklaus Signature Golf Course, and a great place to stay.

Jon Batiste's all-new jazz club, and the Caribbean's largest casino. Visit Bahamar.com today and discover your next vacation. If your small business is booming and ready to expand, you might say something like... Booyah!

Crushed it. But if you need someone who can actually help protect your growing business, just say, Like a good neighbor, State Farm is there. And just like that, your State Farm agent can help you get the coverage you need for your new space. For your small business insurance needs, like a good neighbor, State Farm is there.

The School of Thought. I love this place. I've been a fan of the School of Thought for years. It's a non-profit organization. They provide free creative commons, critical thinking resources to more than 30 million people worldwide. And their mission is to help popularize critical thinking, reason, media literacy, scientific literacy, and a desire to understand things deeply via intellectual humility.

So you can see why I would totally be into something like this. The founders of the School of Thought have just launched something new called Kitted Thinking Tools, K-I-T-T-E-D Thinking Tools. And the way this works is you go to the website, you pick out the kit that you want and

There's tons of them. And the School of Thought will send you a kit of very nice, beautifully designed, well-curated, high-quality prompt cards, each one about double the size of a playing card, printed on matt cello 400 GSM stock.

and a nice magnetically latching box that you can use to facilitate workshops, level up brainstorming and creative thinking sessions, optimize user and customer experience and design, elevate strategic planning and decision-making, mitigate risks and liabilities, and much, much more. And each kit can, if you want to use it this way, interact with this crazy cool app.

Each card has a corresponding digital version with examples and templates and videos and step-by-step instructions and more. You even get PowerPoint and Keynote templates.

There's so many ways you could use this. Here's some ideas. If you're a venture capital investor, you could get the Investor's Critical Thinking Kit and use it to stress test and evaluate different startups for Series A funding. If you're a user experience designer, you can get the User Design Kit to put together a workshop with internal stakeholders for a software product. Or if you're an HR professional, you could mix and match these kits to create a complete professional development learning program tailored specifically for your team over the course of the next decade.

So if you're the kind of person who is fascinated with critical thinking and motivated reasoning and intellectual humility and biases, fallacies, and heuristics, you know, the sort of person who listens to podcasts like You Are Not So Smart, you're probably the kind of person who would love these decks. If you're curious, you can get a special 50% off offer. That's right. Have a look.

Half off offer right here. You can get half off of one of these kits by heading to kitted.shop, K-I-T-T-E-D.shop and using the code SMART50 at checkout. That's SMART50 at checkout. 5% of the profits will go back to the School of Thought. So you're supporting a good cause that distributes free critical thinking tools all over the world on top of receiving a set of thinking superpowers in a box.

Check all of this out at kitted.shop or just click the link in the show notes. And now we return to our program. I'm David McRaney. This is the You Are Not So Smart podcast. Our guest in this episode is Alex Edmans, a professor of finance at London Business School.

In his book, May Contain Lies, he examines how narratives, statistics, and studies can mislead us if we're not really aware of how inference works and where we are on a ladder of potential misinference, especially when we align our preconceived notions with the thing that we are scrutinizing, or most likely not scrutinizing if it

confirms our preconceived notions, our biases, our priors, and what we would like to be true, what we wish was true, or what just seems to make sense. All right, here is my interview with Alex Edmans. Let's just start from like first principles here.

What is confirmation bias, Alex? As if it's the first time anyone has ever heard this term before. What are we talking about? It's the temptation to accept a result uncritically if it confirms what we want to be true and to reject a result out of hand if we don't like what it says.

So that's what I'd call biased interpretation. So that's one part of confirmation bias, which is that we respond to information we receive based on whether we like it. But there's also a second part, and therefore it's important that you ask for the definition. The second part is biased search.

So the information that we look for to begin with is information that we'll like the sound of. So if we're on the right wing, we might only look at Fox. If we're on the left wing, we might look more at MSNBC. So it's not only about the interpretation of information, but the search for information. Okay, there's a couple things in there I want to pull out briefly. One is the term like and want and agree with, like...

If we come across information and it confirms something we want to be true, that word want is humongous to me. And I want to get a little bit deeper in there. What do you mean by want? And what is it that's fueling and motivating and driving this wanted to be true statement? Yeah.

Yeah, so this is important. So want is, there could be, you want it to be true because there are clear, tangible benefits from this. So if I'm a proponent of sustainable investing because I've written books about it, I'm an advisor to sustainable investing firms, I would like results claiming that sustainable investing pays off. But also this idea of want can be much, much weaker and much more subtle.

we might, quote, want something to be true just because it confirms our worldview. And so there might not be anything in it for me, but if it confirms our worldview, we might accept this. And so this is really important because we might think confirmation bias only applies to huge things where there's a lot of skin in the game, or if it's ideological, like your views on immigration or abortion. But even something small, such as we think that something natural is better than something artificial, that is sometimes enough to trigger us to be

exhibiting this bias. You use these two terms, like you talk about naive acceptance of things, which is a really good way to put it. I love that. But also this blinkered, or as I've often heard, selective skepticism. So I'm looking over things and I see things, okay, well, I'd really rather that not be so. And this other side of the coin, I love how this comes into play. It could be something where you're actively searching, you're Googling up, you're Wikipedia-ing,

But it also could just arrive. It could just land in your lap. You know, you open the newspaper metaphorically because that rarely happens anymore. But information arrives somehow. And there are times when you're like, that sounds true to me. And there are other times you're like, hmm. And all of a sudden you become skeptical and it's selective or blinkered as you put it. Let me hear a little bit more about that.

Yeah, so one example is the Deepwater Horizon disaster. So there, there was an inconvenient truth, which was the rig was not safe. And so you could not remove the rig without there being a potential explosion. They did a test known as the negative pressure test. They did the test three times. Every time it failed, they just didn't want that to be true. So they thought of another explanation

to explain why the test was not reliable in that circumstance. They invented a different test that they ran instead; that different test passed, and so this led to the disaster.

Another example could be Silicon Valley Bank. So their own models predicted that because they'd gone so into treasury bonds, then they would have huge losses if interest rates rose. They didn't want that to be true. They enacted blinkered scepticism and they said, well, let's try to come up with a different model to give a better answer that our bank was not at risk.

So this is dangerous because you might think, well, these biases, only dumb people should fall for it. I'm a smart listener to your podcast. I will never fall for this misinformation. But actually, some evidence suggests that smarter people are more likely to fall for misinformation because the smarter you are,

the more able you are to engage in what's known as motivated reasoning, you come up with reasons to excuse why a particular piece of evidence should be dismissed. Yeah, the evidence is pretty clear. And yes, there's many meta levels to our conversation, considering that we are discussing evidence and how it may or may not confirm our hypothesis. But the evidence so far is pretty strong that the smarter you are and the more educated you become,

The, it's just, you're just becoming better at justifying and rationalizing, which leads to the bizarre downstream effects of what you're discussing. So I love that you point this out too, very early on in the book, um,

I think this is a pretty good introduction to confirmation bias. It reminds me of something Dan Gilbert said when he was trying to make the shortest possible metaphor: if you step on a scale and you like what it says, you just go about your day. If you step on a scale and you don't like what it says, you step on and off about five more times. That's a great way to put it. And also I love that you mentioned in the book the research,

the scientists who did the fMRI and had the strong, oh no, I'm getting attacked by a bear feeling when their amygdala lit up in the presence of information they didn't like. I got to interview them right after they did that research. And at the time, they had no idea why. They were just like, look, I have this data and I don't know. All I can tell you is I have this data.

Okay, so we have confirmation bias. These are the big two, and you open the book with the big two. The other is black and white thinking. You really get into it and demonstrate how dangerous and difficult this can be. For people who think they know what we're talking about or have never heard this before, what are we talking about when it comes to black and white thinking?

So this is the idea that something is either always good or always bad. There's no shades of grey. For example, this was how Atkins approached the Atkins diet. He said carbs are always bad. And this is black and white in many ways. Number one, it's black and white in that it's always bad. There's never any moderation. So it's not that carbs are OK as long as they're 20 percent of your daily calories. No, have as few carbs as possible.

But it's also black and white in that it bunches all types of carbs under the same umbrella. It doesn't matter whether it's refined sugar or simple carbs or complex carbs. Anything which is called carbs is bad. And so why is this so appealing? Well, it's just easy. And we like simple shortcuts and heuristics. It makes our complex world simple. And therefore, anything which gives us simple advice like eat as few carbs as possible, that's something that can catch on because it accords with our biases.

I dig how you describe it. You describe it as, you know, why would we go this way? Because as you write, we like things that can be learned and applied quickly. This is the reason we have so many of these heuristics. And a speedy, good enough thing is often better than a complex, nuanced, I have to really think about this to achieve something approaching total accuracy thing.

But you add something to that that I had never seen before. And I think it's so easy to just say the world is made of shades of gray and there's a gradient to things that it's a bit more complicated than that. And almost no matter what it is you're discussing, it's always more complicated because you could always go into more, more rich detail. But you add that before you hand wave, uh,

with the phrase shades of grey, consider the concepts of moderate, granular and marbled. I would love to hear you talk more about those three things. Yeah, so these are three reasons for why something might be shades of grey. So I want to be precise as to why the world is not black and white. So the first is moderation, which is it's rarely the case that something is always bad or always good. It might be good only up to a point or bad only after a point. So one example is drinking water.

Particularly if you're running a marathon, water drinking seems to be a really good thing. You need to hydrate, but you can suffer from water intoxication. And sadly, this happens sometimes to runners where they're told hydrate as much as possible. They follow this advice dutifully. They hydrate a lot. And this leads to sometimes death because there's so much water that it dilutes sodium and other essential minerals to tiny concentrations.

So that's the idea of moderation. So something is not always in one direction. It might start off positive and then turn negative, like the effect of water on marathon performance. But the second idea is granular. So this means that there's not just one big bucket. Within that bucket, some things might be good and other things might be bad. So within carbs, yes, it may well be the case that simple carbs are bad, but complex carbs might be good for you.

Similarly, you learn at school that cholesterol is generally bad for you, but there's good cholesterol, which is high density lipoprotein. And so rather than the simple idea, let's avoid as much cholesterol as possible. There are some things that are good and some that are bad.

Finally, the idea of marbled is that certain things might be neither unambiguously good nor bad. So why I call it marbled is if you think about some marbled meat, it's got streaks of fat intertwined with it. So you don't know whether you're getting fat or whether you're getting muscle. And so, in my field of sustainable investing: is the fossil fuel industry necessarily bad?

Yes, climate change is a really important threat. But sadly, in many countries, we don't yet have enough renewable energy to get by without fossil fuels right now. In Africa, 600 million people don't have any access to electricity. So the idea that we should deprive them of fossil fuels and the potential for economic development is something which would be a drastic outcome.

So even cases in which we think that something might have no redeeming qualities, it might still have them. And so there's much more nuance needed, rather than never invest in fossil fuels, never hire somebody who's ever been in prison for your company. We can rehabilitate people who've been in prison. So it's just to encourage more nuanced thinking.

I had a guest on the show, Laurie Santos, who coined the G.I. Joe fallacy, which was knowing that knowing is not half the battle is half the battle. You do have to recognize the half part of that. So the knowledge is not enough. What are we going to do with it? What are good ways to apply the fact that we have learned there are more than 200 cognitive biases out there?

And most of them are just sort of variations or deeper explorations of confirmation and motivated reasoning and black and white thinking.

You have this beautiful thing and I love it very much. I'm going to share this with the world starting with this podcast, but I will cite you endlessly and show this to people every time I get a chance to talk about these topics going forward. I love your ladder of misinference. I could describe it and then have you tell me little things about it, but that would take away valuable opportunities for you to actually just get going. I will just say it's going to have these rungs. It starts with statement and goes up to fact, data, evidence, and proof.

What is this thing and how can we use this? I have other questions to follow up, but you can just get going. Thanks. So after highlighting the biases, I wanted to look at, well, how to address them. So how to correctly interpret information. So what I wanted to do was to categorize all of the types of misinformation out there into just four buckets for easy application. And so these four buckets are four steps on the ladder of misinference, which highlights the four types of mistakes that we might make.

So the first rung of the ladder is the difference between statements and facts. And I highlight that a statement is not fact because it may not be accurate. So what do I mean by this? So it may well be that you hear a famous quote from somebody, but that quote might have never been said by somebody. It might not be given the full context or it might be a misquote.

So we often think, well, if there's a footnote at the end of a sentence and there's a paper cited in the footnote, this must mean that the sentence is gospel. But it's not. We need to check the facts. Now, the second step of the ladder is a fact is not data. It may not be representative. So let's say something is absolutely true. We've checked. It still may be misleading because it could be a handpicked example.

So then you might think, well, isn't the best solution to that having hundreds of data points, so that you don't have one isolated example? But then that's the third step of the ladder, which is data is not evidence. It may not be conclusive. So what's the difference between data and evidence? So what is evidence to begin with? Evidence is something which points to one interpretation of the data and not the others. Even if you have a robust correlation

with tons of data points, it might not be causation; there could be other things going on. The final step of the ladder is evidence is not proof. It may not be universal. So evidence, even if it's watertight, only applies to one situation. I'm wondering, how do we apply the knowledge that a statement's not a fact, a fact is not data, data is not evidence, evidence is not proof,

If I'm out there Googling, what are some simple tips for applying this ladder to what I'm searching for?

So first, it's to see what step on the ladder you're on and then ask questions to make sure that you're not making a misstep to the next rung. So if you're given a story, a person started waking up at 5am in the morning and this changed their life, just look for the large scale. Look at, is this unique, or is there a large-scale study showing the link between waking up at 5am and changing your life? But if instead you were on the data rung of the ladder, and you had large-scale data,

then you need to ask yourself what are the alternative explanations for that correlation. So it could be that waking up at 5am does transform your life,

or that the people who choose to wake up at 5 a.m. are probably also eating healthily and exercising. And maybe those are the things that transform your life. So try to look at the alternative explanations. You have so many great examples, too, like a smoker who lives to 100. How often does this happen? Or eating whole grains leads to people being less likely to have heart disease. Someone's telling you that.
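
That "alternative explanations" step is easy to show with a toy simulation, hedged heavily: every probability below is invented, and the code is only a sketch of how a hidden third factor can make waking at 5 a.m. look beneficial even when it does nothing by itself.

```python
# A toy simulation of a confounded correlation. Here waking at 5 a.m. has zero
# direct effect on the outcome; health-conscious people simply tend to both
# wake early and exercise, and exercise is what actually matters. Every
# probability below is invented for illustration.
import random

random.seed(0)
people = []
for _ in range(10_000):
    health_conscious = random.random() < 0.5                       # hidden confounder
    wakes_at_5 = random.random() < (0.7 if health_conscious else 0.2)
    exercises = random.random() < (0.8 if health_conscious else 0.3)
    thriving = random.random() < (0.6 if exercises else 0.3)       # only exercise matters
    people.append((wakes_at_5, thriving))

def thriving_rate(group):
    return sum(thrives for _, thrives in group) / len(group)

early_risers = [p for p in people if p[0]]
late_sleepers = [p for p in people if not p[0]]

print(f"thriving among 5 a.m. risers: {thriving_rate(early_risers):.2f}")
print(f"thriving among late sleepers: {thriving_rate(late_sleepers):.2f}")
# The gap in the data is real, yet the wake-up time causes none of it.
```

Large-scale data would still show early risers doing better here, which is why, on the ladder, data only becomes evidence once the alternative explanations have been ruled out.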

But is there something about the kind of person who tends to eat whole grains? I get this when it comes to searching. What about if you're just having a conversation with someone, or you're in a professional setting at a meeting, or you are making decisions as someone who is running things at an institution or a company? How, when the evidence

comes to you, not because you're actively searching for it, what are some good ways you could apply this ladder? Try to find out some counterarguments and some alternative viewpoints. So given that the world is not black and white, if there are only ever positive views on your potential strategy, try to get some negative ones. So when Alfred Sloan was CEO of GM, he closed a meeting by asking, are there any objections to my decision?

There were no objections. So he said, well, then I propose that we postpone the decision until the next meeting so that you have opportunities to come up with concerns.

So why that was a great example is that it highlighted that he recognised that he was not infallible. Not every decision he came up with would be 100% accurate. So if there was an absence of concerns, it wasn't because his decision was flawless. It's because people did not have the opportunity to poke holes in it.
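
For listeners who want the ladder in pocket form, here is one possible way to write it down as a checklist. The rung names follow the interview, but the wording of each question is a paraphrase, not the book's text.

```python
# The ladder of misinference as a checklist: each rung, the rung above it, and
# the question to ask before climbing. The wording paraphrases the interview.
LADDER = [
    ("statement", "fact",     "Is it accurate: actually said, in context, and checked?"),
    ("fact",      "data",     "Is it representative, or a handpicked example?"),
    ("data",      "evidence", "Is it conclusive, or could something else explain the correlation?"),
    ("evidence",  "proof",    "Is it universal, or does it only hold in one situation?"),
]

def question_for(claim_level):
    """Return the question to ask before treating a claim at `claim_level`
    as the next rung up."""
    for lower, upper, question in LADDER:
        if lower == claim_level:
            return f"Before treating this {lower} as {upper}: {question}"
    return f"No rung named {claim_level!r}; the rungs are statement, fact, data, evidence, proof."

print(question_for("data"))
# -> Before treating this data as evidence: Is it conclusive, or could
#    something else explain the correlation?
```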

Who do you hope reads this book? I mean, writing a book is a whole gigantic intellectual exercise that somewhere about halfway through you start thinking, maybe I shouldn't have done this. Now that you've done it and it's out there, who do you hope reads this book and what do you hope they get from it?

I'd love a broad audience to read the book. And if I was to be quite optimistic in my answer to the question, I'd say everybody. Now, that might seem really optimistic. And doesn't everybody want the whole world to read their books? But this is quite different from my normal writing. So my normal writing and my first book, it would only be read by finance and business people. It was squarely in finance and economics. But misinformation is something which applies in so many fields. And even if your day job,

does not involve information or data. You do this in your daily life. Anytime you pick up a copy of Men's Health or Women's Fitness or Runner's World, you are acting on the basis of research. If you think I'm going to train for my race, do I use high intensity interval training, low intensity steady state? That's important.

If I'm a mother or a parent, do I decide to breastfeed or bottle feed? That is important. Anybody who's interested in self-improvement, do you follow the advice to wake up at 5am every day or to do daily journaling or daily meditation? Again, these things, evidence can help you.

And you might think that my idea of trying to be scrutinizing and discerning, isn't that a lot of work? Don't we just want to relax and enjoy life? But these things are here to make your life easier. Because if you can find a more efficient way of training for your marathon or losing weight, that saves a lot of time. So I think discernment and being skeptical with information can let us live more freely and more fulfilled lives rather than being constrained by the universal black and white statements that a lot of these books try to put out.

That is it for this episode of the You Are Not So Smart podcast. For links to everything we talked about, head to youarenotsosmart.com or check the show notes right there in your podcast player. My name is David McRaney. I have been your host. In those show notes, you can find my book, How Minds Change, available wherever they put books on shelves and ship them in trucks. Details are at davidmcraney.com. And I'll have all of that in the show notes

as well, right there in your podcast player. On my homepage, davidmcraney.com, you can find a roundtable video with a group of persuasion experts featured in the book, talking all about it. You can read a sample chapter, download a discussion guide, sign up for the newsletter, read reviews, all sorts of things. For all the past episodes of this podcast, go to Apple Podcasts, Amazon Music, Audible, Spotify, or

youarenotsosmart.com. You can follow me on Twitter and Threads and Instagram and Bluesky and everything else that's like that at davidmcraney, at symbol davidmcraney. Follow the show at notsmartblog.com.

We're also on Facebook slash you are not so smart. And if you'd like to support this one person operation, no editors, no staff, just me, go to patreon.com slash you are not so smart. Pitching in at any amount gets you the show ad free, but the higher amounts, that'll get you posters and t-shirts and signed books and other stuff. The opening music, that is Clash by Caravan Palace. And if you really, really, really want to support this show,

The best way to do that, just tell people about it. Either rate and comment on it on all these platforms or just tell somebody directly, hey, you should check out this show. Point them to an episode that really meant something, that really connected with you. And check back in in about two weeks for a fresh new episode. ♪♪♪

With everything from great gas to great deals, when you start with Sitco, you're good to go. But good to go where? Good to go skydiving? Good to go for a shiatsu massage? To a balloon animal convention? To a hibachi restaurant? Good to go to a drum circle? Wherever the road leads, when you start with Sitco, you're always good to go.