You can go to kitted.shop and use the code SMART50 at checkout and you will get half off a set of thinking superpowers in a box.
If you want to know more about what I'm talking about, check it out, middle of the show.
Welcome to the You Are Not So Smart Podcast, episode 307.
My name is David McRaney. This is the You Are Not So Smart podcast. And let's open this episode about fake news with a question. Have you ever commented on a post or shared an article on social media or sent a link to a friend, only to later learn that it was either partially or completely fake? Like actual fake news. Has it ever actually tricked you just by seeming as though it was probably true?
Whether you've shared a bit of fake news or not, you've probably been exposed to quite a bit of it. And you've probably received a link from a friend or a family member or seen a post on their social media feed that was, to you, obviously fake, but to them, raised no skeptical alarms.
Back during the Obama years, back when social media had only been around for about a decade, there was this website named Literally Unbelievable that cataloged examples of people doing this. Specifically, it cataloged people mistaking satirical articles from The Onion for real news.
The Onion, if you are unaware, is a wholly fake news website that writes satirical articles and headlines for the sake of comedy. And the founders of Literally Unbelievable were inspired to start their website after noticing how many people were sharing and commenting on an Onion article on Facebook with the headline, Planned Parenthood Opens $8 Billion Abortionplex.
You can still read this article on The Onion's website. The lead paragraph reads, "Planned Parenthood announced Tuesday the grand opening of its long-planned $8 billion Abortionplex, a sprawling abortion facility that will allow the organization to terminate unborn lives with an efficiency never before thought possible."
And the article goes on to detail how the mall-like facility will feature coffee shops and bars and restaurants and retail stores, a ten-screen theater, and more. Thousands of people shared this article on Facebook, believing it was true. Thousands of people commented on Facebook underneath this shared article, expressing how angry they were that this sort of thing was happening in Obama's America.
Now, Literally Unbelievable has since ceased operations, but you can still check it out on the Internet Archive's Wayback Machine. I did that just now, and here are some of the more popular Onion articles people believed were true back in the 2010s. 42 million dead in bloodiest Black Friday weekend on record. Congress threatens to leave D.C. unless new capital is built.
Obama begins inauguration festivities with ceremonial burning of constitution. Scientists successfully teach gorilla it will die one day. Now, what I love about Literally Unbelievable, and it also makes me sad, but I do love this, is how it revealed just how quickly digital media adapted to commenting and sharing and subscribing.
Both legitimate and completely fake news websites learned very quickly how to generate powerful clickbait, the kind that bypasses our skepticism and encourages engagement, the kind that avoids the "there's no effing way" reaction and instead favors the "yeah, that sounds about right" reaction. There are many psychological terms for all of this, one of which is disconfirmation bias.
That's the human tendency to apply excessive skepticism and scrutiny to information that contradicts your beliefs while readily accepting evidence that supports them. It's the impulse to hold opposing ideas to an impossibly high standard of proof while accepting familiar beliefs without raising an eyebrow.
When it comes to news headlines and news stories and internet content in general, we are often, without our conscious awareness, quite selectively skeptical of incoming information. We selectively apply critical thinking, often accepting without question news stories that confirm our assumptions and attitudes and beliefs about the world, while powerfully scrutinizing news stories that seem to contradict our assumptions and attitudes and beliefs.
And there are many things that contribute to your skepticism or lack thereof, from topic to topic, moment to moment. But when it comes to political headlines, we seem particularly selective. Psychologists have conducted some fascinating research into all of this and landed on arousal as the number one motivation to share an article with others. It's the number one thing that is likely to bypass your skepticism.
The more a headline or article arouses you, the more likely you are to share it without fact-checking it first.
Arousal is the psychological term for when your autonomic nervous system gets triggered in a way that directs your attention to the matter at hand. And the emotions that most trigger arousal when it comes to news headlines are those that make the people you don't like, politically speaking, look bad in a humorous way, or the sort that make the people you do like look particularly good,
Or, and this is often the easier way to generate arousal, headlines that, politically speaking, generate fear and or anger. And the research into all of this is ongoing. Here are two example headlines from a very recent study into all of this. Imagine you are scrolling social media and you run across one of these headlines from what appears to be a reliable news source. Headline one says,
Trump says, I don't like poor people, during private meeting with business moguls. And here's another headline: Free stay for veterans at Trump hotel in Washington, D.C. In the study, they showed people headlines like these, these headlines in particular, and measured their likelihood of sharing them on their social media feeds or sending them to friends.
Now, both of these headlines are fake. One arouses people who don't like Donald Trump, and one arouses people who do. And in both cases, people not only believed these headlines, they said they would, yes, probably share them. And I'm wondering, what would you do? If you ran across one of these two headlines, would you...
Believe them? Would you share them? Because the researchers who wrote these headlines and presented them to study participants not only found that people were highly likely to share them depending on their political ideology, they also found something that I found rather arousing, psychologically speaking. So much so that I want to share with you what one of those researchers told me.
The more you believe that you're not going to do this, the more likely you are to do this.
That was the voice of Samuel Woolley, who is a scientist who studies propaganda and disinformation. More specifically, he studies how nefarious actors purposefully manipulate our thoughts, feelings, and behaviors using the tools of propaganda and disinformation. And he also studies who exactly is using such tools right now, why they're using them, and how. The more you consume only one perspective or one silo of information, the more you think that you are more objective, because you're only seeing that one side. That's the voice of Katie Joseff, who is also a scientist who also studies propaganda and misinformation.
Specifically, she studies the neuroscience of how brains interact with misinformation and propaganda delivered via digital media and algorithmically generated content streams. And as Katie Joseff just described, the more a person becomes informationally siloed, and thus, in effect, the less objective they become, oftentimes that results in a perception on their part that they are becoming more objective. The people who, in a sense, believed that they were the most objective and least biased when it came to their being influenced by political concordance, when it came to their partisan bias, they were the most biased and least objective. That's Michael Schwalbe, a social psychologist at Stanford who studies polarization, disinformation, and the
negative cognitive impact of inequality, and the effectiveness of interventions aimed at defending against all of those things. In this whole bigger conversation we're having around this enshittification, or AI slop, or the increasing fracturization or fragmentation of the information ecosystem, it is concerning to say, okay, I've been following this one influencer or this one thread of information, and I've watched every single one of their videos, so I do think that they're authentic, or, I've explored so many angles of this one topic area. But you're in that one little silo and you think that you're more and more objective, so you're not encouraged to look further. In this episode of the You Are Not So Smart podcast, we sit down with three researchers, all of them scientists who study disinformation and propaganda, whose new paper found something surprising. Sure, nefarious actors can leverage our confirmation bias, our propensity to believe in and share information that confirms our assumptions and worldview, but this new research shows that our disconfirmation bias must also be taken into account.
After the break, we meet these scientists, ask them why they conducted this research, learn how they conducted it, and ponder the major takeaways, including what we should and can do about all of this. All that after the break. I know for sure that most studies show that new habits fail simply because you don't have a plan.
And without a plan, it's really hard to stick with a new habit or routine. That's why Prolon's five-day program is better than any trend out there. It's a real actionable plan
for real results. Prolon. It was researched and developed for decades at USC's Longevity Institute. It's backed by leading U.S. medical experts. Prolon by L-Nutra is the only patented fasting-mimicking diet. Yes, fasting-mimicking.
You're going to trick your body and your brain into believing that you are fasting. And when combined with proper diet and exercise, it works on a cellular level to deliver potential benefits like targeted fat loss, radiant skin, sustained weight loss, and cellular rejuvenation.
So I received a five-day kit from Prolon. And let me tell you, right away, I knew this was going to be fun and fascinating. The kit comes in this very nice box with this very satisfying Velcro latch. They did a great job with this
presentation packaging. And inside, on the underside of the lid, you get this message, often attributed to Hippocrates, about letting food be thy medicine. And below that, a QR code that takes you to your personal page for guidance, tips, and tracking. And then under all that, each day's food, snacks, vitamins, and supplements packaged within its own separate box. Each day has its own box. And when you get in there, right away, it looks like it's going to be easy and fun. And it
was both of those things. I was totally willing for all this food to taste bland and boring in service of the concept, in service of the mimicking of a fasting experience. But it turns out it all tasted great. And day one,
is all set up to prepare your body for the program. And they give you everything you need to stick to it. It's very clearly designed by people who know what they're doing. And Prolon tricks your body into thinking you aren't eating. But here's the thing. It works without being painful because you do get to eat it.
You eat all sorts of little bits and bobbles that are like minestrone soup and these crunch bars and olives. And there's so much stuff in each day's kit. I've got one, I got it right here. Almond and kale crackers and a Choco Crisp bar, an intermittent fasting bar, minestrone soup, algal oil. I love how right away I was like, oh, I can't wait to try this stuff out. And yes, you do get hungry, but not nearly as hungry as I thought you would. And most importantly, by day three, I noticeably felt great. I had this, oh, I'm in on a secret feeling. And when it comes to hunger, by day five, I didn't feel like I was missing out on anything at all. The hunger was very minimal. And by the end of all this, my skin looked noticeably glowy and healthy. And yeah, I lost six pounds. Six.
It was easy to follow and I felt reset afterwards, like I had rebooted my system. To help you kickstart a health plan that truly works, Prolon is offering You Are Not So Smart listeners 15% off site-wide, plus a $40 bonus gift when you subscribe to their five-day nutrition program. Just visit
ProlonLife.com slash Y-A-N-S-S. That's P-R-O-L-O-N-L-I-F-E dot com slash Y-A-N-S-S to claim your 15% discount and your bonus gift. ProlonLife.com slash Y-A-N-S-S.
TaxAct knows you probably don't need help filing taxes. But if you get stuck, we have live experts you can talk to. And who knows, you could hit it off and become long-term tax friends. Staying up late at night, talking about deductions, refunds, personal exemptions. Heck, you could even fall in love and create a little dependent of your own one day. Or they could just answer your filing questions.
TaxAct. Let's get them over with.
Okay, here's the thing I was talking about at the very beginning of the show before the show started. The School of Thought. I love this place. I've been a fan of the School of Thought for years. It's a non-profit organization. They provide free Creative Commons critical thinking resources to more than 30 million people worldwide. And their mission is to help popularize critical thinking, reason, media literacy, scientific literacy, and a desire to understand things deeply via intellectual humility. And so you can see why I would totally be into something like this. The founders of the School of Thought have just launched something new called Kitted Thinking Tools, K-I-T-T-E-D Thinking Tools. And the way this works is you go to the website, you pick out the kit that you want, and there's tons of them, and the School of Thought will send you a kit of very nice, beautifully designed, well-curated, high-quality prompt cards, each one about double the size of a playing card, on matte cello 400 GSM stock,
and a nice magnetically latching box that you can use to facilitate workshops, level up brainstorming and creative thinking sessions, optimize user and customer experience and design, elevate strategic planning and decision-making, mitigate risks and liabilities, and much, much more. And each kit can, if you want to use it this way, interact with this crazy cool app.
Each card has a corresponding digital version with examples and templates and videos and step-by-step instructions and more. You even get PowerPoint and Keynote templates.
There's so many ways you could use this. Here's some ideas. If you're a venture capital investor, you could get the Investor's Critical Thinking Kit and use it to stress test and evaluate different startups for Series A funding. If you're a user experience designer, you can get the User Design Kit to put together a workshop with internal stakeholders for a software product. Or if you're an HR professional, you could mix and match these kits to create a complete professional development learning program tailored specifically for your team over the course of the next decade.
So if you're the kind of person who is fascinated with critical thinking and motivated reasoning and intellectual humility and biases, fallacies, and heuristics, you know, the sort of person who listens to podcasts like You Are Not So Smart, you're probably the kind of person who would love these decks. If you're curious, you can get a special 50% off offer today.
That's right, half off, right here. You can get half off of one of these kits by heading to kitted.shop. That's K-I-T-T-E-D dot shop, and using the code SMART50 at checkout. That's SMART, five, zero, at checkout. 5% of the profits will go back to the School of Thought, so you're supporting a good cause that distributes free critical thinking tools all over the world, on top of receiving a set of thinking superpowers in a box. Check all of this out at kitted.shop, or just click the link in the show notes.
And now we return to our program. My name is David McRaney. This is the You Are Not So Smart podcast. And in this episode, we are exploring a newly named cognitive distortion. It's called the concordance over truth bias.
And our guests on this episode are three of the scientists whose new paper outlines how it works. One of those scientists is Samuel Woolley. I'm Sam Woolley. I am a professor over at the University of Pittsburgh. I just landed here after spending five or six years at UT Austin. I am here at Pitt. I am what's called the Dietrich Chair of Disinformation Studies here.
So that means I focus on the purposeful spread of false information. Broadly speaking though, my research and work has been on propaganda online. So the ways in which propaganda spreads online with a particular focus on what my colleagues and I call computational propaganda.
the involvement of algorithms, AI, bots, anything computational in spreading manipulation of public opinion. So we wrote a book called that a while back, and I have another book that came out last year, along with a few others, called Manufacturing Consensus, which riffs on Herman and Chomsky's work on manufacturing consent, but for the digital age. So that's me in a nutshell.
I really am interested in studying the producers of propaganda and disinformation, but also the impacts of it too. Another one of those scientists is Katie Joseff. I'm a former misinformation researcher, though I occasionally still do research. The long and short of it is I'm very interested in how people make decisions. And so when I was an undergraduate, I studied social neuroscience and then also international security, to understand the micro and the meta mechanisms of why we are making the decisions we're making in society.
And that led me to doing my master's looking at this topic, like how partisanship impacts perceptions of misinformation, because I saw very early in the literature that people don't act based on their belief systems per se, but most often based on social norms.
And the mechanism through which social norms are manipulated most effectively and easily and, potentially, cheaply, depending on how you're doing it, is through manipulation of the algorithms that shape discourse in the information ecosystem. And the third scientist who will tell us all about this research is Michael Schwalbe. Yeah, so my name is Michael Schwalbe. I'm a postdoctoral fellow in the psychology department at Stanford University, and my research looks at the cognitive processes in polarization, misinformation, and economic hardship, and interventions to change these processes. There's a fourth scientist who also worked on this paper, psychologist Geoffrey Cohen, but I just didn't have time to add him to the interviews. We will get into the research and their takeaways in just a moment, but first...
I wanted to know how this became a topic that Samuel Woolley, Katie Joseff, and Michael Schwalbe wanted to study. Katie is brilliant. She worked with me first at the Institute for the Future and then at the University of Texas as a researcher on my team, as the research lead on my research team. And this was originally conceived of as her master's thesis at Stanford. She was really interested to know whether or not truth actually mattered when people were reading the news. We all have deep experience with propaganda. Every single person. That's part of the reason I've loved studying it, is because a lot of my work, actually, some of it was quantitative and survey-based, but a lot of it was just interviewing all different types of people and hearing people's perspectives. And I always learned something new, because everyone's being targeted by all these different types of messages that are shaping our behavior, myself included. And so I just wanted to know, I wanted to talk to people about, like, why do you believe that? Or, like, why are you creating a, you know, AI propaganda tool? Or why are you... you know, mostly it has to do with money. At the end of the day, a lot of the manipulation of the information ecosystem, people are getting paid to do it or they're getting compensated in some way
through non-monetary means. And it's just really interesting to study all the different levers that shape our behavior and social norms at scale without us even realizing that we're being shaped. Yeah, so there were two motivations or inspirations for the paper. The first was actually a colloquium talk that we saw about misinformation where the speaker claimed that politics didn't really matter.
that much in determining whether people believed fake news and that the problem was more an issue of people being like cognitively lazy. People just weren't like engaging in deep enough cognitive processing.
And the researcher was presenting on what was a really big part of the field at the time. And it didn't really cohere with our priors. And the methods that the researchers were using in this talk didn't, to us, seem optimal. So we ran an exploratory study using the same research paradigm, except we tried to address some of these methodological concerns in the previous research. And in our pilot, we found the opposite results, namely that politics did matter a lot for people's beliefs. And so then we conducted this preregistered, scaled-up version of the study that became this paper. I also asked Woolley, Joseff, and Schwalbe why this kind of research was important right now, in this strange time for both politics and journalism, but also for algorithms and just information exchange in general. Well, look, like, you know, mis- and disinformation have always been problems and politicians have always lied. But there are some things that are very substantively different about the nature of false information and lies in our current world. One is social media, right? Like, and the internet more broadly. The way that we receive our information
is relatively unfettered in a lot of senses. Anyone can produce content on many of these sites. And so for a long time, there was a perception, long time relatively speaking, I guess, there was a perception that these tools would be great for democracy and for learning. Like you could go anywhere and get whatever information you wanted. You could use social media to spread your take on things.
It just didn't bear out to be that way. After the Arab Spring and Occupy Wall Street, if people can remember back to those times when everyone was celebrating social media as this massive democratic organizing force,
people started to realize, like, you know, social media has actually been co-opted for control by a lot of different groups, and especially powerful political groups and powerful corporate actors. They understand how to leverage the noise factor of social media, the amplification factor, the suppression factor of social media, to spread particular messages.
And so we get our information much more quickly than we ever have before. We don't have very good verification processes for that information. Yet at the same time, we're being told by the powers that be that somehow this is better for free speech. And when I, you know, after 10 or 12 years of studying this stuff, I've come to the conclusion that in reality, free speech doesn't actually exist online.
that there's the illusion of free speech, that there's some degree of free speech. Yes, anyone can post anything online, but when it comes to trends, when it comes to algorithmic control, when it comes to the most sophisticated information operations and propaganda on social media, it's still the most powerful political groups and corporate groups that are able to most effectively control the message. And so the open marketplace of ideas, this thing where
the best ideas are meant to rise to the top, doesn't really seem to exist online. It's just the illusion of that. And so free speech is a very useful stand-in for what actually is quite potent control of the informational environment by some very powerful people. The main concern I have, my belief, and many people share this, is that the reason we're so fractured right now is because of this prioritization of our information exchange that's based on the extraction of our data and attention. We have built this surveillance system that rivals countries like China, where it has been, and continues to be, used to carry out genocide against the Uyghur population and others. In the US, we've built a similar surveillance system, but for the sake of targeted advertising.
But interestingly, in my research and my perception, but of course I have a bias, so maybe I have a filter, but targeted advertising has been found not to be effective. It's like it doesn't work for the user and it doesn't work for the buyer of the ads like these businesses. There was a study that had been done in the UK, it was around, I think, 2018, 2019, 2020. And
they looked at millions and millions of the most expensive targeted ads. And over 50% of the money was lost to data brokers, where they're like, we don't know where that money went. And then ultimately it only reached, like, I think it was 12% of impressions. So we built this whole system that doesn't actually even allow the ads to get to the audience that they're supposed to get to. And now, with the expanding use of generative AI, more and more of these platforms, which are already dominated by fake accounts, are filled with uniquely persuasive fake accounts that are populated by generated content. It's very, very hard for companies to detect them, because it's not like one image is being used across hundreds of thousands of accounts. And it goes against the companies' bottom lines to detect all these fake accounts, because that's how they make all their money. One of the main metrics is daily active users, and AI-generated accounts are also very active. And so this whole infrastructure we built around surveillance and targeted ads is built on a lie, and it only benefits the companies that are brokering it, which is how they became some of the top companies in the whole world in terms of revenue: they're selling this lie that targeted advertising works, even though it's increasingly not working. And now we do see this convergence that many people had forewarned of, because we've seen it in other countries, where the technology is also merging now with the nation-state in some elements, and there's been open acknowledgment that they want to build more comprehensive surveillance systems.
And so I'm very concerned, because when we look at the capturing of information ecosystems throughout history, there wasn't this level of comprehensive surveillance in terms of, like, all of your correspondence, all of your location data, all of your internet search history. It's like, before, you may have been able to, like, hide a book or, you know, hide letters. I mean, I think of the many changes we've faced in society over the past half century, or at least the last couple of decades, and I think two of the biggest ones are the democratization of reporting, i.e., the rise of the internet, and the second is the huge increase in affective polarization.
So ideologically, we might not be that polarized. There's actually a lot of debate in the field whether ideological polarization has risen. But there is consensus that we have had a really big increase in affective polarization, i.e., the degree to which people feel divided and antagonistic towards one another along political lines. I mean, so much so that at this point, and this is research from Shanto Iyengar over at Stanford,
he's shown that of all the possible cleavages in society, you know, whether it's race or gender or other demographic factors, partisanship is now the number one cleavage in terms of what's dividing people, what's causing animosity. So I think these are two really big changes, really big phenomena in society. And I was really interested in research on how these two phenomena, in a sense, interact with one another. And, you know, I also think that the loss of
a shared sense of reality, and I forget who said that, but someone called this the tragedy of the epistemic commons. I think the loss of that shared reality with other people makes connection across partisan lines just, like, a lot harder. And I think that's a critical issue for tons of reasons.
Okay, let's get into the actual study, how it was done and what they found. This may come as a surprise, but heading into this, there was some debate in psychology among people who researched this sort of thing as to whether the truth eventually wins when it comes to disinformation and propaganda.
Here's Michael Schwalbe again. So the existing research up to now found that the effect of headline truth was about four times greater than the effect of headline political concordance. Or, said another way, some of the claims that were made were that, in a sense, truth trumps politics. And so in our hypotheses, based on our pilot and based on our priors, we predicted the opposite. Namely, we predicted that we'd find a significant effect of headline political concordance, i.e., partisan bias, in participants' beliefs in and reported likelihood of sharing partisan news headlines. You know, what matters more in believing and sharing political news, the truth or its concordance with our own political views? We also sought to unpack the effect of political concordance, or partisan bias, by seeing, was it stronger for true or fake headlines? Or, said another way,
to explore which was stronger, the acceptance of convenient falsehoods or resistance to inconvenient truths.
What we did was we went out and talked to people who were census-matched online adults. They gathered more than 1,000 proponents and opponents of U.S. presidential candidate Donald Trump, and they made sure, as a whole, they were a representative sample of the demographics of the United States population based on the census.
And they did all of this before the election. We showed them headlines ahead of the 2020 election that were both fake and real. And we wanted to figure out whether or not they would recognize that a headline was true regardless of their political affiliation. These participants were told they'd be taking a survey about news headlines to measure their recall of those headlines.
And the researchers did this to avoid priming these participants when it came to things like accuracy or their ideology or their identities. So whereas past research at the start of the studies would say, we're interested in seeing how accurate you rate each headline,
We instead used a cover story that the study was about recall. It was about memory. So we told them up front, you know, this is a study about memory. Would you be willing to commit to not, you know, writing down the names or using any memory devices, and just giving it your full attention? We framed it as kind of a reading comprehension, social media, memory study, as opposed to an evaluation of accuracy. So people weren't tipped off that they were supposed to be thinking about whether it was accurate.
They then presented these people with fake political headlines and real political headlines. Each participant read 16 headlines in all, one at a time, in random order. And...
After each of the 16 headlines came up individually, they would rate the headline, not just on how likely they thought the events in the headline were to be true, or how likely they would be to share it. They also indicated how interesting the headline was, how often they see headlines like that, or how they would respond to the headline with a bunch of different emoji, to give the sense that there were other factors going on here. Half of these headlines were about Trump, and the other half were just non-political filler headlines. And of the political headlines, half of those were positive, showing Trump in a favorable light. For instance, here's one of the headlines from the study.
Donald Trump, serious contender for Nobel Prize in Economics.
And half of these headlines were negative. They showed Trump in an unfavorable light. Like this one: Trump's former accountant says Trump is not a billionaire. So of those 16 headlines, eight were partisan and the other eight were filler headlines, to try to give it a good balance so they didn't think it was just about politics. And of the eight partisan headlines, half of them were favorable towards Donald Trump and the other half were unfavorable towards Donald Trump. Okay, now we're really getting into the actual study. So here's what these participants did. For each headline, they rated, one to five, how likely it was that the events described in the headline were true.
And then they rated, one to five, how likely they were to share the article with friends or family. And the researchers masked all of this by asking for ratings of all sorts of other stuff. Yes. So, it's basically, we had these four breakdowns of the headlines: you know, very true; true, but, you know, could be confusing to people; false, but, you know, could be confusing to people; and very false. And then we had people rate, like, how likely do you think this is to be true?
And we also had people rate how aligned each headline was with their political belief of being pro-Trump or anti-Trump. And it's worth noting that the true headlines in this study were very much true headlines, from places like Reuters and CNN. And for each true headline, the researchers independently fact-checked them, just to be extra sure they were indeed true. And then we wanted to
have different degrees of fakeness. So we had one category of fake headlines that were fake, but not, like, outlandishly fake. And then we wanted to kind of see if there was a limit to this effect. So we also created headlines that, at the time, you know, this was back in 2019 or so, to us, were clearly outrageously fake headlines. Like, we think this is probably a limit of the phenomenon. Yeah, for both the true and fake headlines, they varied how questionably true or outlandishly fake they were. One of my favorite outlandishly fake ones was Trump beats Grandmaster Chess Champion Magnus Carlsen.
And a less outlandishly fake headline was Donald Trump killed pedestrian while driving in 1973. And as mentioned earlier, a not very outlandishly fake headline was Trump said, I don't like poor people during private meeting with business moguls.
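By the way, if you are the sort of person who likes to see a design laid out explicitly, here is a rough sketch, in code, of how a stimulus set like the one they are describing could be assembled. To be clear, this is my own illustration and not the researchers' actual materials, and the way I cross the slants with the truth categories here is an assumption on my part.

```python
# A rough sketch of a stimulus set like the one described above.
# Illustrative only: the names, and the exact crossing of slant with
# truth categories, are my assumptions, not the researchers' materials.
import itertools
import random

TRUTH_LEVELS = ["very_true", "true_but_confusing",
                "false_but_confusing", "very_false"]

def build_stimulus_set():
    """Assemble 16 headlines: 8 partisan (half favorable to Trump,
    half unfavorable) plus 8 non-political fillers, then shuffle so
    they appear one at a time in random order."""
    partisan = [
        {"political": True, "slant": slant, "truth": truth}
        for slant, truth in itertools.product(
            ["pro_trump", "anti_trump"], TRUTH_LEVELS)
    ]  # 2 slants x 4 truth levels = 8 partisan headlines
    fillers = [{"political": False, "slant": None, "truth": "very_true"}
               for _ in range(8)]
    deck = partisan + fillers
    random.shuffle(deck)  # headlines were shown in random order
    return deck
```

The point of the sketch is just the proportions: half the deck is partisan, the partisan half is split evenly between favorable and unfavorable, and everything arrives shuffled.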
They also ran variations of this design just to make sure their methods were sound. All real headlines sometimes, different mixes of positive and negative, that sort of thing. We had different layers of serving different types of people, and had people in different conditions. So, like, one-fourth of the people were in the condition of only seeing true information, and then the rest saw a mixture of the true and false headlines, as well as some non-political filler headlines, which were real viral headlines. And actually, I feel like the headlines I remember the most from that study were the non-political filler, like the one about this penguin traveling thousands of miles to meet the man who saved him. And then we had people...
No, I remember from your study, the headline that Donald Trump beats Magnus Carlsen in chess. I'm still remembering that headline more than the other headlines. And that was so fun to come up with all of those different scenes, like, you know, Trump dressing like the Pope hat and orgy. Yeah.
It was just so funny. I remember, actually, between studies, and maybe this isn't right, but I remember there were a few headlines we actually had to change over time because they became true. We had to alter them. Not that one; the ones that became true were more on the fence. People saw the 16 headlines, they rated them.
And then afterwards, they did a memory test, to preserve the cover story, of how many of the headlines they recalled. And then after that, they did a number of surveys around their partisanship, their degree of objectivity illusion, their extreme views of Trump. So, how much did they think Trump was likely to solve problems like peace in the Middle East, or launch a nuclear war, you know, both positive and negative kinds of extreme views of Trump. Did they think Trump was a saint, or did they think he was a genius, and so forth. And then some other, you know, other survey measures, like one-sided media consumption and demographics. So what did they find? We've discussed this all over the place. At the end of the day, what were the findings of this study?
I think that people who are familiar with confirmation bias would initially go, well, yeah, you know, for some things. Because I think a lot of this research right now, when it makes headlines, ironically, it's about susceptibility to fake news. The idea being if you're in the tank for one side or the other, you'll be more susceptible to fake news that makes the other side look bad.
And sure, yes, you found that that's a thing that is true. That is a thing that people do. But the thing that I love about this is it reveals another thing, which people might not be aware of themselves. And instead of taking it away from you, I want to ask you: what did you find when it came to the way people respond to true headlines? Headlines that are true, but they don't make your side look good. You don't respond well. You know, even if a headline is true, especially if you're a very partisan person, especially if you have sort of, like, a one-sided media diet, even when a headline is true, if it doesn't jibe with your beliefs, you're very unlikely to believe it's true. Which would then translate to: you're unlikely to share it. You're unlikely to put it on your social media. Yes, exactly. So it means that if the article doesn't jibe with your political beliefs and your perspective, even if it's true, you're not going to share it. Or, to put it another way, the truth is inconsequential, according to this study.
I mean, maybe even a little bit surprising to us at the time, was this effect of political concordance was stronger than the effect of headline truth, up to kind of around two times the size of the effect. Concordance, in psychological terms, is the perceived agreement or consistency of novel information with your existing beliefs, attitudes, assumptions, and allegiances. They're more influenced by the concordance of headlines over the truth. And we describe that phenomenon as a concordance over truth bias. And so, concordance over truth is how people rated the veracity of the political headlines that were concordant with their anti-Trump or pro-Trump viewpoint, minus their rating of the veracity of the true headlines, regardless of how those aligned with their political beliefs. And...
When we looked at, okay, maybe our headlines aren't fake enough. Do we still see this with the outlandishly fake headlines? We found that it held up also that participants still rated outlandishly fake concordant headlines as more likely to be true and to be more likely to share them than real headlines that were discordant with their prior views.
So, the listeners can't see me, but I'm putting my head on my desk, because it's so frustrating as someone who teaches and studies journalism. Partisans were more resistant to information that was true, that did not line up with their perspective of politics or of the world, than they were to content that was extremely, clearly fake. We talked about how, in this study, the fake news ramped up and ramped up and ramped up, that even when the content became pretty clearly fake, to you or me perhaps, they still wouldn't share the true news over that fake content that jibed with their political beliefs. And also, another interesting little wrinkle in all of this, and maybe it's unsurprising, but I found it interesting, is that people tend to remember the fake stories much better than they do the true ones.
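And for anyone who wants the subtraction the researchers described a moment ago made concrete, here is a small sketch of how a concordance over truth score could be computed from a single participant's ratings. Again, this is my own back-of-the-napkin illustration of the logic, not the authors' analysis code, and the field names are mine.

```python
# A back-of-the-napkin sketch of the subtraction described above:
# the mean truth rating for politically concordant headlines minus
# the mean truth rating for the true headlines. Illustrative only;
# not the authors' analysis code, and the field names are mine.

def concordance_over_truth(ratings, participant_side):
    """ratings: dicts like {"truth_rating": 1-5, "is_true": bool,
    "slant": "pro_trump" | "anti_trump" | None};
    participant_side: "pro_trump" or "anti_trump"."""
    concordant = [r["truth_rating"] for r in ratings
                  if r["slant"] == participant_side]
    true_items = [r["truth_rating"] for r in ratings if r["is_true"]]
    mean = lambda xs: sum(xs) / len(xs)
    # A positive score means concordant headlines, true or fake, were
    # rated more believable than the true headlines were.
    return mean(concordant) - mean(true_items)
```

On this toy scoring, a partisan who rates concordant headlines around a 4 and the true-but-discordant ones around a 2 lands well above zero, which is the pattern the study reports.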
They also tested the participants' aptitude when it comes to something psychologists call cognitive reflection, a feature of human cognition we have covered quite a bit on this podcast. And they did this because some psychologists before this study took the position, based on research that suggests this is the case, that people don't fall for fake news because of motivated reasoning, but because they fail to reflect on their own thinking. Here's an actual question that psychologists sometimes use to score people on their cognitive reflection skills. If a hole is three feet around and three feet deep, how much dirt is in it? The answer is none. It's a hole. Another one is: if you are running in a marathon and you pass the person in second place, what place are you in?
And the answer is you are now the person in second place. And here is another. In a lake, there is a patch of lily pads. Every day, the patch doubles in size. It takes 48 days for the patch to cover the entire lake. How long would it take for the patch to cover half of the lake? The answer here is
If every day it doubles in size, then the day before it covered the entire lake would be the day it was half the size. So if it covered the whole lake on day 48, it covered half the lake on day 47.
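And if you like seeing that doubling logic written out, here is the same reasoning as a quick bit of math. The notation is mine, not anything from the episode.

```latex
% The lily pad doubling written out (my notation, not from the episode).
% If the patch doubles every day, its size on day d is
S(d) = S(48) \cdot 2^{d-48},
% so one day earlier it was half as big:
S(47) = \tfrac{1}{2} S(48).
```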
So yes, before the study we are discussing in this episode, when it comes to fake news, some psychologists were of the opinion that it all boiled down to lazy thinking and that people were more likely to believe and share disinformation when they felt most safe and most permitted and most encouraged to be lazy in their thinking, to just trust their intuition.
But that was not what the researchers in this study found. When it came to how people rated these headlines, they found that it wasn't partisanship, education, or cognitive reflection skills that most predicted how people would respond.
You looked at education levels, you looked at analytic reasoning ability, and you looked at some measures of partisanship. What seemed to be the predictors of this bias? What was the most predictive of these things? Yeah, it's a great question. So it was robust across low, medium, and high education levels. We found it persisted even amongst people with advanced degrees.
So education did not interact with or buffer people from this concordance over truth bias. When we looked at analytical reasoning, we similarly found that concordance over truth bias was similarly robust amongst people who were low in analytical reasoning and people who were high in analytical reasoning or depth of cognitive processing. It was pretty robust across demographic factors,
And when we looked at ideology, we found it to be prevalent across both sides of the political aisle. So people have different ideologies, but they share a tendency to be more influenced by political concordance over factual accuracy, at least in our data. Of the top three predictors, the number one was the objectivity illusion, the degree to which people believed in the objectivity and lack of bias of their own political side relative to the other side. The people who, in a sense, believed that they were the most objective and least biased when it came to their partisan bias, they were the most biased and least objective.
The key finding of the paper is that people who have a strong objectivity illusion and strong one-sided media consumption show this concordance over truth bias the strongest. And it's that cycle that you're talking about, where it's like, even when you do encounter information that is factual but not congruent with the belief that you're developing, you dismiss it more than the false information that confirms your belief. So I think this study shows a lot of what we're seeing generally in the information ecosystem. The political concordance of news headlines determined people's belief in, and intention to share, news stories more than the actual truth of the headlines did. Participants were more likely to reject true headlines that seemed discordant with their political beliefs than they were to accept false headlines that seemed concordant with their political beliefs. In other words, you are more likely to be skeptical of true news that makes your side look bad than you are to believe fake news that makes your side look good. I think it makes a lot of sense.
Because, you know, the strongest cognitive bias we all have is, you know, confirmation and truth bias. Like, we believe ourselves, just with motivated reasoning, as we referenced, like, we believe ourselves to be consistent people. And we also, in our world, we're, like, taking things in with that filter. Because also, if you don't have a strong truth bias, that's not a healthy way to be moving through the world, never assuming that things are true. And assuming things that are confirming your bias means, you know, you don't have to invest as much cognitive energy. So these biases have a reason why we hold them so prevalently, in terms of, like, if we believe that what someone's telling us is true, that helps us build relationships and communicate with people and share information and resources much more effectively, because we're automatically building in that trust.
And then, if we have this confirmation bias, that enables us to not spend so much cognitive energy questioning everything. And it's really interesting, because obviously people who have more training and who believe themselves to be more educated, in my opinion, would say, okay, yeah, I have more evidence to draw from to confirm what I believe. And of course, there are other studies that show, like, nudges, where it's like, when people are told, are you going to put money down on what you're saying is accurate? Or are you going to put money down on sharing this thing, if you think it's true or false? Then people do take a more critical step. But I think, absent these nudges, it's natural for everybody to rely on those biases.
And this is a bit of an aside, but something I've always thought about, too, over the last few years of studying this, is previously, it used to be, like, you'd be on a panel and everybody would be like, we need to encourage critical thinking. Like, critical thinking is the number one thing. And that's part of the, you know, cognitive reflection: are you going to just jump to a conclusion, or are you going to think and realize what the accurate answer is? But also, when we've spent time with people who have different conspiratorial beliefs, like, they're really critically thinking. Like, they're jumping through hoops to think even more critically, or, like, dive even deeper into what they believe to be, like, the true sources of information. And they actually kind of use rigorous information-gathering techniques, but it's in a silo of information that's not necessarily based in reality or the scientific method. So it's just very interesting, when we emphasize this critical thinking, I think there's always this kind of, like, extreme overcorrection that can happen, even for people who aren't trying to convince themselves of a conspiracy, but are just seeing politically congruent information and want to believe it. So, given all of this,
what are the big takeaways? I asked our scientists that very question, and here's what they said. We focus primarily on the issue being a supply-side issue: the supply of fake news. And clearly, that's important, and that's a big part of the picture. And if we didn't have as much fake news, people would be believing in it less. They might still come to their own conspiracy theories, but...
But the other big part of this picture is that people are disbelieving discordant news. They're disbelieving inconvenient truths that don't cohere with their prior beliefs, or that they're not motivated to believe. And I think what we've seen in the news cycle for the last four to eight years or so, that's actually kind of what's really in the headlines: people not believing certain truths. So I think it's really important, as we think about how to manage this phenomenon going forward, that it's not just about focusing on curtailing or fact-checking fake news. It's also: how do we shift? How do we think about education? How do we think about intellectual humility, in the sense of not just being critical of the news, but also being critical of our own minds? It makes sense that people are like, oh, I've spent years learning about this topic, so I am an expert, even if I don't have a degree or established expertise. But the interesting thing around, quote unquote, established pathways of expertise is
ideally, they have checks and balances from people who have other perspectives. So it's like, you know, even in this paper, we're in discourse with other researchers who have come to other conclusions, and we're going back and forth about, like, okay, we approached it with this scientific method, and we concede that maybe there are faults with this method, that maybe they were correct in this other study, but our study found it this way. So it's kind of like having that discourse with people with different perspectives. And even in the academic setting, some would say that this is sometimes not helpful, but people build their careers off of finding these niches and being in contrast to other academics who are studying these niches. And maybe that leads to even another silo, a meta silo, but at least there are these divergent viewpoints that are in conversation. Whereas with how the information ecosystem is designed today, it's all around extraction of data, attention, manipulation of behavior. Like, we all know this. That's the priority of how we're interacting with information.
And so they just want to keep serving you what you're going to keep consuming, which is going to be what's congruent with you. So it's not even that people want to become one-sided. People don't know what they don't know. And unfortunately, that's the design of how we're interacting with information. And now, with ChatGPT, you don't even know what you're maybe not finding out about, or you don't even know that you're reading some kind of distilled homogenization of thousands of data sets that are also drawn, potentially, from just the free information on the internet, which is, we all know, not the best and brightest of human wisdom. And it's also prioritizing a very specific kind of demographic and historical context and user base, as opposed to all the wisdom that exists, diverse and rich, on our planet, intergenerationally as well. And it does also challenge some of our notions about
the efficaciousness of certain types of media literacy campaigns and interventions in this space, too. Like, we weren't studying particular interventions, but one of the big questions people always ask is, well, what are the solutions to the problems of disinformation and propaganda online? And if it's true that the people who think they're the least likely to fall for this kind of stuff are actually some of the most likely, then what does that mean for, you know, fact-checking initiatives? What does that mean for how we train people? Does it mean that we need to inject a bit of humility into the training? And how would that work? How would we do that? How do we make people who are already sort of
entrenched in their perspective about not just themselves politically, but also their political party, how do we work to change their behavior? And that is, I think, the million-dollar question that sort of arises from this research. I thought, you know, there's this paper back in 1994 by Wilson and Brekke that summarizes this really well. They're like, people need to be made aware of the existence of the bias, but they also, crucially, have to accept that they're vulnerable to the bias.
And there are, like, five steps. Step three is they have to be motivated to correct the bias. Like, if you are aware you've got the bias and you accept that you're vulnerable to it, but you're just not motivated to do anything about it, that also might not be enough. And they have to grasp the magnitude and the direction of how it impacts them, and have a strategy to overcome it. So it's, like, a lot. But having education around understanding that our introspections can be fallible, and that it's good to have
intellectual humility. I think it's, like, an important step for being able to be less susceptible to concordance over truth bias, less susceptible to partisan bias. I would say my motto is, I'm always learning. And, you know, to the point of intellectual humility in this paper, I do try to, you know, say I don't know when I don't know, and be open to maybe. Because, and this is an aside, I have a friend who used to study this, and she was like, you know, American discourse is very much about right or wrong. Like, even the language that we use is, like,
I agree, I disagree. And we don't bring enough into our language around maybe. And maybe it opens up more potential for discourse and connection. And I've learned so much from people who have very different viewpoints from me. And like we were saying, we resonate with the reasons why people consume conspiracies, even if I don't agree with their ultimate outcome. But right now, the reasons that I do feel I have elements of optimism are...
At root, I have a belief, which is, I believe that all people want to give love and be loved. And that, at root, that drives people. And when you look at polling, a majority of people actually agree on a majority of things around education, elements of healthcare, elements of gun control, elements of housing, elements of environmental protection, you know? And so it's like, there actually is this kind of shared belief system that originates, as cliche as it is, from the power of love. But you can just see it in everyday life, in our own lives. It's like, you know, I don't want to go out and be mean to someone or not believe in them. And a lot of people don't want to be in that perspective either. And it also doesn't have a positive feedback on people's own enjoyment of their life. So that's one of the reasons I'm optimistic: that core belief I have in human behavior.
Before, if you were reading a book or you were in a conversation or you were taking a class or whatever, you kind of knew, oh, I'm going to consume information on this topic for this amount of time and it's from this source. And so you had just more consent around what you were consuming and how it was potentially, you know,
integrating into your life, and you could choose more. But now, so much of information is just fed through a feed, as opposed to something someone actively searches for, or it's fed to someone by an influencer that they follow, which, surprise, surprise, influencers are influenced by a lot of factors themselves. You know, their primary role is to continue to hold attention, or to gain money from the attention that they've gained. And so that leaves them very vulnerable to being influenced by, like, coordinated groups, from the bottom up or, like, top-down, you know, requests.
And so it's just really interesting that we've been handing over the reins, increasingly, to these other elements that are causing us to consume information and shape our belief systems, and shape the store of knowledge with which we are then filtering future knowledge. And we don't even know what we're consuming. We don't receive, like, a digest that's like, oh, on Instagram, you consumed 25 minutes of cat videos that had a positive balance, and 45 minutes of weightlifting videos, and then this had this outcome on your behavior. So we don't have that full loop, or that consent, or that knowledge about the information we're sharing. We're just, like, allowing our whole perceptions of the world to be shaped beyond our own hands. And unfortunately, it's not just with political information. It can offer you concordance around anything. I ran into some random person on the street, and they were telling me about how they had been looking at anti-nausea content on TikTok, and it took them down a whole, like, bulimia pathway. And I was just like, oh no, I'm so sorry. But, you know, we do see that with a lot of different social media platforms, where it's like, yeah, any type of concordance, or that hint of where they think you're going, it could push you into a behavior change that is not what you actually want, and can derail elements of your life.
Yeah, there's just been a bait and switch, right? Like, you know, the original infrastructure of the internet, the Wikipedias of the world, and even the early social media platforms of this world, no longer exist. And we have people like these billionaires telling us that it's an us problem. And then we have research from folks like our group and others that suggests that we're not very well equipped to deal with this problem on our own.
Yes. And again, I'm always scared, like, when I say, I told you, you can't trust the media, I'm like, oh God, this means you're going to think everything by the New York Times is wrong, and you're only going to listen to clownpenis.fart. Look, I think that people have to learn to live with dialectics, right? Like, to be able to honestly say, yes, it's true that there is corporate media control. And it's also true that there are a lot of good journalists out there still working really hard to try to produce high-quality information. It's just that they're working against a tide, which is the tide of social media and the internet and the noise economy, that is basically impossible to combat without guardrails and without culpability.
For me, based upon my background and what I've studied for a very long time, while the findings of the study are super concerning, they're also somewhat edifying, in the sense that they suggest that we do need to work on problems associated with polarization, associated with people's political identity, and that we do need to unpack a little bit the extent to which that itself has been shaped, or whether or not it's mutable. There's a friend of mine, Brittan Heller, who one time quoted The Art of War. I mean, she said, we have to build a golden bridge over which our enemies can retreat. That's from The Art of War. It's this idea that, if we just supply people with true information, this study suggests that that's not enough.
But it doesn't mean that that's something we shouldn't do. It means that that's not enough on its own. We also have to address people's political identities, their cultural identities, their belief systems. And so the solution to this problem of disinformation and misinformation, which has been so spoken about in the media lately,
is not a simple solution. It's a solution that's going to take time. It's a solution that's going to require things like thoughtful media literacy campaigns and informational literacy campaigns that are not just built for a one-size-fits-all audience, but that are bespoke to particular communities that understand things in a particular way. And potentially, and this isn't in this research, but one of my beliefs that's grown out of this, is that you've got to have members of those communities leading those kinds of initiatives. There's got to be buy-in from them. It can't be academics or fancy journalists or anyone else that helicopters in to do this work, because the reality is that the research shows that oftentimes that backfires in and of itself, especially amongst conspiracy theorists and hardcore partisans, because they're much less likely to have institutional trust.
And part of that's compassion, right? Part of that's understanding affect and compassion and other things like that. It's relational. And that's not very sexy to scientists all the time.
That is it for this episode of the You Are Not So Smart podcast. I am your host and editor and reporter and writer and everything else for the You Are Not So Smart podcast, for all 307 episodes so far. If you enjoy this show, if you've gotten anything out of it, your support is going to help keep it going in the future. You can support the show at patreon.com slash youarenotsosmart. And there's a link in the show notes. There's a link to everything that we talked about in this episode in the show notes, right there in your podcast player, and also over at youarenotsosmart.com. But yes, this has always been a one-person operation, and your support is greatly appreciated. You can find my book,
How Minds Change, wherever they ship books in trucks, wherever they sell them, wherever they put them on shelves. And you can find details about that at davidmcraney.com. And also links right there in your podcast player. I'm scheduling my lecture appearances right now for 2025. If you'd like me to come speak at your institution, academic or otherwise, just go to davidmcraney.com, click on that part of the website or email me.
davidmcraney at gmail.com.
For all the past episodes of this podcast, head to Apple Podcasts, Spotify, Amazon Music, Audible, all those places. YouAreNotSoSmart.com also has all the past episodes. Follow me on Twitter and Threads and Instagram at davidmcraney. I'm also on Bluesky at davidmcraney, all that stuff. Follow the show at notsmartblog over on Twitter. We're also on Facebook at facebook.com slash youarenotsosmart. The opening music is Clash by Caravan Palace.
And if you really, really want to support the show, just tell someone, or perhaps everyone you know, about an episode that really, really landed for you. And check back in about two weeks for a fresh new episode.