The hunt for a new New York Times/Siena College poll is our podcast's version of the hunt for Kate Middleton.
Yeah. Can you get some people to look at pictures and see if I'm photoshopped at all? I think you just have to record this entire podcast from, like, a side profile that's so in profile that people can't actually see your face. Oh, I was thinking I would have, like, a disappearing sleeve a la Back to the Future, like the disappearing sleeve on one of Kate's kids.
Hello and welcome to the FiveThirtyEight Politics podcast. I'm Galen Druke, and we have had it with the presidential candidates trying to take our jobs. President Biden and former President Trump have been out on the campaign trail talking about the polls and giving their analysis. Trump has been boasting about his polls in the Republican primary for months. But now that we've turned to the general, Biden is also talking about them.
When asked about being down compared to Trump, he said that polling has, quote, changed and is less accurate today because it takes, quote, six zillion calls to get one person on their cell phone, end quote.
We're going to talk about whether that's right, but more to the point, we're going to talk about what it means that Biden is down in the polls seven and a half months out from Election Day. As of right now, he trails Trump nationally by about two points and by more than that in the battleground states. And then will comments like Trump's over the weekend calling some migrants, quote, not people and seemingly or potentially suggesting a bloodbath if he doesn't win change that dynamic?
It is also Election Day. The presidential primaries are, of course, no longer relevant, but the Republican Senate primary in Ohio will determine who faces off against Democratic Senator Sherrod Brown. It's one of Democrats' most vulnerable Senate seats heading into the fall. And we've got a good or bad use of polling example.
Remember that poll back in December suggesting that 20% of Americans under 30 think the Holocaust is a myth? Well, it turns out the results of that poll may have been completely bogus. Joining me today to discuss it all is FiveThirtyEight's director of data analytics, Elliott Morris. Welcome to the podcast, Elliott.
Hey, Galen. And also here with us is editor for News Surveys at The New York Times, Ruth Igielnik. Welcome back, Ruth. Hey, hey. Thanks for having me. Okay, so are you ready to talk about these potentially bogus polls? Are they bogus polls or are they bogus respondents? Ooh. I can't take credit for that. That's Pew's framing, I'll say. Honestly, the philosophical question for our times.
So let's talk about it. The poll that I mentioned showing one in five young Americans thinking the Holocaust is a myth was an online opt-in poll. These are surveys where respondents were not randomly selected. Instead, people find these polls through things like online ads. They opt in, take the poll, and in some cases get compensated.
This month, Pew Research Center released an analysis questioning the use of online opt-in polls. The problem is, according to Pew, that some of these surveys are being corrupted by quote-unquote bogus respondents. These people aren't answering the poll sincerely. They're just trying to get in, get out, and get their reward. Maybe just hit yes for everything. Pew says the subgroups that seem most affected by this are young and Hispanic respondents.
To help illustrate this, Pew ran an opt-in poll experiment in which they asked respondents whether they were licensed to operate a nuclear submarine.
The share of Americans with this kind of license rounds to 0%. But the share of adults under 30 who told Pew that they were licensed to operate a nuclear submarine was 12%. Among opt-in respondents who said they were Hispanic, it was 24%. So a quarter of Hispanic Americans being licensed to operate a nuclear submarine.
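To put numbers on why that question works as a trap: almost nobody actually holds such a license, so any measurable "yes" share is essentially pure contamination. A back-of-the-envelope sketch of the arithmetic (our illustration, not Pew's methodology):

```python
def observed_rate(true_rate, bogus_share, p_yes_bogus=1.0):
    """Expected 'yes' share when a fraction of interviews are bogus:
    observed = bogus_share * P(yes | bogus) + (1 - bogus_share) * true_rate."""
    return bogus_share * p_yes_bogus + (1 - bogus_share) * true_rate

# The true rate of nuclear-submarine licensure rounds to 0%, so the
# reported share is roughly the bogus share of that group's interviews:
print(f"{observed_rate(0.0, 0.12):.0%}")  # 12%, like the under-30 crosstab
print(f"{observed_rate(0.0, 0.24):.0%}")  # 24%, like the Hispanic crosstab
```

Read backwards, a 24% "yes" rate on an impossible question suggests something like a quarter of that subgroup's interviews are insincere, assuming bogus respondents always say yes; the implied share is even higher if they only sometimes do.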
So there are layers of polling uses in this example. But just to kick things off, Ruth: is Pew's analysis a good or bad use of polling? It is a great use of polling. It is the best use of polling we've seen all year. The best use of polling. Wow. Okay. FiveThirtyEight Award for best use of polling ever. This year. This year. Okay. All right. All right. Fair enough. And why? Well, they just did a great job of demonstrating, you know, that online opt-in polls are becoming more and more common in our field. And I think it's really good to try to dig a little deeper and analyze and make sure that we understand what's going on under the hood. There are a lot of reasons to be skeptical, and this kind of throws a little bit more onto that pile. By actually comparing something like this, it's really easy to see a huge discrepancy. It's a great illustration of just what's going on that's wrong here, that you might have a lot of these bogus respondents. And if you project that out, I mean, this seems like kind of a funny little methodological experiment. But if you project that out to online opt-in polls in the presidential campaign that we're all paying attention to as they come out, this tells us that under the hood, it might not all be hunky-dory.
Okay, I want to dig into that specific question. But first, Elliott: Pew's analysis, good or bad use of polling? Yeah, capital-G Good use of polling, Galen. I echo everything Ruth just said. She's absolutely right. I will say, we both used to work for Pew, so I don't think we're biased. Full disclosure. Oh, yeah, I think that does require a disclosure. Yeah. Yeah, listen, like,
There are multiple ways to do polls. As response rates drop, people are trying new ways. That experimentation has been the driving force for the polling industry to adapt to its challenges historically.
But yeah, ask any pollster. They'll tell you there's some regression, some reversion to the mean; the tool doesn't always work. And for certain subgroups, it's really hard to find high-quality responses for them. We're going to talk about this. Not all online polls are created equal. Some are better than others. YouGov is a pretty good one. But even that sometimes has its problems. So your mileage may vary.
So first of all, why specifically young and Hispanic respondents? Is it that young and Hispanic Americans are the people who are sort of opting into this and just hitting yes for everything? Or are people identifying as under 30 and Hispanic when they enter the poll, as in even their identity as a respondent is bogus?
That's a good question. I mean, I don't know that Pew gets into that per se or that anybody has a clear answer on that. I think there's reason to be skeptical in both directions. If they're bogus respondents, what all are they not telling the truth about?
The argument for your second guess that they're lying or being dishonest when they say that they're Hispanic is that a lot of these surveys have differential incentives where they incentivize certain groups that are harder to reach. And Hispanic Americans, particularly non-high school educated Hispanic Americans, are a particularly hard group to reach. So they tend to get a slightly higher incentive. So that could be an argument for people faking into that group. But it's hard to know.
And so does this tell us, then, to ask the second-tier good-or-bad-use-of-polling question: are all online opt-in polls a bad use of polling?
No, definitely not. I will jump to the online opt-in pollsters' defense, what the pollsters would call non-probability research. They're not all created equal. Say, for example, you are recruiting a sample for your poll just based off of advertisements, and you're not doing any screening on the type of people participating. I would say that's probably a bad sample for your poll. But again, experimentation is generally good.
The way that YouGov does these online opt-in surveys, or non-probability work, is that they're not, like, running advertisements on Google or something. You go to their website and you sign up to be part of a panel of respondents, a group of, say,
two million people who can answer the poll at any given time. And then YouGov will select a representative, so-called representative sample from that larger panel and then ask those people the survey. So it's their way of backing into the randomness that a poll like the New York Times-Siena poll has
by design. And yeah, like, it's not always perfect, as demonstrated by this case with the Holocaust polling, but it is a heck of a lot better than just randomly asking people on the internet. It's not the same thing.
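For intuition, a toy version of that "backing into" randomness might look like the quota-fill sketch below. This is emphatically not YouGov's actual algorithm (they use matched sampling plus weighting); all numbers here are invented:

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical population targets and an opt-in panel that skews young.
targets = {"18-29": 0.20, "30-49": 0.33, "50-64": 0.25, "65+": 0.22}
panel = (["18-29"] * 40_000 + ["30-49"] * 30_000 +
         ["50-64"] * 20_000 + ["65+"] * 10_000)

def quota_sample(panel, targets, n):
    """Fill demographic quotas from a large opt-in panel so the drawn
    sample matches population targets rather than the panel's skew."""
    quotas = {group: round(share * n) for group, share in targets.items()}
    sample = []
    for group in random.sample(panel, len(panel)):  # shuffle the panel
        if quotas.get(group, 0) > 0:
            sample.append(group)
            quotas[group] -= 1
        if len(sample) >= n:
            break
    return sample

drawn = quota_sample(panel, targets, n=1000)
print(Counter(drawn))  # ~200/330/250/220, not the panel's 4:3:2:1 skew
```

The catch, as Pew's experiment shows, is that a quota can be filled by anyone who claims to belong to the group, which is exactly where bogus respondents slip in.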
I will say, I mean, I'm not as gung-ho on opt-in as others. I'm more from the traditional school. We do phone polling at Times/Siena. But Pew themselves did a really good analysis a couple of years ago where they looked across the breadth of online pollsters and did an anonymized analysis. And for those who remember, it was Sample I. It was very exciting. And one poll did do significantly better and felt like they matched the quality of Pew's online probability-based panel. So there are good opt-in online pollsters, but...
Large portions of the field are not so great. And so particularly as we're seeing new polls pop up over the presidential cycle, I think it's worthwhile to look at them with skepticism. And the sort of metric that we often use when we're looking at polls at the Times is how transparent these pollsters are about their methods. So it's not about what they're doing to complete the poll, necessarily, whether it's opt-in or not, but how transparent they're being about it. That's what really matters.
If I can add a wrinkle, pollsters also will apply differential levels of screening to the people responding to the poll. So if it looks like you're just answering yes to every single option, which the pollsters call straight-lining, you're going to get removed from most panels, most interviews for that poll. But again, that's not the case for lots of cheap polls done with something called a survey marketplace. Yeah. At FiveThirtyEight, too, I'll just echo what Ruth said: what we really care about is, are you going to tell us a lot about how you're doing your survey? Then we'll trust you a little more.
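As a minimal sketch of the kind of screen Elliott is describing (our illustration, not any particular vendor's pipeline), flagging straight-liners can be as simple as checking how uniform a respondent's answers are across a battery of questions:

```python
def flag_straightliners(responses, max_identical_share=0.9):
    """Flag respondents whose answers across a question grid are (nearly)
    all identical. `responses` maps respondent id -> list of answers."""
    flagged = []
    for rid, answers in responses.items():
        if not answers:
            continue
        top_share = max(answers.count(a) for a in set(answers)) / len(answers)
        if top_share >= max_identical_share:
            flagged.append(rid)
    return flagged

interviews = {
    "r1": ["agree", "agree", "agree", "agree", "agree"],      # straight-liner
    "r2": ["agree", "disagree", "neutral", "agree", "disagree"],
    "r3": ["yes", "yes", "yes", "yes", "yes"],                # straight-liner
}
print(flag_straightliners(interviews))  # ['r1', 'r3']
```

Real screens often also look at speeding, gibberish open-ended answers, and duplicate devices, but the principle is the same: cheap checks that catch get-in-get-out respondents.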
So, Elliott, you just defended YouGov. It was ultimately a YouGov/Economist poll from December that found that among American adults under 30, one in five said that the Holocaust is a myth. And Pew wanted to test this out. So they did. They also use a panel, but they recruit by mail as opposed to online opt-in. And they found that when they asked this question
to the folks recruited by mail, it was not 20% of folks under 30 who said that the Holocaust was a myth, but in fact, 3%. And it was also 3% across every single age group that they polled. And if you remember back to December,
I mean, I don't know how broad this community is, but amongst people who pay attention to polls and consume news regularly, this had its own cycle, where in the middle of the Israel-Hamas war, people were saying, well, young Americans, they're not as supportive of Israel, and one in five of them don't even believe that the Holocaust happened. And so this kind of polling got a lot of play and was done by YouGov, which is a respectable pollster, but clearly, what's the word? Sh** the bed on this one. So if I can just crystallize my position, my argument here is for some granularity, some sort of division, in the opt-in online polling group. I do think that this survey had problems. And when The Economist reported on it, I think even they cast a little bit of doubt on the veracity of the result. We should say you also used to work for The Economist. So I guess this is disclaimer on disclaimer. Yeah.
Yeah, multiple levels of disclaimers here. But no, look, I know the poll pretty intimately. I'll say I think the folks at YouGov are doing a lot more work to ensure data quality than a lot of other online opt-in polling organizations. So to the extent that there are warnings here, there are two. One is to be really careful with data from basically any online non-probability poll, and, hey, look, you're going to get noise in any given survey. But two, to be, like, very suspect of data that you're seeing from new pollsters who aren't very transparent, who say they use opt-in online methodologies. And I'll just add to that another sort of caveat,
particularly among these smaller subgroups that get a lot of attention. Even with high-quality polls, we should be a little bit skeptical, because these smaller subgroups can be pretty noisy. They're not always the most accurate. And so when you're talking about suspect data quality at the top line, even if the pollster is doing everything they can to make it good, you really need to look at the subgroups with a lot of skepticism. And so I think this is one that kind of got away from people.
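Part of that subgroup noise is just arithmetic: the margin of error scales with one over the square root of the sample size, so a crosstab of 150 people is far noisier than the full poll it came from. A quick illustration using the standard simple-random-sampling approximation (real polls add a design effect on top):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple
    random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll versus its 150-person under-30 crosstab:
print(f"full sample: +/- {margin_of_error(0.5, 1000):.1%}")  # ~3.1 points
print(f"subgroup:    +/- {margin_of_error(0.5, 150):.1%}")   # ~8.0 points
```

Layer suspect data quality on top of that purely statistical noise, and small-subgroup numbers deserve double skepticism.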
It's a good point. We call this crosstab diving. And basically, like, you can try to delegitimize any poll if you look through the crosstabs and, like, pull things out and say, well, this doesn't seem right, so that means the whole poll is bunk. And to that point, we should say that Pew suggests that when they look at horse race polling, election polling from these online opt-in groups,
They're not so bad, not nearly as bad as, you know, showing that 20 percent versus 3 percent of young Americans think the Holocaust is a myth, or that a quarter of Hispanics know how to operate a nuclear submarine. So what explains that? Why would some questions maybe be more reliable for online opt-in polling than others?
There are a lot of errors that come into polls, and questionnaire design is one place for error. So on top of not being a great poll, it wasn't a great question. Agree/disagree questions generally lead to what we call acquiescence bias, which, you know, means people tend to want to agree. When you're talking about horse race polling, you have a more standard set of questions. You also have these organizations paying a lot more attention to these polls, adjusting to try to get the results right. Not to say that they're herding, but they're trying to be more careful in how they weight those results. So there are a lot of reasons that those results on the top-line level might be more reliable, but I would still have the same caution on the subgroup level. The tension that underlies all of this is a tension that we're about to discuss when we talk about general election polling in the next segment. But it's that polling is hard and polling is expensive. And so there are incentives to find cheaper ways to do it.
Do you have a sense, just to sort of, like, be transparent here, of the cost discrepancy in conducting an online opt-in poll versus conducting the sort of high-quality phone polls that The New York Times does?
Yeah, I mean, it's orders of magnitude. The discrepancy is quite large. And even if I'm not going to talk about our specific pricing, it is orders of magnitude cheaper to do online opt-in polls. And so there's no question that if you are a new organization looking to do polling at a much cheaper price and you don't have access to a huge pot of money, this is a better way to get a poll out there, to get some attention and build your brand in polling, whether or not you're a media organization. This is certainly a lower barrier to entry, by orders of magnitude, than doing high-quality polling. Yeah. And if I can just maybe go back to something I said earlier, it's like
The design of the survey really matters. We can handle this on sort of the election modeling side of things, the aggregation side. We can say, look for systematic differences between types of polls, what we call mode effects, right? But a lot of the times, you just don't get that level of parsimony from reporting on these surveys, that level of contextualization.
There's sort of two issues here. One is it's getting really expensive to do surveys. That's going to increase the supply of low cost survey results. But also political reporters have to exercise, I think, some level of discretion with the results that they're reporting. So I would just kind of raise a flag here that it's not
really all on the pollsters to make sure they're getting every single question with every single subgroup right. We as political reporters need to apply some level of discretion. Could not agree more. That is a great place to land on this because that's kind of the philosophy behind this segment, the good or bad use of polling segment to begin with, which is like, yes, we're specifically talking about an analysis of polling done by Pew Research in this situation, but oftentimes we're
We're talking about folks in journalism who try to tell a story using a single poll or a garbage poll or whatever it may be. And yeah, tale as old as time, it can lead you astray. But let's move on and talk about general election polls.
As I mentioned, the presidential candidates are trying to crowd us out of the polling analysis business with their own takes on the polls. So Biden mentioned the declining response rate to polls as a reason not to put too much stock in the early general election polls that show him down to Trump nationally and in every major swing state.
And to be clear, the grim polling for Biden doesn't stop there. His favorability rating is worse than Trump's. He rates worse on issues like the economy, immigration and national security.
And while Biden dismissed the polls specifically in recent weeks, the campaign has been projecting calmness and confidence for months. Not a message of we're down and we're going to fight our way back, but more a message of screw the polls. We got this. So we just spent the first part of the show talking about how some of our methods for polls can be flawed. Does the Biden campaign have grounds to be dismissive?
I think that it's helpful to unpack the sources of uncertainty in the data today. So Joe Biden, in his comments in New Hampshire last week, raises two types of
polling error. One is a breakdown in the reliability of the poll, of the survey as a tool to represent public opinion. And the other is temporal error, the amount of uncertainty associated with there being a ton of time, 230 days, before the general election. I think that's a helpful framing, so I'll use it. Wait, wasn't he saying that it was because of declining response rates, not because it's too early? Yeah.
Well, he makes two claims. He says, first, it's way out. And second, we don't know how much we can trust them. Yeah. So he's right that it's early, that it's way out. He is correct that it's seven and a half months before the general election. He knows that there's a lot of months. Yes. By definition, Election Day is not tomorrow. Yeah. But even if it were, we would face those other types of error: the reliability point and polling bias. So is he right about the reliability of the poll as a tool, as an instrument to test the barometer of the winds of democracy? I don't think he's right. If you go back and you look at just the accuracy of general election polling from 1936 or so, really, if you want to be technical,
scientific polling, as we know it today, didn't really start until the late 40s or the 50s. So even if you use that as a benchmark, those polls were off, estimating Democratic margin, regularly by 10 or 15 points. They're off, you know, four or five points in the bad close states nowadays. So it's not obvious to me that, like,
the tool is so broken that if we had a lot of polls and aggregated them, we couldn't make good predictions of elections. That's kind of, like, our whole thing, right? So maybe I'm incentivized to say that. I'll make one more point here, and that is: you know, he is right that polling is harder and
That it's more expensive, that you have to do a heck of a lot more phone calls, maybe 100 or 200 before you even get someone on the phone. That means the pollsters are dialing for hours at a time if it's a live interview operation calling cell phones. It is harder.
And it is early, but the evidence that we have for this tool breaking down systematically, where you could just say, oh, F them, I don't trust the polls, like, that is not an empirical take, I would say. Sorry to rant. End rant. I think Elliott is exactly right. Like, Biden is correct. We are making a gazillion, officially a gazillion, calls to get every respondent on the phone. But
you know, we're seeing the same trends in polling for online polls that aren't making a gazillion, or any, phone calls to anybody. So looking at that specifically, that is not necessarily a concern. The temporal error is real. It is very early. The good point Elliott makes in his story is that that can vary: in 2020, this early, we had a really good sense of the race, but that might have been less common. I mean, a lot can change. That doesn't mean a lot
will change. It's really interesting, though, to kind of see this and see Democrats starting to sound like Republicans did in 2012 when they wanted to unskew the polls and sort of looking at polling error as explaining what's happening. It'll be really interesting to see if that continues over the course of the cycle. But I do think that's maybe not a great take because polls generally tend to be, you know, fairly accurate, like Elliot said, within four or five points, which for a close election can matter. But
they are not totally wrong here. Right. So it's correct to say that historically, polls seven and a half months from Election Day are not predictive and that they, on average, have changed, what is it, Elliott, something like eight points?
Yeah, eight points on margin, for both national and state polls, at 237-ish days before the election.
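To see what an average eight-point miss implies for a two-point lead, here's a deliberately crude sketch: a toy normal model of movement between now and Election Day, not FiveThirtyEight's actual forecast:

```python
import random

random.seed(2)

def chance_lead_holds(poll_margin, error_sd, sims=100_000):
    """Treat today's polling margin as the center of a normal distribution
    whose spread reflects historical movement before Election Day."""
    draws = (random.gauss(poll_margin, error_sd) for _ in range(sims))
    return sum(d > 0 for d in draws) / sims

# A mean absolute miss of ~8 points corresponds to a standard deviation of
# roughly 10 points (for a normal, MAE is about 0.8 of the sd). On that
# assumption, a 2-point lead today is barely better than a coin flip:
print(f"{chance_lead_holds(2.0, 10.0):.0%}")  # ~58%
```

That is the quantitative version of "it's way out": the current margin is real information, just not much of it.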
So it's correct to say that, but the other thing that they're doing is what we referred to earlier as crosstab diving, which is saying, like, well, if you look at the crosstabs, it can't possibly be the fact that Trump is leading by X with some subgroup, or that it's this close amongst young people or Black respondents or whatever it may be. And so they're sort of arguing a little bit that the polls are wrong. And, like, there may be challenges in polling subgroups. We actually know historically that it's harder to poll Americans without a college degree, and it's harder to poll Hispanic Americans, for various different reasons, including in some cases even a language barrier, but also in some cases just a disinclination to respond to calls. And so, yes, I think it's important that we delineate here what sources of error we are talking about. But I think one of the arguments that I've heard made is that
while historically polls this far out are not predictive, we're in a very different circumstance. I mean, this kind of matchup between a president and a former president hasn't happened in over 100 years, before modern polling, and both Trump and Biden are very well-defined in Americans' minds. Partisanship is also stronger today than it was even 15 years ago. And therefore, basically, Biden and Democrats should be more worried about their standing in the polls than the average temporal error over time would suggest. Is anyone buying that?
Look, there's a couple of things to say here. One is that yes, polls early on have gotten more accurate as the level of polarization in the electorate has increased. I think that's pretty self-evident, like the explanation at least. People know who they're going to vote for because they're committed partisans, so the polls are stable early on.
That's true, but it's not a reason to discard the historical analysis. You can kind of argue that errors should be lower. If you want to do that, I wouldn't do this, and I'll tell you why in a second, but if you want to do that, then the expected error today should be closer to like five points, declining to about two points on margin on election day. I wouldn't read that much into the historical pattern here because...
What we want to do when we're sort of predicting how good the polls are going to be in the future is to understand how wrong those assumptions would have been historically. So if you made this assumption in 2020, for example, that polls were going to be as accurate as they were in 2016 and 2012 in, say, July, you would have been wrong, sort of forcing your election model, or even if you just want to use, like, an abstract model in your head, to constrain the outcomes of the election somewhere between Biden plus eight to Biden plus 12 at the high end and Biden plus five or four on the low end. And that's just too narrow, because empirically, you know, he won by four and a half. So clamping the lower bound of the election that high for a president, because you just think polls are more reliable, is not necessarily something that you should do, just empirically, if you're trying to predict the future. So I do think, to wrap up the second rant, that people can get some utility out of doing what I call talking themselves into the error term of the model. If you're exploring
how our historical models of the present can break down, I think that's useful. But in order to correct a model of reality, you need data, and you need a lot of it over some amount of time, and you need to afford it the appropriate level of uncertainty. And I think people just aren't doing those last two things, even if they're right to say, hey, maybe polls are a little bit more reliable this time. And that's something that
Our forecasting models will take into account later this year, but it's not like a reason to abandon a historical sort of reliability analysis of the polls, which is ultimately what we're doing. I'll just add to that. So one of the criticisms is that potentially polls are overestimating Trump's support right now because that's something that's happened recently.
in the primaries. And polling error isn't always the same direction from cycle to cycle, and there's no reason to predict that it necessarily will be. There are a lot of reasons that polls overestimated Trump in the primaries that had to do with the structure of what types of voters they were looking for. And so I think this idea that these polls are overestimating Trump or underestimating Biden isn't really necessarily based in a lot of fact right now.
Yeah, they underestimated Democrats as recently as 2012. So maybe they would do that again, or maybe the amount of bias we had last time will show up again. It's impossible to predict beforehand, so you shouldn't bet on it.
OK, so now let's get to the real crux of the matter. Why is Biden down in the polls? Right. Like oftentimes data can tell us what, but not necessarily why. But, Ruth, for example, you all do a bunch of extra polling to sort of help us get to the why or ask a lot of questions that help us get to the why and even call folks who have responded to the polls to ask them questions about their answers. Do you have an answer?
I don't think there is one single answer, right, with things like this. There's never one reason. But I think there are a lot of reasons that people are feeling frustrated. And, you know, there's this idea in polling of expressive responding where people might not actually intend to vote for Trump in the long term, but they're expressing some frustration for Biden. That might be happening right now.
I think that's still worth paying attention to if people are really frustrated. And the primary thing they're telling us about is the economy. I mean, I think a lot of people are feeling frustrated about the economy. We saw this in our last poll in particular, where we asked about a lot of aspects of the economy and inflation. It's really interesting: inflation is a lagging indicator, so it's one that sticks around a lot longer, even if other aspects of the economy have improved, and feelings about inflation stick around even longer than that. And for a lot of people, specifically,
they're feeling frustrated about that with Biden. And that's kind of hard to argue with. We asked this question, which is like one of my favorite questions of the cycle. I think I even said it on your podcast in the past.
whether or not you think Biden's policies have helped you or hurt you, and whether or not you think Trump's policies have helped you or hurt you. And right now, we have a lot more people saying that Trump's policies helped them and that Biden's policies hurt them. So right now, people are just kind of looking at their balance sheet, looking at dollars and cents, and saying, you know what, I did a lot better under Trump's presidency. I mean, we do have this historically incredible moment to actually compare how you did under the two presidencies,
and you can see that people felt they did a lot better under Trump's presidency. So there's no question that's a part of it. There are a lot of other things at play; it's not only the economy. But I think for a lot of voters, and a lot of swing voters, that's the single biggest issue.
And I should say, we're talking about how voters felt. But also, if you look at the specific data, wages were growing faster than prices. And I'm repeating myself here; I've talked about this plenty of times on the podcast. Wages were growing faster than prices throughout Trump's presidency. Basically right around the time that Biden took office, prices started growing faster than wages, and they've only reversed again within the past year. So there's data behind what Americans are feeling.
Yeah, and I think it's important not to underestimate the referendum dynamic, both in the vote and just in the polling today. So Ruth calls it, right, expressive responding. This is somewhere where I think the crosstab divers get a little bit over their skis, maybe, to try to play on the metaphor a little, where they've relinquished their license to dive. Oh, stupid, sorry. [laughter]
They're not licensed to dive at these depths. They're not nuclear submarine licensed, to bring it all back. They're licensed to dive, but they're not licensed to operate a nuclear submarine. It goes full circle.
So I do think it's important to put yourself in the shoes of a respondent. Political scientists call this the receive-accept-sample model, where they're taking in information, which, if you're a disaffected Biden voter, would be: prices are unbearable.
There's wars going on that people feel badly about, and polarization is high, and maybe something affects them too. The border. Yeah, the border. Just generally, political conditions are not awesome right now, and I wish I had some data to share with that, but I think that is a self-evident thing to say.
So if you've received that information and someone comes and asks you, hey, how do you feel about the president, about this election that's eight months in the future, I think it's fairly plausible that people would say, either I'm not going to vote for this guy or I might even vote for the other guy, even if in November there are different contexts and they do end up voting for that person. Empirically, that might be part of the temporal error we observe in polls. There's a certain coming home dynamic to the party where supporters come back
to their partisan allegiance. That's getting stronger as polarization increases. But that doesn't apply to all polls uniformly. So I don't want people to hear this and say, oh, Biden's going to gain ground over the next, you know, X months or something. It's not the type of thing you want to use to make a directional bet on polling movement. But I think it does explain some of Biden's current weakness. The question is, can he get over that over the next eight months? And that's why we have campaigns.
Right. Trump could gain ground over the next eight months for all we know. But I do want to talk about this in relation to what happened over the weekend. So it strikes me, Ruth, looking at the polling that you guys have done, and also a lot of other polling that's out there, that on so many policy questions, Trump leads Biden, and not by a little but by a lot. I mean, the gap that Trump leads Biden by on immigration and the economy at this point is something like 20 points, in terms of voters' views of who performs better on those policy areas. And it goes on down the line.
There are a couple of questions that Biden does better on: things like democracy and abortion specifically, but especially temperament. So in your recent poll, Biden actually led Trump on the temperament question, not by a lot, by just a couple points, but it stood out because it was one of the only areas where he was leading. And so when you look at how Americans actually grade
Trump and Biden on policy, you might think, well, why isn't Trump leading by a whole lot more? It in part comes down to frustrations with
the situation today that ultimately, like Democrats are like, I don't really like the economy or the situation at the border, but I'm definitely not voting for Trump anyway. So part of it is partisanship. Part of it is also temperament. And so over the weekend, Trump had a rally in Ohio where he may have sort of reminded voters of why they rate him poorly on temperament, despite them liking his positions on the economy and national security and things like that.
He kicked off the rally with a rendition of the national anthem sung by January 6th convicts. He referred to some migrants as not people, and in another place referred to them as animals. And he said that if he didn't win the election, there would be a bloodbath.
There's been some disagreement over what exactly he was saying. He was talking about levying a 100% tariff on some foreign automobiles and seemed to say that there would be a bloodbath in the automobile industry if he wasn't reelected, but then seemed to suggest that there might also be a broader bloodbath. We're going to play the clip here so people can make up their own minds about what he said. We're going to put a 100% tariff on every single car that comes across the line, and you're not going to be able to sell those cars. If I get elected...
Now, if I don't get elected, it's going to be a bloodbath for the whole. That's going to be the least of it. It's going to be a bloodbath for the country. That'll be the least of it. Biden has obviously gotten a lot more attention over the past three years because he is the president. So rightly so. But now we're officially in the general election. Do you think things like this change the dynamics of the race?
I mean, I think it'll be really interesting to see. You mentioned that we ask about the difference between Trump's temperament and Biden's temperament, and Biden is leading Trump, but that gap has closed quite a bit. If we look back to our polling from 2022, the gap was wider, right? We were doing those polls in the states, not nationally, but there was a broader gap between the people who said that Biden had the right temperament and those who said Trump did. And that's closing.
Some of my colleagues wrote an article last week about whether or not people have collective amnesia about the Trump presidency and Trump and who he was. And the data seems to suggest that people are either forgetting that or it's less important to them. So it'll be interesting to see as more of these sort of things become public and are reported on, will that gap begin to widen to where it was when more people were seeing Trump in their daily life and on the news every day?
At the same time, I will say, one of the things we do, as you mentioned, is we call back our poll respondents and talk to them a little bit more about their responses. And what I hear from people again and again who are Biden-Trump voters, people who voted for Biden in 2020 and intend to vote for Trump in 2024, is that they don't care that much about that anymore: he says bad things, but what can you do? I care more about money. I care more about my bottom line. I care more about the economy. And, you know, whether or not that bears out will be interesting to see. But right now, people are certainly saying they've accepted that part of him, a lot of people have, which is really fascinating, because it was certainly a bigger issue in 2016 and 2020.
Yeah. In terms of who Americans think is the better guy, some numbers I pulled up for my piece but didn't end up using were the net favorability ratings for both Trump and Biden over the course of the 2020 election and this election so far.
What's interesting, right, is that Donald Trump's net favorability plummeted after January 6th, and he's gained ground since, but only back to about where he was at this point in March of 2020. It's Joe Biden who's lost a lot of ground, by virtue, I guess, of being the president, being easy to blame, the situation at the border, what have you, being so-called ineffective. His net favorability has dropped 10 points. So where there used to be an 18-point Biden advantage on net favorability in November 2020, he's now at a four-to-five-point disadvantage. So that's kind of the number that I'll be tracking as people start to tune into the election. If it looks like Biden starts to gain ground on the temperament question, or people remember the 2020 election or the Trump presidency, whatever our theories are, I think this will be a pretty good benchmark: does Biden regain a favorability advantage, or does his disadvantage at least shrink?
One potential bright spot for Biden there: because his favorability has fallen, we're sort of growing this group of what we call the double haters, people who dislike both Trump and Biden. It was a pretty big group in 2016. It was a fairly small group in 2020, because Biden's favorability was much higher. And now it's growing into a bigger group again. And it's a really telling group. In 2016, the double haters voted for Trump, and Trump ultimately won. Right now, the double haters actually prefer Biden over Trump by a decent margin. And so it'll be really interesting to track that group and see if it ultimately does stick with Biden. That says something good for him. I mean, part of that is that because Biden has such a low approval rating, there are more Democrats who dislike Biden than there are Republicans who dislike Trump. And so...
If Biden is actually successful in what his campaign is trying to do right now, which is not just attacking Trump. I think they have, like, two messages. One, we're going to tell you all of the ways that he is being effective, and announce an infrastructure plan somewhere around the country basically every week between now and Election Day, and say it's infrastructure week all year. Yeah.
I mean, yeah, screw infrastructure week. Biden has infrastructure year. And so they're talking about that. They're talking about chips and science. They're talking about gun laws. They're trying to talk about all of the ways in which they think from a policy perspective he has been effective. And then they also keep saying, you know, he's a good guy. He's a decent guy. He's a nice guy. And so they're trying to, in effect, shrewdly
shrink the pool of double haters, to get Democrats and Democratic-leaning independents back on board with having a favorable view of Biden. And so in some ways, if they are actually successful in getting the pool of double haters to shrink, it will become a less Biden-friendly group, but his favorability will go up. So that will be a very interesting pool of folks to watch. Right now, the fact that they lean Biden doesn't necessarily suggest all that positive of a picture for Biden. I think where the group leans, if it becomes smaller, will tell me more. I think that's right.
Yeah, Democrats were also able to win the Biden disapprovers during the 2022 midterms, in a similar dynamic. So if there are these composition effects in the double haters, it's important to note there are also composition effects in approval, which gets used as a benchmark for presidential vote performance all the time. So that would be something also to look out for: if Biden's approval goes up but the share of disapprovers who support Biden in November goes down, that's sort of expected movement, too.
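The composition effect Ruth and Elliott are describing is easy to see with invented numbers: if Biden converts some of his double haters into people who view him favorably, they exit the group, so the group shrinks and tilts away from him even though the shift is good news for him overall. A sketch, with every share hypothetical:

```python
# Start: 20% of voters are double haters, splitting 60/40 for Biden.
double_haters = 0.20
biden_share = 0.60

# Suppose Biden wins back half of his double haters, so they now view
# him favorably and leave the group entirely.
recovered = double_haters * biden_share * 0.5
new_pool = double_haters - recovered
new_biden_share = (double_haters * biden_share - recovered) / new_pool

print(f"pool: {double_haters:.0%} -> {new_pool:.0%}")             # 20% -> 14%
print(f"Biden's share of remaining pool: {new_biden_share:.0%}")  # ~43%
```

So a shrinking, less Biden-friendly double-hater group could actually be the success scenario for his campaign, which is why how the group changes matters more than where it leans at any one moment.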
Okay, so this has been a very polling-heavy episode so far. I think it's been a minute since we've dove this deep into the polls, perhaps even acquired a nuclear submarine license for this level of diving. God help us. I'm sorry, that wasn't good, but it was something. So as we head into this seven-and-a-half-month period where the polls are going to be scrutinized, and I keep hearing different measures on this, but it will be maybe the longest general election in American history, or at least one of the longest, given how quickly the nominees were decided: what advice will you give folks about their posture towards the polls over the next seven and a half months?
Pay less attention to the top-line number, to the horse race number, but don't not pay attention to polls. That was a lot of double negatives. Pay attention to polls, but don't pay too much attention to the top-line numbers. You can tell a lot about what's happening, not with a crosstab dive, but by looking at the other questions on the poll: the issue questions, what people care about, how they're feeling about the candidates, those kinds of things. Continue to pay attention to those questions.
And I think they will help tell the story. And, you know, we always say the campaign kicks off in earnest after Labor Day; I think that's the time to really tune into the polls. This summer, the parties will have their conventions, they'll rally around the candidates, and you'll see polls move around after each convention, with a bump for whoever just had theirs. But don't ignore polls overall, because I think they still tell a compelling story about the mood of the country and how people feel. So everyone freak out the first week of September is what Ruth is saying. Yeah.
Hold your freak out until then. Hold your freak out until September. If it's Labor Day and your guy is behind, you have the license to freak out, in addition to your nuclear submarine license. I think this is something that the people who are doing good election writing, Ruth's colleague Nate Cohn, and, I think, we at FiveThirtyEight, if I can be so humble, do a pretty good job of, which is...
contextualizing how much people should pay attention to indicators at any given time, how much movement there's going to be in the polls, how reliable this survey instrument is this year. Right now, the answer is: not very reliable as a predictor of the vote, since it's so early. So...
What people should do is listen to us until we tell them, now you have permission to freak out. And that's also something that polling averages are really useful for, and that forecasting models, when they're used properly, are really useful for: conveying how much certainty you can attach to any given indicator that you see flowing across your timeline hundreds of times a day. So that would be my recommendation. When in doubt, put it in the average.
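Mechanically, "put it in the average" can be as simple as the sketch below: weight each poll by sample size and recency. Real averages, FiveThirtyEight's included, also adjust for house effects and mode effects; these polls and weights are made up:

```python
from datetime import date

def polling_average(polls, today):
    """Bare-bones average: sqrt(sample size) weight times an exponential
    decay on poll age. polls: (margin, sample size, field date) tuples."""
    num = den = 0.0
    for margin, n, field_date in polls:
        age = (today - field_date).days
        weight = (n ** 0.5) * (0.9 ** age)
        num += weight * margin
        den += weight
    return num / den

polls = [            # hypothetical polls, margin in points
    (+2.0, 1200, date(2024, 3, 10)),
    (+3.5,  800, date(2024, 3, 14)),
    (-0.5, 1500, date(2024, 3, 17)),
]
print(f"average margin: {polling_average(polls, date(2024, 3, 18)):+.1f}")
```

The point of the decay and the adjustments is the same as the advice here: no single poll, however fresh or shiny, should move your view as much as the weighted pile of all of them.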
And I'll also say, part of what we didn't get into today is the underlying sort of demographic why, as to why Biden is down in the polls, which is in large part that he has lost support amongst voters of color, and in particular has lost more support amongst non-college-educated voters. And we are going to talk about that more in depth, so hang tight. We will do a little more crosstab diving, but we're going to try to do it in a responsible way, in a future episode. And now we're going to move on to the Ohio Senate race. We're going to let you go, Ruth. Thank you so much for joining us today. Thanks for having me. Thanks, Ruth.
Tuesday is the Republican primary in Ohio's Senate race, and the winner is going to face Democrat Sherrod Brown in November. The outcome of that race will help determine which party controls the Senate come 2025.
Now, our polling average gives businessman Bernie Moreno a slight edge. Moreno has Donald Trump's endorsement, but it doesn't appear to be a runaway. State Senator Matt Dolan and Ohio Secretary of State Frank LaRose both trail Moreno by just single digits. And the infighting amongst Republicans has turned especially...
contentious in recent days. We can get into that. Joining us now to discuss is senior editor at FiveThirtyEight, Tia Yang. Welcome to the podcast, Tia. Thanks for having me on. Okay, so give us a general sense of the state of the race here. What issues are kind of motivating it or defining it? And how up in the air is it, would you say?
This, like many races, is sort of a race where what we have our eye on is how Trump's endorsees will do, how Trump's sort of wing of the party is going to do compared to more establishment candidates. It's a theme that we're also seeing down ballot in several of the House races, the House primaries on the Republican side happening in Ohio as well.
Bernie Moreno: he's endorsed by Trump, he's a businessman, and he has self-funded a pretty decent amount of his campaign. And he currently has a polling lead in our average over two other pretty prominent candidates. One of them is State Senator Matt Dolan, whose family owns the Cleveland Guardians baseball team. He also has self-funded much of his campaign, and he notably is not necessarily a Trump acolyte or part of the Trumpy wing of the party. He's been endorsed by Governor Mike DeWine, who also has sort of kept his distance. Neither of them is, like, an anti-Trump person; they definitely are not saying that on the campaign trail.
But it's definitely a bit of a Trump versus not-as-closely-Trump type race here. And the third candidate in our polling average is Secretary of State Frank LaRose. He actually was in the lead in polling for most of last year, I believe, but he
has potentially even more of a Trump problem. He has in the past criticized Trump for racist tweets. He's acknowledged that the 2020 election results were legitimate. So he sort of set himself up a little bit more for attacks from the Trump side of the party, even though he has definitely veered right messaging-wise during this campaign. He's sort of been outspent and drowned out by the Trump-endorsed Bernie Moreno in this race.
And we should say Frank LaRose has actually been on this podcast talking about some of those things, in particular election legitimacy. Elliott, how much of this race would you describe as, like, a Donald Trump referendum, or a referendum on the Donald Trump endorsee?
I mean, I think it's impossible really to separate a party competition from Trump these days. Moreno certainly hasn't been trying to distance himself from him. And the other candidates are doing this very cautious dance when they talk about him. The Democrats have also been buying ads painting Moreno as, like, a Trump candidate who is going to be dangerous, or what have you, for the state. And those ads purchased by Democrats, are they sort of meant to boost Moreno?
Yeah, so Democrats have been buying ads to boost Moreno. They won't tell you this, but the interpretation is that they view Moreno as easier to beat in the general election. He is more extreme. He's closer to Trump, right? These are things that have been associated with underperforming in recent elections; with ideological extremity, there's a long-term trend of ideologues underperforming.
People who were endorsed by Trump and who endorsed his so-called big lie in 2022 also underperformed, Republican candidates did. So, you know, I think it's a pretty safe bet that he would do worse than some of the other candidates in the general election because of those qualities. And so Democrats are running ads trying to boost him in the primary so that he's their eventual opponent.
But doing worse in a general election in Ohio doesn't necessarily mean losing, which is obviously important to keep in mind if Democrats end up having to deal with a Senator Moreno in 2025. But right now, I think he's also maybe primarily focused on the criticism that he's getting from his own party. And that's over one particular scandal that broke last week: an account using his email address was created in 2008 for a profile seeking men for one-on-one sex. Bear with me, these details are a little, you know, out there. It was on a casual encounters website called Adult Friend Finder, with his profile reading: Hi, looking for young guys to have fun with while traveling.
An intern at his firm said that he did it as a prank. I don't know if there's any way to prove that or not. People will believe whatever they want to believe. But does this seem to be affecting the race? Of course, the suggestion here is that he is, you know, closeted or something like that. But he is also not pro-gay or trans rights. So maybe a contradiction.
This just happened last week, so we obviously don't have a ton of polling or anything to say concretely whether it's changed the state of the race. I think that it makes sense that, you know, Moreno's still in the lead. His opponents sort of are clinging to this recent news as, like, oh, this could potentially hurt him. But ultimately, his sort of explanation that it was an intern pulling a prank seems to be... like, it doesn't seem like this is a broader thing, and no more weird, crazy details have come out. It was quite a long time ago. Moreno has obviously, on the campaign trail, sort of emphasized that he is anti-trans-rights, and that's been sort of a sticking point; he's in fact attacked his two opponents for being too pro the, quote, radical trans agenda, or rhetoric like that. So attacking him for being potentially closeted or something like that, I'm not sure if that attack necessarily sticks. It's kind of a strange attack, and sort of just maybe more of a weird news story the week before the election, is my impression. And perhaps importantly, Trump hasn't rescinded the endorsement, and he was in Ohio over the weekend. Yeah, scandals do have impacts on campaigns. It's one of the things that you have to be sort of careful with when you're modeling House elections.
Does this one have legs? I'm not really sure. This does seem to be an example of what J.D. Vance was talking about, just a very nasty campaign. Who knows if this was leaked by another campaign or some sort of oppo researcher; you never really know. But yeah, I mean, that is sort of pretty ugly stuff to put out there. It does sound like the type of juvenile prank someone might do. So who knows if it's true, or if the defense is true, whatever. I wouldn't necessarily expect it to fundamentally change the race. It's not the type of scandal that we typically associate with people getting punished. Yeah, I mean, to quote Ohio Senator J.D. Vance, he said, quote, it's pretty ugly. I think obviously that will eventually harm the nominee. The more negative dollars you spend attacking each other, the more it makes it harder. We've certainly learned that the hard way, end quote. Quantitatively, I mean, is there anything to that? That really contentious
primary battles make it harder for the nominee to win a general election? This is one of those things people say a lot, especially in the context of presidential primaries. You know, the more that you beat down on a nominee to try to decrease their polling numbers, the worse they do. There's just not a whole lot of empirical research on this, because we have such limited examples.
One thing that might surprise people: if you go back and look at Joe Biden's favorability rating during the 2020 Democratic presidential primary, he gained like 15 percentage points of net favorability from January to March of 2020. So these things, you know, they can shift around a lot, and people will forgive you later. And he ended up, you know, doing really well among Democrats, obviously having some of the highest party loyalty ever among those voters. He wasn't necessarily docked points by Bernie Sanders or Elizabeth Warren voters, at least not enough to matter in a close race. So this just seems like, like Tia said, one of those weird stories you get right before an election. Okay. So this is the main event for Tuesday evening, but are there other primaries that you all would like to shout out before we wrap for today?
Like I said earlier, there are a lot of races on the ballot in Ohio on the House side that are also sort of this Trump side of the party versus the establishment side of the party. There are competitive races in, I think, at least three districts where we see
more establishment Republican candidates facing off against sort of Trump-endorsed or more Trump-type candidates. And like we've been seeing throughout the season, there's just been some races where there's competition to out-Trump each other. So that's definitely going to be one thing we're watching in Ohio.
And interestingly, like Elliott referred to earlier, we're seeing Democratic spending, not necessarily in those House races, but in general; victories by Trumpy candidates are maybe not only a good outcome or a preferred outcome for Trump, but potentially for Democrats, who might prefer to face those candidates in the general election.
And then the other thing I would shout out is that in Illinois, the other state that has statewide elections happening tomorrow, we have two incumbents who might be endangered; they're facing competitive primary challenges. Democratic Representative Danny Davis in the 7th District, which is around Chicago, is facing off against a primary challenger. And in the 12th District, Republican Mike Bost is facing off against a former state senator.
And that's one of those districts that's a safe red district, and it has become really a battle of out-Trumping each other, with the challenger, Bailey, sort of trying to prove that he's further right than Bost. So those are some of the races that we have our eye on tomorrow. All right. Well, we'll see how they play out, but we're going to leave it there for today. Thank you for joining me today, Tia and Elliott. Thanks, Galen. Thanks.
My name is Galen Druke. Tony Chow is in the control room. Our producers are Shane McKeon and Cameron Chertavian, and our intern is Jayla Everett. You can get in touch by emailing us at podcasts at 538.com. You can also, of course, tweet at us with any questions or comments. If you're a fan of the show, leave us a rating or review in the Apple Podcasts store, or tell someone about us. Thanks for listening, and we'll see you soon. Bye.