You're a podcast listener, and this is a podcast ad. Reach great listeners like yourself with podcast advertising from Libsyn Ads. Choose from hundreds of top podcasts offering host endorsements, or run a pre-produced ad like this one across thousands of shows to reach your target audience with Libsyn Ads. Go to LibsynAds.com now. That's L-I-B-S-Y-N-Ads.com.
This listener says, okay, so one, is your podcast taking a gay cultural pivot? And if so, I'm all for it. And two, everyone knows that St. Paddy's Day is the straightest holiday of all time. Oh, yeah. Is that? No, St. Patrick's Day is more heterosexual than the average day. It's the only holiday for which that's true. Than the average day or than the average holiday? Average day. Average day. Because Thanksgiving is gay relative to normal day. How gay is the replacement level day?
It depends where you live. I live in Hell's Kitchen and you live in Chelsea, so we're maybe not reliable sources for this. Hello and welcome to the FiveThirtyEight Politics Podcast. I'm Galen Druke. I'm Nate Silver. And this is Model Talk. There are 20 days until Election Day, and the forecast shows that Democrats have a 61% chance of keeping the Senate and Republicans have a 75% chance of winning the House.
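As an aside for readers following along at home: if the two chamber odds were statistically independent, simple multiplication would give the joint sweep/split probabilities. A minimal sketch, using only the two headline figures quoted above (the model's actual joint odds differ, since it correlates the chambers):

```python
# If the two chambers were independent (they aren't -- the forecast
# correlates them, e.g. through shared national polling error), the
# headline odds quoted above would imply these joint outcomes.
p_dem_senate = 0.61   # Democrats keep the Senate
p_gop_house = 0.75    # Republicans win the House

p_gop_sweep = (1 - p_dem_senate) * p_gop_house   # GOP takes both chambers
p_dem_sweep = p_dem_senate * (1 - p_gop_house)   # Democrats keep both
p_split = 1 - p_gop_sweep - p_dem_sweep          # one chamber each

print(f"GOP sweep: {p_gop_sweep:.0%}, split: {p_split:.0%}, "
      f"Dem sweep: {p_dem_sweep:.0%}")
```

Under independence you would get roughly a 29% GOP sweep and a 15% Democratic sweep; the correlated figures Nate quotes later in the episode are noticeably different, which is the point of modeling the chambers jointly.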
Those numbers have been moving around a little bit, a little bit more than usual over the past month. Yeah, there has been a shift toward the GOP. There has been. Let's address that up front. Why has there been a shift towards Republicans in the forecast? I think both in generic ballot polling and maybe more pronouncedly in state-by-state polling. So the generic ballot has moved back to basically a tie. And remember, that's among...
All polls; if you shift it toward likely voters, it probably leans GOP at this point. Meanwhile, you know, in Pennsylvania there have been polls showing that race tightening. That's a pretty key state, because if Democrats were to have Fetterman win, then they're up one, so the GOP would have to win two seats elsewhere. And that's easier said than done. If Pennsylvania, though, is also in this ambiguous category where you go into Election Day, or deep into the next morning, not knowing who won, then kind of in the base-case scenario,
There's uncertainty about the Senate. Meanwhile, there are other, more long-shot Democratic pickup opportunities. So Ohio, North Carolina, and Wisconsin have, I think in ways that some people predicted, shifted GOP. So it kind of comes down now to, like,
I mean, I don't want to rule out the possibility that you'd have something develop in another state, right? But it kind of comes down to Pennsylvania, Nevada, and Georgia. And can the GOP win? What do they need? They need two out of those three. Yeah. Or two out of four if you include Arizona, which I'm a little more skeptical of. But that math is not that hard, right? You wouldn't need some massive overperformance against the polls to have the GOP win, right? They could win if that Pennsylvania race tightens by a couple of points and you're in the margin. I mean, you're in the de facto margin of error already, right? But these are not safe leads for Democrats. Right. I think when Fetterman was leading Pennsylvania by 10 points...
A 10-point polling error is extremely rare. Well, a 10-point polling error on Election Day. It's not that odd for a race to start at 10 and go to 6 and then 4, and then you have a 4-point error, right? Of course. Now he's leading by 5 points. Yeah. And a 5-point polling error is not unheard of. It's not even unheard of over the past couple of elections. Five points is a pretty average polling error. Yeah.
And so at that point, Democrats would have to hold both Georgia and Nevada. And Adam Laxalt is currently actually leading in Nevada in our polling average. That's right. Nevada is, I think, the toughest hold for Democrats. You know, I still might take Warnock in Georgia relative to the odds that our model gives. The polls haven't been great for Walker since that latest abortion-related scandal. And in fact, Walker has never led since he won the nomination, really. Yeah. Let me see what the fundamentals forecast says for Georgia. So part of it is that in the Classic model, which includes just objective data, Warnock is a two-to-one favorite. And in the Deluxe model, which includes the expert raters, I think they all have Georgia as a toss-up. And so
That swings things toward a toss-up in Georgia in our Deluxe model, so we also have it as a toss-up. That's a little weird to me. I mean, again, I think the people who have been skeptical of Democratic polling leads in Wisconsin and Ohio were right all along. Right. Georgia is a race where, you know, Walker is a problematic candidate, and it kind of seems like the polls show him trailing. And I don't know. I mean, if I'm a subjective forecaster, I think...
I think that race is lean D, which doesn't mean the Democrat will always win. But, you know, we're not making manual edits to the model. So, yeah. So Georgia is the one race where I think maybe things are better for Democrats than our model shows. But, you know, I mean, Pennsylvania, we talked about this a little bit the other week. You've seen a big shift in emphasis from Oz toward Fetterman. It's the kind of race where you probably don't want to be the candidate who's being talked about, necessarily. Right. And so it's not just some random blip in the polling; there's been more focus on Fetterman's mental condition after his stroke, and less on New Jersey and crudités, right? Right. So you were saying, though, that
People who maybe doubted Democrats' polling leads in Wisconsin and Ohio, and perhaps that double-digit lead that Fetterman had in Pennsylvania, those sort of Rust Belt states that either have not turned out the way Democrats hoped, or that in the most recent presidential election turned out the way Democrats hoped but still showed a significant polling error,
that those numbers have come back down to earth and now Republicans are leading. So Ron Johnson is up in Wisconsin. And I think J.D. Vance just took like the slightest lead in our polling average in Ohio. But what happened, though? Are pollsters getting more sort of like
serious about their likely voter models, and that's changed? Or did voters just start paying attention, and while they may have been open to voting for a Democrat earlier, or said they were undecided, now they're registering in the polls as decided and going toward Republicans? Like, what has caused that sort of coming back down to earth? I mean, keep in mind that in most of these races, you have inexperienced Republican candidates, right? I mean, Ohio, not Wisconsin, I guess, and Pennsylvania. And so it may initially just be a name recognition thing, or candidates may initially be very prone to being defined by... Wait, Dr. Oz suffers from name recognition? Oh, fair enough. Fair enough. But, you know, maybe inexperienced is like a rookie player in the NBA. They're better by game 82 than by game four, right? But do you think it's candidate-specific, or maybe it's just environment-specific, where people come home to the party? I mean, yeah, I think that's part of it for sure. And I think part of it is like,
People were talking a lot about abortion in the summer. I mean, that's been the big seismic event, right? It's still the Dobbs decision. I still kind of wonder whether it's, like, non-response bias in the polls, or, like, voters feel conflicted. Let's keep in mind that ordinarily, an election like this is a big GOP sweep, right? Yeah.
Ordinarily, it's supposed to be. Think back to when we had our first Model Talk. Senate odds were 50-50. 50-50 is way better than you would have thought Democrats would have done. Right, exactly. That's what I was going to say. So this is supposed to be... the economy is kind of in the shitter still, or at least inflation is, we should clarify, not all of the economy. People are unhappy with the direction of the country, and Democrats' margins are very narrow. Ordinarily, this was going to be the least climactic election that FiveThirtyEight covered, right? When I was first working on the model, before seeing what it would say, I'm like, I assume it's going to be 80% GOP in both chambers, it'll go to 95 by Election Day, and I don't know what we're doing to juice site traffic, right? I thought it'd be boring. We're just going to be posting, um, storyboards of Fivey Fox. Storyboards of Fivey Fox. I don't know where that would go. But no, I mean, Democrats kind of defied gravity for a period of time. They're still
favored narrowly in the Senate. I mean, we get in this weird linguistic thing where we're kind of at the point where, if I were characterizing the Senate to a friend, would I say Democrats are ahead or would I say it's a toss-up? I might say it's a toss-up. I mean, our forecast says that they're slightly favored. Well, then... A 60-40 proposition... At 59, it switches. What's the point of even having numerical odds if a 60-40 proposition is a toss-up? Well, part of it is that you have two chambers in Congress. So this is one of the reasons why...
You want to be a little careful. So the most likely scenario at this point is actually that, let me see here.
It is more likely that Republicans win both chambers of Congress than that Democrats get a split, right? A 38% chance of a GOP sweep, a 37% chance of a split, almost always meaning Democrats win the Senate and the GOP wins the House, and a 24% chance of a Democratic sweep. So when that's the case, right, there's no longer this balance that we had, right? It's still, on average, you'd expect the GOP to win, of the two chambers,
more than one, right? Let's dig a little bit deeper into this why question. We first tried to tackle it from the perspective of what's happening in individual states with polling. But I think since, in particular, this recent New York Times/Siena College poll came out this week, the conversation has shifted to...
is there a change in the issue environment in the country that is helping Republicans? And there's a lot that we could talk about in this poll. We mentioned it only briefly on the Monday podcast. But what they showed is that now 44% of likely voters say that inflation or the economy is the number one issue facing the country. That's up from 36% in July when they first polled that question.
And you see that the other issues like abortion, even immigration, crime, etc., are in the single digits.
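A quick back-of-envelope check on that 36% to 44% move: unlike the tiny subgroup swings, a topline shift of that size is several standard errors. The sketch below assumes two independent polls of roughly 800 likely voters each, which is an assumption for illustration, not the papers' exact sample sizes:

```python
from math import sqrt

# Rough significance check for the shift in "economy/inflation is the top
# issue" between the July and October polls. n = 800 likely voters per
# poll is an assumed, ballpark figure.
p_july, p_oct, n = 0.36, 0.44, 800

# Standard error of the difference between two independent proportions.
se_diff = sqrt(p_july * (1 - p_july) / n + p_oct * (1 - p_oct) / n)
z = (p_oct - p_july) / se_diff
print(f"shift = {p_oct - p_july:+.0%}, z = {z:.1f}")
```

A z-score above 3 means a swing this large would essentially never happen by sampling noise alone, so the topline salience shift is real in a way that 100-person crosstab swings are not.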
Do politics work that way? That, like, oh, OK, now there's a change in the issue environment, and so the concerns that had been top of mind over the summer regarding Dobbs and abortion, maybe voters aren't thinking about them anymore. So a voter who might have voted for a Democrat based on that issue is now thinking about the economy more and is going to vote for a Republican based on that issue? Sort of. No, I think salience is a thing. I think Pennsylvania is actually a good example of that. Right. Well, there's been, like, no new information revealed about Fetterman, really, right? He had this stroke as the primary was wrapping up back in the summer, or was it spring? You know, he gives an interview, right? But, so, no, I mean, you know, salience is important. I mean, go back to, like, Hillary Clinton's emails, kind of what the media focuses on. And it's not like they have total free rein. I mean, I think you saw some ridiculous claims
by liberals, like in the spring when inflation started taking off, that, oh my God, the inflation is just a media creation, right? Like, if you say that, then that's, like, literally the least valid example of a media-inflected narrative. Well, and it got tested. As the media turned its attention to abortion throughout the summer, abortion never became the number one issue in any of the polls that we conducted with Ipsos, or any of the other polls that I looked at. It was always the economy and inflation. Abortion ticked up, and according to Gallup, of course, it hit, you know, 8% of Americans saying it was the biggest problem facing the country. That's now halved, gone back down to 4%. But also, you know, Trump was very involved in the news cycle. I think one of the
least accurate pieces of conventional wisdom was when people thought, oh, the Mar-a-Lago raid, that will help Trump, right? I think just having Trump and FBI in the same sentence is unhelpful to Republicans, just like having Hillary Clinton and FBI in the same sentence was unhelpful to Democrats in 2016. So no, I think the issue environment's changed, right? It seems a little weirdly distant to kind of talk about it in that way, but a lot of people feel conflicted, and what kind of comes top of mind for them may matter. I mean, I still think Dobbs is going to
help ensure that Democratic base turnout is very high, as well as various things Biden's done legislatively. But, you know, swing voters may swing toward the GOP as they typically do toward the out party in midterms.
Well, those are two different... Are those two different things, though? You know, the economy becoming more salient, and just historical trends showing us that in the final month of a midterm election, the out-party picks up steam in polling. I don't know if that's quite there. Like, are they sewn together in some way? Like, how should we parse this? No, I think this is more... I mean, I guess I don't know if I agree that the out-party tends to pick up steam in the final month. I think that's underspecified. I'd like to see a more rigorous...
I have a chart, Nathaniel Rakich shared it with me, that does show that the out party in midterms improves its standing in the polls as you get closer to actual election day. Okay, fair enough. Like we've had two bad, maybe two out of three like bad inflation readings in a row. That's pretty visible. Let's see what the Dow Jones is doing today. Good old Dow, which is not as reliable as the S&P. Well, a lot of people have stocks, Galen. I mean, I know.
I mean, I actually kind of track some of these things pretty well, right? The Dow peaked at 33, basically 34K on August 16th. It fell to as low as 28K, now back to 30 and a half. They're intertwined. But if you had to pick, because gas prices have also started to tick back up again, which we've already talked about. But if you had to pick one, which is more important? Gas prices, right? Yeah. Yeah?
You gotta give me more than a sound. I think it's become such a cliché, right? Oh, now you think gas prices are overrated. You sound like such an out-of-touch snob if you're like, oh, gas prices are real and the stock market's fake, which is for elites. Like, people care about the stock market, man. People are invested in the market. And also it's very visible, like gas prices are. You have a ticker on the screen. Right. The piece of actual polling data that I saw this week that
was the best argument for this is that there had been a larger shift toward Republicans among people who are at a near-retirement or early-retirement age. So, like,
mid-40s to mid-60s, there had been more of a shift towards Republicans or more emphasis on the economy. And so like, right, okay, if my 401k drops, it's like, you know, I'm what, 35 years away from retirement? So that's not really going to stress me out in the same way it would for someone who's thinking about retiring. Yeah, I mean, gas prices are probably...
But the stock market's like a pretty visible indicator, right? And also kind of goes into the way people talk about the economy in general and kind of overall. I mean, it reflects like, so whether it's causal or not, it kind of reflects overall vibes or sentiment. Vibes. About the economy. I mean, but also the price of like, one of the reasons the stock market's going down is because the price of borrowing is going to go up, right? Mortgages get more expensive. And like that has an impact on people. So we're back on the stock market matters train, which I think works.
The analysis that you've done since early in your career on the importance of the economy in elections found that there are a lot of different indicators, it's hard to parse them, and they're all highly correlated. So, back to this New York Times/Siena College poll, which started this whole conversation about the economy.
There have been some critics saying we shouldn't trust this poll because it showed a 32-point shift among independent women between September and October. Oh, yeah. I mean, that's total bull. To cite these, like, 100-person subgroups... I was going to write this for this week's article, but focusing on these tiny subgroups... I'm going to blow people away with this when it gets published, right? What I did is, like, actually take a poll and drew ten, like, 800-person random samples. I'm just going to publish them and show you how much this happens. If you are looking for subgroups in an 800-person sample, you're just totally, like, snorting the noise
up and getting high off it. You're really into the drug metaphors these days. I guess I am. Okay, but there are two things going on here, which I think we need to clear up. One, the New York Times itself highlighted the shift in the write-up of the poll. They wrote, quote, a striking swing given the polarization of the American electorate and how intensely Democrats have focused on that group and on the threat Republicans pose to abortion rights. That's a bad use of polling. Okay, so let's say the poll had 800 people. Let's say 51% are women, and about 30% of those are independents. So you have a sample of 122 people, right? Let's look at the margin of error calculator. So the margin of error is...
for 122 people. Hmm, that's going to be pretty high. Let's see what that is. So the margin of error is 9% just on one number, right? And since it's actually a margin, it's twice that: an 18% margin of error. And now you're comparing one poll to another, right? It's not technically double, but you can have an outlier in one direction in the previous poll and an outlier in the other direction in the next poll, and have, like, a 36-point shift in the margin. That's all noise. People have no idea how noisy even the topline numbers are, right? Like, in this fake poll I made, which just takes a Biden-plus-five poll from 2020, right? Even just the topline numbers in an 800-person sample...
The first sample I drew had Biden plus 15 and then Biden plus 10 and then Biden plus one. Oh my gosh, the race really shifted, but it's all randomness. It's all randomness, man. You can't be like, you cannot...
cherry-pick your way through 20 different demographic crosstabs, which in an 800-person poll are 100-person crosstabs, and, like, make a news article out of that, right? That's not responsible use of polling. So I think we have two bad uses of polling here that have created just a polling clusterf**k. One is that the New York Times is pulling out this data, highlighting it, and making it part of the story.
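Nate's arithmetic above can be sketched in a few lines; the 122-person subgroup size comes from his own calculation, and the simulation mirrors his resampling experiment with invented data, not the actual Times poll:

```python
import random
from math import sqrt

random.seed(0)

# Nate's back-of-envelope: a 122-person crosstab has a margin of error of
# roughly +/-9 points on a single candidate's share (so ~18 on the margin),
# and comparing two such crosstabs compounds the noise further.
n_sub = 122
moe_share = 1.96 * sqrt(0.25 / n_sub) * 100   # worst case, p = 0.5
print(f"MOE on one candidate's share: +/-{moe_share:.0f} points")

# Simulate the experiment: repeatedly "poll" 122 people from a population
# that is a true 50/50 tie and watch the margin swing between samples.
margins = []
for _ in range(10):
    dem = sum(random.random() < 0.5 for _ in range(n_sub))
    margins.append(100 * (2 * dem - n_sub) / n_sub)  # D minus R, in points
print(f"largest swing between two samples: "
      f"{max(margins) - min(margins):.0f} points")
```

Even with a genuinely tied population, consecutive 122-person crosstabs routinely produce double-digit "swings," which is exactly why a 32-point subgroup shift carries so little information.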
But then the second use of polling is that critics of this poll, people who would like to discredit this poll because it showed a three-point advantage for Republicans in the generic ballot, are using that swing to say a 32-point shift in one month. That's not believable. So we don't believe the whole poll. Here's my advice. You will be a smarter consumer of polling if you never look at a crosstab. On average, right? Okay. It's too tempting. The crack cocaine effect...
of, like, looking at crosstabs, right? To, like, debunk a poll, or tell narratives based on the poll, is just too strong for most people to resist, right? I'm sure there are people whose lives are bettered by, like, using heroin, but you wouldn't recommend it to the average person. I don't know. I think that's, like, the most progressive thing I've ever heard you say on this podcast. No, I'm saying, like, for 97% of users, heroin probably is a net negative in life. Maybe there's 3% for whom it's pretty good. But you don't want to start poring through polling crosstabs unless you certify, swear an oath, that you understand how much variance there is. It's random. Right. However, people do want to understand how certain segments of the electorate are moving, because, for example... You don't have enough...
It's significant for understanding our country's politics, where our country's politics are headed, how the two parties campaign to voters to know that Latino voters swung eight points towards Trump between 2016 and 2020. And so people are going to want to look through the data and see, is this trend continuing? Has it reverted to the mean?
So what is a responsible way of trying to understand subgroups? Is it just oversampling those subgroups and doing thousand-person polls where you only contact independent women, or Latino voters, or young voters of X or Y demographic? Like, how do you responsibly do this? You know, typically, the average poll is in the range of 800 to a thousand people, right? And that's kind of been determined by the market over time. You have some with as few as 500, some that have 2,000, right? But call it 800-ish; if our model doesn't know the sample size, it imputes 800. That's the kind of market equilibrium for
The point at which there's enough news value in a poll where it's not purely noise. There's a lot of noise, right? But it's not, like, purely noise. And again, even in the topline numbers, 800 produces a large margin of error. Larger than people realize, I think, because pollsters often toss out results that don't fit their priors, right? So you don't see the actual randomness of an 800-person sample, and they use a lot of weighting and things like that, right? So maybe as a rule of thumb, let's say 800, right? If you have 800-person samples of the subgroup in question, then maybe that's the point at which I start to trust your demographic vibes analysis. And that requires either having a very large poll, or taking an oversample, or aggregating different polls together. And maybe we should do this at FiveThirtyEight. We should probably have a product where we gather every poll's Hispanic crosstab, right? It's a little hard because they all define it differently. You know, sometimes Hispanics get lumped in with "other," right? I think you compared doing that to bringing heroin to an Alcoholics Anonymous meeting on the Monday podcast. Although I'm glad to see you've come around to my idea. No, I mean, maybe you would... No, but maybe you would only let people see the compilation, right? You don't even show, like, the individual numbers. Okay. Yeah. Maybe. Shall we?
Maybe. I don't know how many different drugs we've compared polling to today. That's where we are with 20 days until the election. I mean, yeah, people want certainty. We've said this for many years in the run-up to elections: we can't give you an answer. All we can do is describe the uncertainty. Yeah. With that, shall we move on to some listener questions? Sure.
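The crosstab-compilation product Nate floats is straightforward arithmetic: pool each poll's subgroup crosstab, weighted by subgroup sample size, and only report the compilation, whose margin of error shrinks as the combined sample grows. Every number below is invented for illustration:

```python
from math import sqrt

# Hypothetical Hispanic crosstabs from four different polls:
# (Dem share among the subgroup, subgroup sample size). Invented numbers.
crosstabs = [
    (0.58, 110),
    (0.52, 140),
    (0.61, 95),
    (0.55, 130),
]

# Pool the subgroup estimates, weighting each by its sample size.
n_total = sum(n for _, n in crosstabs)
pooled = sum(p * n for p, n in crosstabs) / n_total

# The compilation's margin of error uses the combined sample size.
moe = 1.96 * sqrt(pooled * (1 - pooled) / n_total) * 100

print(f"pooled Dem share: {pooled:.1%} (n={n_total}, MOE +/-{moe:.1f} pts)")
```

Four ~100-person crosstabs that are individually near-useless combine into one estimate with a margin of error under five points, which is roughly the "only show the compilation" idea.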
All right, our first question comes from a listener named Brendan. What are some variables that you would like to add to the model but you aren't sure how to measure? And then we got a similar question from Zvi. What information that does not exist but could in theory be gathered would most improve the model's accuracy?
And this got a bunch of upvotes. So I think people were very eager to hear about the unmeasurable, or unmeasured, things in American politics. I think if you knew how much advertising was taking place in a campaign, and had some expectation of how that would shift, that would be valuable. I mean, if you knew, for example, and I'm not sure if this is the case or not, that Tim Ryan had dominated the airwaves in the summer, but you had advance knowledge of ad buys showing that would even out
in the fall, I would think that'd be something you'd adjust for. And what you'd really be looking at would be the discrepancy between the two candidates: overperformance due to a temporary advertising advantage. Yeah. Does that data exist, to the point where, like, we know in June the amount of money that J.D. Vance is going to spend on advertising in October?
I mean, there are places that track campaign ad spending data carefully. I think they're somewhat proprietary with their data. I mean, if you were really committed to it, you could probably figure it out. Right. So if you really wanted to make money betting against the Scottish teens, that might be the kind of thing that would be worth investing in. And our forecast does consider fundraising; it just doesn't consider how it's spent.
I mean, this is kind of another issue, right? Which is that fundraising has kind of always been a low-key underrated indicator for elections, right? It shows if you're organized; it shows some level of grassroots support. You know, Democrats now have kind of an institutional advantage with fundraising because they tend to attract college-educated voters. But now Democrats have mastered the art of directing campaign contributions toward Senate campaigns, right?
But in a way that, A, encounters diminishing returns, and B, doesn't indicate grassroots support so much as the same very active Democrats contributing to the same campaigns outside of their states, right? So that might be one reason why there's a skew.
Next question from Alex. Does a party passing their platform as legislation translate to positive electoral outcomes? Two high-profile counterexamples, passing the ACA in 2010, and then they also mentioned the Tax Cuts and Jobs Act in 2017. Of course, both the Republican Party and Democratic Party ended up losing seats after those two legislative successes. Right, yeah, I mean, the general conventional wisdom is that
passing stuff makes you less popular. There's thermostatic public opinion, meaning people tend to move against the party that's moving the ball, so to speak. I mean, the things Biden has done have, for the most part, been popular-ish. So I don't know if it's a factor. So it depends on whether or not your agenda is popular? Yeah, I mean, I think the notion that, like, we're going to pass this thing that our voters will really like and they'll turn out as a result, I think that's usually bulls**t.
Maybe not always bulls**t, right? I think the Dobbs decision is just a really good example. When the Dobbs decision came down, it wasn't like all of a sudden conservative voters showed up in polls saying, we've been waiting decades for this to happen. Now we're absolutely going to show up to the polls this fall to reward you. We saw the exact opposite, which was even though Democrats had shown for decades that they saw abortion as a less salient issue than Republicans, all of a sudden they all wanted to show up to the polls to vote on this.
And so I think disappointment and anger, or even fear, seem to drive voting behavior. And every other type of behavior. They seem to drive voting behavior more than, you know, happiness or satisfaction. That's the conventional wisdom. I mean, I think that's probably right. It's confounded because, like, the party out of power usually has trouble doing very much. Dobbs is an important exception to that. And you did see a shift there.
That goes against the typical shift. Next question. In the past, I've heard you guys and others discuss how Democrats are often underestimated based on Nevada polling. However, I've heard little to no mention of this during the current cycle. Is this being factored into Cortez Masto's chances for the midterms? We're in... I mean, to the extent... Second-guessing-the-polls season. Second-guessing-the-polls season. Again, I'm not a huge fan of looking at the way in which the polls missed in a state previously and...
assuming that's what will happen now. I mean, the thought used to be that Democrats underperformed in states with large numbers of Hispanic voters. If Democrats no longer have the same advantage with Hispanics, then a Hispanic-related polling error may not cancel out as well. Right. Nevada also has, like, a lot of white working-class voters, and also a lot of voters of color who are working-class. It's just kind of an unusual state. Right. It's, like, 44th in percentage of people with a college degree. It has a lot of migration from other states, right? It looks like the sort of state where... it's not quite a Trumpy state, right? It's not a culture-war state. Well, Trump lost it by only a little over two points in 2020. Yeah. But it's not kind of this new Democratic-coalition state either. And so, no, I mean, you know, we have the race at 50-50, and the polls... we actually, I think, have Laxalt ahead in the polls. So if anything, our model is, like, actually,
based on the fundamentals, helping out Cortez Masto a little bit, because the polling average has Laxalt up by 0.7. A related question: how, if at all, should we factor in early voting trends in Georgia, Nevada, and Florida in assessing the odds of the Senate races? So this is another just, like, big trend that happens in the final month of an election, which is we start getting early vote data. And of course, you can see in many states how a voter is registered, with which party. And
maybe you can guess how they voted based on that registration data. Of course, the problem being that voter registration does not equal... If crosstabs are the heroin, early voting is the fentanyl, right? You don't want to, like... Jesus Christ. No, but, like, don't. Just don't. Unless you're Jon Ralston. Well, what I was going to say is... Unless you're Jon Ralston of the Nevada Independent. It seems like the exception is Nevada, because they keep having... We keep saying this, and then they keep having, like, really accurate early vote data. What makes Nevada special? Because this doesn't happen in other states. Like, if it were actually predictive, we would have done this, and we would be able to tell you, having done the analysis, that it is predictive. But Nevada seems like this odd state where, you know, maybe this time will be the time that it's not. But, I mean, Nevada has, for a swing state, quite low turnout. So in some ways it is kind of a race to see who can actually get more voters
to the polls. One obscure factor that I think is actually important is that, because people have moved to Nevada fairly recently, their party registration is usually a good indicator of which party they support, which you might think you could take for granted. But there are lots of states where there are, like, rural ancestral Democrats who are no longer de facto Democrats. Or at this point, maybe even Romney-Clinton Republicans. Right. But in Nevada, people moved there pretty recently,
and so I think the party registration numbers are kind of, like, a de facto "who are you voting for?" If you had some very eccentric, moderate candidate, maybe it might be different, right? But, you know, Nevada races tend to feature Democratic Democrats and Republican Republicans, and the voters are likewise.
So this question comes from someone from Virginia on Twitter who describes themselves as a contrarian. And so I have them here as the Virginia contrarian. We took a question from them last week as well. What do you make of the Seltzer poll showing better numbers for Democrats in Iowa? This is another game we love to play.
But we got to give Ann Selzer a shout-out, because she has been on this podcast multiple times. She is known as the pollster who always gets second-guessed and then turns out to be basically correct in the end. And she just published a poll in Iowa showing the Democratic challenger to Chuck Grassley trailing by only four points. So that was an unexpected one. Let me put this in context. Yeah.
In the Lite forecast, which is kind of polls-only, we still have Franken, the Democrat, with only a 10% chance. By the time you get to Deluxe and account for other factors, Iowa's a pretty red state, it's 3%, right? So not quite impossible, more likely than the Knicks winning the NBA championship, for example, but still a fairly big long shot. I think the Selzer poll is interesting, though, because she doesn't give a f**k about, like, herding. I mean, it's like if you put someone in a cave and then in October,
14 days before the election, she comes out with, like, no prior expectations about what's happened, right? Plato's cave, or I'm mixing metaphors here. Like, there aren't that many pollsters like that. And she's not afraid of putting out a poll that pretty ostentatiously refutes the conventional wisdom. And we saw, I mean, the most famous examples are her poll showing Obama leading in
the 2008 primary or caucuses in Iowa. And then another good example is on the eve of the 2020 election, releasing a poll of Iowa showing that Biden did not have the advantage that he was expected to have in the state and sort of foreshadowing the polling mess across the Midwest and Rust Belt states. One thing that is interesting is that she had in that poll, the governor, Kim Reynolds, up by 17 points, which is a pretty typical amount for other polls in the state, right? So maybe there's something about
I mean, it's, you know, but like, hey, it is an independent data point. Not effective. I mean, one thing I always worry about is how much of a difference is there between polling and the conventional wisdom? It's always a little blurrier than you might think. Who feels entitled to publish what number when or to go back and say, you know what, maybe we should...
implement this likely voter model instead of that one, right? Like, polling is not purely removed from the conventional wisdom. Oh, for sure. Yeah. I mean, the subjectivity of data, we could teach a whole class on it. Yeah. I mean, right now a pollster could publish a poll showing Oz ahead, right? I'm not sure people would have done that if that's what their poll showed, like, three months ago. Mm-hmm.
And we know from experience that herding on average makes the polls less correct, not more correct. No. No? Well, yeah. I mean, it makes individual polls become more correct. Like, if I have a poll showing Chuck Grassley only ahead by three points in Iowa, right? If I only care about my accuracy, I'm probably not going to publish that, right? But it makes the polling average less accurate. Polling averages are more accurate when pollsters don't herd. But individual pollsters have an incentive to herd, because they will on average be less wrong. Yeah. Yeah.
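[Editor's note: the herding trade-off described here can be illustrated with a quick toy simulation. This is not FiveThirtyEight's methodology, and the noise level and herding strength are made-up parameters; it just shows that shrinking each new poll toward the running average makes individual polls more accurate while making the overall average worse.]

```python
import random
import statistics

def simulate(n_trials=20000, n_polls=10, noise=3.0, herd=0.5, seed=42):
    """Compare poll accuracy with and without herding.

    Each trial: the true margin is 0. Independent pollsters publish
    true + noise. Herding pollsters blend their raw number with the
    running average of previously published polls.
    """
    rng = random.Random(seed)
    ind_poll_err, ind_avg_err = [], []
    herd_poll_err, herd_avg_err = [], []
    for _ in range(n_trials):
        raw = [rng.gauss(0.0, noise) for _ in range(n_polls)]
        # Independent pollsters publish their raw numbers.
        ind_poll_err.extend(abs(x) for x in raw)
        ind_avg_err.append(abs(statistics.fmean(raw)))
        # Herding pollsters shrink toward the average of earlier polls.
        published = [raw[0]]
        for x in raw[1:]:
            published.append((1 - herd) * x + herd * statistics.fmean(published))
        herd_poll_err.extend(abs(x) for x in published)
        herd_avg_err.append(abs(statistics.fmean(published)))
    return (statistics.fmean(ind_poll_err), statistics.fmean(ind_avg_err),
            statistics.fmean(herd_poll_err), statistics.fmean(herd_avg_err))

ind_poll, ind_avg, herd_poll, herd_avg = simulate()
print(f"avg individual-poll error: independent {ind_poll:.2f}, herded {herd_poll:.2f}")
print(f"avg polling-average error: independent {ind_avg:.2f}, herded {herd_avg:.2f}")
```

The herded polls are individually closer to the truth, but they are correlated with each other (especially with whatever the first poll said), so averaging them cancels much less error: the prisoner's dilemma in miniature.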
So this is all about... It's kind of like a prisoner's dilemma, man. Yeah. I mean, are you willing to take a reputational hit for the good of the whole? Yeah. Most people aren't, and Selzer is. Go, Ann. Good for her. Okay. A related question from Mitchell: how do the quantity and quality of polls going into the midterms this year compare to previous years? Is that resulting in more or less certainty? There are definitely fewer polls in the House.
I'm just going off the cuff here, right? We could look at this. It seems like we have a fairly typical number of Senate polls. One helpful thing about the Senate this year is like the races are taking place in traditional swing states, right? Which is usually what happens. But like in 2018, you had key races in like North Dakota and West Virginia and stuff like that, right? So most of the swing states have established polling infrastructures. And so, yeah, we're not suffering for like a lack of Senate polls. House polls, yes, but not Senate polls.
Or governor, really. All right. We got some interesting questions this time around. I kind of like this one. Meme Hunter asks, you have a polls only forecast. What about a fundamentals only forecast? And how would that kind of model have performed retroactively? That's a good question. I don't know.
Pure fundamentals, man. I mean, there are pure-fundamentals forecasters of sorts, but usually they're not focusing on a broad range of fundamentals. They're focusing on just, say, the economy or... Yeah, I mean, even our fundamentals includes the generic ballot, right? So it includes polling, right? Yeah. Yeah, no, I mean, the fundamentals-only model would show a big GOP win, I think, any way you construct it. Even more than... Because even, I mean, if you're looking at, like, Biden approval,
then that's polling-based, right? So the default would just be to assume that things shift by six or seven points away from the party in power at the midterms, and that would cause a borderline GOP landslide. So, and this is an actual question we got from Ariel: would you care to predict what reasons slash excuses pollsters will give in the case of a significant error in each direction this cycle? So, I mean, it'd be interesting if, say...
Republicans actually won the national popular vote by the amount that you would expect for an out party in a first-term president's midterms.
Yeah, I don't know. I mean, I'd maybe frame this more charitably, which is, like, 2020 was crazy. We were in the middle of a pandemic. People were voting in different ways, not expected, but understandable in a way. In 2016, this sort of realignment along educational lines really started taking root. It caught people off guard. It caught the campaigns off guard to some extent, and pollsters too. What, at this moment in time, would we say is
a potential hurdle for pollsters that they might not be expecting? No, I think if pollsters have another year where there's a pro-Democratic bias, then they'll give up and say, we need to re-examine things from the bottom up, right? Whether that's correct or not, I don't know. But again, the reason why our model assumes, with a lot of qualifications, that polling is unbiased is because we expect pollsters
to make that correction, right? So we might say, hey, look, whoops, they f***ed up again, but they'll fix it next time. It's up to them, right? If you're a pollster, I think you can't really say that, right? You may say, we tried to fix it and the techniques we used didn't work, so give us another try, right? Like, in some sense, it would be, you know, I don't know... what if you had a big miss and Democrats beat the polls? I think you'd hear...
A couple of excuses. That pollsters were, like, freaked out by the prospect of another pro-Democratic miss and overcorrected in the other direction. Politico would do some article where they talk off the record to pollsters, or we would do some article, right? Maybe you'd hear some of that from pollsters. I mean, it's happened in the UK before, where for years Conservatives beat their polls, and then there's one year where Labour really did. I think you might hear about how motivated Democrats were by Trump.
I think you might hear about Republicans being demotivated by concerns about election integrity. I still do wonder, because this was a thing in Georgia last year, although the polls were good in the Georgia runoffs, right? But I still would not feel fantastic about having a party trying to turn out voters when the message from lots of elites in the party is that elections are rigged. Like, that still seems not great to me. Well, perhaps both from a capital-W Win perspective and a small-d democratic perspective. For sure, right? Yeah.
I also think that if the polls miss low on Democrats, there'll be a lot of throwing, like, Trafalgar and stuff under the bus, right? So there's this cottage industry of polls that are kind of fake news, right? And some of those pollsters are now rated highly by FiveThirtyEight. I'm just trying to predict how the industry will... Yeah. Right. Because one reason why pollsters herd is because you always have someone else to cast greater blame upon, right? You can always say, well, we weren't great, but we're not...
Center Street. Who's the pollster that always has, like, Democrats up by, like, four or six points? Like, Center Street PAC or something, right?
Going back to the New York Times poll for a second: Nate Cohn published some of the modifications they made to their polling practices this cycle, along with the results from this most recent poll. And one of the changes is that they now weight based on voting method. So they sort of try to get accurate buckets of voters who vote by mail, or early absentee, or in person on Election Day, etc.
Does that seem like a good idea, a good use of polling? Not really. Why not? I'd have to think about it more. It's... I don't know how they do it. I semi-rescind that answer. I mean, I think they did... So one thing Nate talked about, other Nate, is, like, can you condition based on verifiable behavior, right? So if you have a sample of actual voters and you ask how many of them contributed to campaigns, right?
And you have some sense of, like, how many contributed, and you maybe even can do a name match with FEC records and look that up. Things like that could be worthwhile. Tangible evidence of political participation. And so maybe. But my intuition is that it seems a little circular somehow, and a little risky. I mean, their logic for it is that Republican voters who vote by mail or vote early have different preferences than Republican voters who vote day-of, which I think is a way of saying that Republican voters who vote day-of are more likely to support a perhaps more conspiratorial or outsidery, quote-unquote, type candidate, like Trump or somebody in Trump's mold. I probably have a rare point of crosstab nitpicking that I'm going to agree with. Right.
You ready for this, guys? Are you going to negate everything we've said during this podcast? No, I just want to show that, like, if I say 90% of the time it's bullshit, I want to give you the 10% that's not, right? Okay. Quinnipiac took a poll of the New York governor's race where Kathy Hochul is only four points ahead. She's ahead by 10-ish points in other polls, right? That poll had a skewed distribution
of voters, geographically, relative to where people actually live in New York. So not enough in New York City, too many people upstate. A pollster should try to get a representative geographic sample, right? Even understanding there have been some population shifts out of New York City. Like, that's one of the easier, more verifiable things. So I think that criticism is good. So this recent poll from Quinnipiac, or Quinnipiac? Quinnipiac.
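[Editor's note: a minimal sketch of the geographic reweighting being described, i.e. adjusting respondents so regional shares match the electorate. All the numbers here are hypothetical for illustration; they are not Quinnipiac's data.]

```python
from collections import Counter

# (region, supports_candidate) for a toy sample of 100 respondents,
# deliberately over-sampling upstate relative to the true electorate.
sample = [("nyc", 1)] * 28 + [("nyc", 0)] * 7 + \
         [("upstate", 1)] * 20 + [("upstate", 0)] * 45

# Assumed share of the actual electorate in each region (made-up figures).
population_share = {"nyc": 0.55, "upstate": 0.45}

# Weight = (true regional share) / (regional share in the sample).
counts = Counter(region for region, _ in sample)
weights = {r: population_share[r] / (counts[r] / len(sample)) for r in counts}

raw = sum(v for _, v in sample) / len(sample)
weighted = (sum(weights[r] * v for r, v in sample)
            / sum(weights[r] for r, _ in sample))
print(f"raw support: {raw:.1%}, weighted support: {weighted:.1%}")
```

Because the under-sampled region (here, the city) is more favorable to the candidate, upweighting it moves the topline several points, which is exactly the kind of shift the crosstab criticism is pointing at.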
Quinnipiac? The Q poll. I think it's Quinnipiac. Okay. And other polls like it prompted a common question this week, which is:
Along the lines of where do you expect to see upsets, which defies logic in a way because if it's an upset, then how could we expect it? But you've already said that you sort of see our forecast of Georgia underestimating potentially Warnock. Are there any other places where you think like, hey, this candidate's being underestimated? And obviously, Warnock winning wouldn't be an upset.
I mean, well, a lot of the ones where I thought that was the case are no longer the case, right? You know, Ron Johnson constantly being a three-to-one favorite, or Laxalt now being a toss-up or slight favorite. Like, those are some of the races that I thought would move, and they mostly have. So maybe, you know, if you want an optimistic spin for Democrats: I think the kind of subjective shifts that people saw before are mostly now reflected in the polls. So maybe you should be careful about, like,
depending on an additional GOP margin. There also perhaps could be outcomes that are framed as upsets, even if they're foreshadowed by the actual data, like a Republican governor in Oregon. Yeah, I mean, the short answer is like,
governors' races can march to their own drummer. And so that's where you expect to see the upsets, either relative to the polls or relative to pre-election expectations. One more serious question, and then we'll get to our Fivey lifestyle questions at the very end of the pod once again. We have really sort of prompted a slew of questions about Fivey's life. All right, our final serious question comes from
Ben. It is, what do you think of the recent model, I believe this was from Bloomberg, that predicts a 100% chance that the US will be in a recession in the next 12 months? Yeah. I think if that person believes that model, they should be incredibly rich from shorting the market, which is pricing in more like a 50% or 60% chance of a recession. Look, I wrote about economic forecasting in
my book, The Signal and the Noise. Good promo right there, right? Click on the link. Find it at your local independent bookstore or Amazon. Or your local chain bookstore. Or your local chain bookstore, yeah. I get the same royalty either way. There you go. No, support independent bookstores, people. There you go. But first of all, there's a very, very, very long history of economic forecasts being dramatically overconfident, right? Secondly, and maybe more importantly, we don't have a lot of data on what an economy looks like
coming out of something like COVID, right? And obviously other pressures like Ukraine, and you have kind of ordinary business-cycle fluctuations on top of that, right? But we never had before an economy that fell as rapidly as it did or recovered as rapidly as it did. So we're kind of already in a world where the sample size in some sense is zero. And you have an economy where, like, job market metrics are relatively good, but where inflation is high, it's been persistently high. Some of that's because of
one-off events like COVID and maybe Ukraine, especially in Europe, right? And so you have like the Fed reacting to this like unprecedented amount of money being spent and that can affect like the yield curve and other things. But like, you know, you don't have like enough of a sample to know what this
looks like, really, right? Even in the best of circumstances, you'll say, oh, 100% chance of recession. You know, that might mean, like, six times where these conditions arose, it happened six out of six. That's not foolproof, right? It's not 60 out of 60 or 600 out of 600, right? So we're not in the space where we have data to do much in the way of empirical modeling, right? If you had really understood the fundamentals of the economy, you know,
If the top economist at Goldman Sachs came out and said, I have done this very rigorous analysis and talked to people and reported this story out, right? And like, here's all the proof and there's a 30-page paper and there's a, not 100%, but 95% chance, you know, I would believe that and I'm sure there are like some hedge funds and so forth that have people who are doing that kind of thing. But like, but like a plug and play, like,
logistic regression model just has no data to work with to describe this kind of post-pandemic, very weird economy that we're in. And so, yeah, I don't... you know... I mean, also keep in mind that, like... So, a bad use of forecasting to say there's a 100% chance that the U.S. will be in a... I mean, haven't we learned these 100% things don't go very well, people? All right.
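[Editor's note: the "six out of six" point can be made precise with a back-of-the-envelope Bayesian calculation. This is the editor's illustration, not anything from the episode or from Bloomberg's model, and the 6-for-6 record is hypothetical: even a perfect small-sample track record doesn't justify a 100% forecast.]

```python
# Suppose "these conditions" preceded a recession 6 times out of 6.
n_hits, n_trials = 6, 6

# With a uniform Beta(1, 1) prior on the true hit rate, the posterior
# after 6 successes and 0 failures is Beta(7, 1).
posterior_mean = (n_hits + 1) / (n_trials + 2)   # Laplace's rule of succession

# Beta(7, 1) has CDF F(p) = p**7, so the 5th percentile solves p**7 = 0.05.
lower_95 = 0.05 ** (1 / (n_hits + 1))

print(f"posterior mean: {posterior_mean:.1%}")          # 87.5%, not 100%
print(f"95% credible lower bound: {lower_95:.1%}")      # roughly 65%
```

So a 6-for-6 history is consistent with a true probability as low as about two-thirds, which is a long way from certainty; only something like 60-for-60 or 600-for-600 would start to pin the rate near 100%.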
We are now on to the Fivey Fox portion of the podcast. First question: is the warrant for Fivey's arrest in Nevada still on the books, or has that matter been addressed? Wait, what was this? Was he rummaging in the chips at the... There's no specification about what this user, who's known as Steeler fans...
Look. Is referring to? I will say, the last couple times I've been in Las Vegas, there were foxes running around. No, no, no, no. I have not seen Fivey. Oh. His excuse is that it's outside his range, right? He's not a desert fox. Doesn't like the desert climate. But he used to go. I used to see him in Vegas all the time. I know when, like, Celine had her residency there, he was there every night, basically. Okay. I mean, I don't know what you're implying about Fivey, but like... We actually got that question. Okay. Yeah.
The answer is, we don't know if Fivey even has a sexuality. Yeah. Yeah. Next question.
Next question. Take that information about Fivey's interest in Celine's residency as you will. I can offer no more details. Next question: what are Fivey's thoughts on furries? Does he find humans who dress like animals to be offensive or fun? Is that appropriation of Fivey's identity? I haven't broached that subject with him. Maybe we should have Fivey on the podcast himself next time, and you can ask him.
I think that Fivey resides within you. So maybe you can put on your Fivey voice and I'll re-ask the question. No, you know, Fivey is actually, uh,
doing interviews today with other, uh, podcasts. So... Oh, okay. Where's... where's Fivey? Is Fivey on Joe Rogan today? Probably on Joe Rogan, yeah. Talking about... He's probably on Joe Rogan talking about... We gotta really, like, up our... Fivey'll be a Rogan bro. No. Yeah. Okay. Next question: is Fivey more of a frequentist or a Bayesian? Oh, I mean... I mean, Fivey is, like, more of a frequentist than he should be, I think. And what does that mean, for the laypersons amongst us?
I mean, Fivey is kind of rigidly empirical, right? I think he maybe sometimes doesn't think enough about priors and kind of meta-level problems and analysis. Yeah, no, I mean, he would call himself a Bayesian, but I think he's, like, a closet frequentist. Okay. I hope, Colin, that answered your question. Final question: does Nate put a blanket over the model at night so it can rest?
I mean, the model goes to bed at, like... The model is not Fivey, keep in mind, right? Yeah. Yeah, yeah, yeah. These are two very... The model is totally strung out on all the drugs you've been talking about. Roaming the streets. Of Miami. Of New York at 5 a.m. I don't see the model until... Yeah. I thought the model lived in Miami. The model's just, like, bouncing around. So why would I be in a position to put a blanket over the model to begin with? Well, you spend a lot of time in Miami these days. Not recently. It's an election. I can't have fun.
Yes. So the answer is yes, but it's at like 8.30 in the morning after the model finally gets home. And the model has to wake up at like 9.30 to go to work anyway. I know. That's when we start typing in the new polls. Mm-hmm.
All right. Any closing thoughts? Oh, we have a live show. We are less than a week away from our live show at this point. It will truly be a lot of fun. We'll have special guests. We're going to do some model talk. We're going to take some questions from real live listeners in the audience. And it's on October 25th at 6th and I in Washington, D.C. You can find the link in the show notes below.
Are the tickets available still? Yes, absolutely. And also virtual tickets so people can watch from the comfort of their own home. Living room. Get an in-person ticket. Get an in-person ticket. Unless you have COVID. Then don't. Yes, come in person. It will be fun, but let's leave it there for now. Thank you, Nate. Thank you.
My name is Galen Druke. Sophia Leibovitz and Kevin Ryder are in the control room. Ben Schelfefer is on video editing. Chadwick Matlin is our editorial director, and Emily Vanesky is our intern. You can get in touch by emailing us at podcasts at 538.com. You can also, of course, tweet at us with any questions or comments. If you're a fan of the show, leave us a rating or review in the Apple Podcasts store, or just tell someone about us. Bring a friend to our live show. Thanks for listening, and we'll see you soon.
Bye.