Did you see how New York Magazine put out
a new set of etiquette guidelines? - Oh God, that was so cringy. - Did you think any of them were good? - Some of them were good, but most of them were bad. - Good or bad advice? When casually asked how you are, say good. - 100% agree. - It's neutral and doesn't force someone to endure a trauma dump of a spiel on how the world is up in flames. - Yeah. - Yeah. This I think is maybe the one I most agree with. If I ask you how you're doing, I'm just saying hello. - That's right. - That's what I, like, that's what's happening.
Hello and welcome to the FiveThirtyEight Politics Podcast. I'm Galen Druke. I'm Nate Silver. And this is Model Talk. A Monday Model Talk. A Monday Model Talk. A little weird. And I'm sure folks thought they wouldn't hear this intro for a while, but Christmas comes early this year.
The doldrums of February. The doldrums of February. The reason that we're talking is because you did a comprehensive overview of how the 2022 midterm forecasts did, which we're going to talk about. But first, it's been a minute. The 2024 presidential primary is kind of starting, although it seems like it's taking longer to get off the ground than it did in 2019. Can I get a...
little vibe check with you. Is 2023 like a weird year? Yeah. Like it's a weird number. It's a weird number. It also, it kind of feels like we don't really know what we're doing. Yeah. In the sense that in 2019, didn't Elizabeth Warren announce her presidential campaign while we were on winter break?
And then by this point in the cycle, there were already four serious candidates or something like that running for president. And now, I mean, I guess Nikki Haley has basically told us she's announcing for president on February 15th. Trump's announced. Trump has announced. But then that also feels... In fact, it's never felt less like Trump is running for president than this current period during which he's been running for president. I mean, we're going to have lots of time to talk about...
Former and perhaps future President Donald Trump. I mean, for you, when does the primary begin? Not a moment too soon. No, I don't know. I mean, like, look, obviously the invisible primary has begun now, right? I mean, like, I'm not sure there's like anything super duper consequential that'll happen in the next six months. Six months?
Well, people would say that... I guess, you know, people would say that... DeSantis is going to announce? Oh, God, when are the debates? So the debates started, I think, in July or something like that in 2019. Oh, my God. I can't even f***ing deal with it. You can't? No. Well, get ready. Okay, so...
Let's get into it. Okay. As is tradition after an election cycle here at FiveThirtyEight, last week we published an assessment of how our forecasts did, how the polling averages and seat gain projections compare with what actually happened. If we said there was a 70% chance a candidate would win a race, did that actually happen 70% of the time? And we're going to get into all of that. But let's begin with the elephant in the room, which is
In the months since the midterms, we've repeatedly heard claims that the media, including us, predicted a red wave and that we got it wrong because the wave never materialized. This has come from casual Twitter pundits and from serious sources as well. So the New York Times published an article titled The Red Wave Washout, How Skewed Polls Fed a False Election Narrative.
And then the sub headline reads, the errant surveys spooked some candidates into spending more money than necessary and diverted help from others who otherwise had a fighting chance of winning. Is there any truth to this perception? Any truth, that's a low bar. I mean, there were definitely like polling firms that had results that were much too Republican leaning, but like sites with good methodologies, transparent methodologies like 538 had polling averages that were pretty darn accurate. And I think the way we wrote about
the story and talked about the story was pretty balanced relative to what the evidence said, right? I mean, I don't know. I'm not even sure how spicy I should be. What spice level do you want today? Medium. Medium. I mean, it's kind of frustrating. Here's medium spice. Medium spice. I don't think people actually really give a shit about being truthful. Maybe it's very spicy, right? What do you mean? Like,
they're not willing to do the work to figure out whether or not we predicted a red wave. They just want to say we did because it feels good. And to be fair, in the Times story, we're mentioned in like the 33rd paragraph or whatever, right? But like people, it's like people always used to blame like, oh, innumeracy. It's like when you have like a forecast that says like a 40 or 45% chance Democrats get the Senate, like even dumb people understand that's a pretty good chance, right? It's not about innumeracy. It's about like honesty on some level, right? Or about like,
actually being driven by accuracy and truth. And like, I think those, I'm trying to keep it medium spicy, so maybe I shouldn't mention particular outlets. But I think there's another thing going on here too, which is that it is true that if you paid attention to some media in the run-up to the election, you might have thought there was going to be a red wave. And so, and perhaps in that coverage,
folks would show polls that suggested that there would be a red wave. And so if you wanted to tell the story that there was going to be a red wave for your audience in the run-up to the 2022 midterms, which did happen on Fox News, for example, you could. Sure, there are people who just want to say the forecasting industry sucks and we're just going to say that they're wrong every time. But I think there are also some people who did think there was going to be a red wave and were surprised.
I mean, it's tricky because there is like some... does the average listener out there distinguish between 538 and RealClearPolitics? I mean, I don't know. I mean, I like the guys at RCP, but like they were making interesting, put it that way, methodological decisions where their polling averages were about a point or a point and a half more GOP-leaning than ours were.
I think they were not terribly defensible decisions, right? They were just kind of, like, picking polls, to me, pretty arbitrarily. On top of that, they would show the average polling error over the past couple of elections, which underestimated Republicans. And then on their forecast, they would show, well, this is our polling average. Now imagine if the polls are as off as they were in 2016 or 2020. This is how the Republicans would do. We talked about it at the time and said, there's no reason to be doing this. Yeah, and there's like the New York Times.
They published like these polls and they do very good polling, right? They had like Democrats like winning in a bunch of like competitive House districts that were frankly very good polls for Democrats. And they kind of spun it as, oh, this is like bad news. It's just frustrating when like, you know what you said and what you wrote on the site. And I think we take...
pride in trying to explore the full universe of probabilities, right? We're not trying to lean in some direction. And a lot of people, they're like, there's a forecast, but, oh, well, you know, the GOP might outperform that, they often do, right? And we try to be pretty careful about, like, presenting, sorry for the cliché, both sides of the argument relative to where our forecast was. We, like, deliberately did this exercise where I had, like, these fake conversations with, like, pieces of my alter ego about exploring how Democrats or Republicans could overperform.
We wrote stories, or I wrote stories, about, like, why you shouldn't necessarily assume that there's going to be a polling error like you had in 2020 or 2016. Right. And so it's just frustrating when your work is
misrepresented. So to clarify, our deluxe forecast showed Republicans favored to win the House at 84% and slightly favored to win the Senate at 59%. And that's the forecast that we emphasize, the deluxe forecast. The light forecast showed the Senate as an exactly 50-50 proposition and the House as a 75-25 proposition with Republicans favored. And the light forecast, of all things,
relies more on the polls than any of the other forecasts. However, we did lean into the deluxe forecast. And so we got a question on that topic from Colin. He says, you hedged towards deluxe, which was the redder of the forecasts for vibes reasons, is what Colin says, when every time in the past you've gone with classic. So the models themselves may not have been off, but from the editorial standpoint, you definitely strayed into the red wave narrative.
How would you respond to that? So we in 2020 made Deluxe our default. And the kind of thinking was like, there's a pandemic. People just want the answer, right? And like Deluxe is the forecast that I would bet on, right? Because it does kind of hedge toward the conventional wisdom a little bit. Whenever you build a model of any kind and there's a market or a handicapping service, right? Like you don't assume that
your model is purely truthful and that the market is dumb, right? You assume that maybe your model is directionally correct. But if you want to, like, make an objective forecast, you would blend that with the market's view, right? So like that's kind of, I think, the best practice. I mean, there's different issues here, right? Number one,
It wasn't crazy to look at the history of midterm elections and say, oh, you'd expect the GOP to have a good year. It's a little bit weird that they're not doing better in the polls, right? That's why this is a little bit tricky, right? Because when we talk about...
So, which is why maybe I'm slightly more sympathetic to people who are like, I thought Republicans were supposed to be doing well, which, I mean, they won the House. They did fine. That like, I don't think it's just people being untruthful. I think there were people who were surprised at how well Democrats did. It's not crazy to say, A, this should be a good GOP year. B, we've had a couple of years where the polls were off in the Democratic direction, right? Like, it wasn't crazy to think the GOP would
beat its polls, right? I mean, it wasn't crazy at all. Yeah. And that, but that kind of shows up in classic more than deluxe. Deluxe adds the ingredient of expert forecasts. So, Cook Political Report, Inside Elections, and what's the third one? Crystal Ball. Crystal Ball. And that definitely induces some element of vibesiness. Right. I mean, part of it is like, OK, so those like expert forecasters are not particularly good at diagnosing like the macro environment.
They are good at like individual races, to say, this candidate... I interviewed this candidate and this candidate sucks, right? And when voters see that, this candidate will underperform. Whereas, I actually have some good sources and I know the internal polling has this candidate down five points even though the Rasmussen poll shows them ahead by five points, right? So that's valuable info. But like I do feel like, as like social media increases, it's like everything gets so vibes-based. Let me say what I mean to say, right? I think
Those groups are good, but in the modern social media environment are too influenced by vibes, right? They're too self-aware of their role and their incentives. And they're kind of like too big in the sense of like people notice what they say. And like, so I think their incentives aren't as good as they, as they once were.
Which is, I mean, that's the only thing that Deluxe does that Classic doesn't. So is the Deluxe forecast going to go the way of the Nowcast? Oh my God, the Nowcast. I feel like there's just, you know, there's a sort of blank tombstone next to the Nowcast waiting to be filled by the Deluxe based on what you just said. Maybe we go back to what we did in 2018 where Classic is the default, right? And say, yes, here is our purely objective model, right? I mean,
That's a loaded term, objective, whatever that means. But then maybe, if you want to see Deluxe, you know, it's like... it's like the deluxe version is not always the best, right? You know, with a burger, you don't necessarily need to get like some fried egg on the burger. Okay. The burger is rich enough. The reason that we defaulted to the deluxe version of the forecast is because over time the deluxe was the most accurate. In backtesting. In backtesting. Yeah.
Can we say that during the time that we've been doing this, it has in fact been the most accurate? No, they've all been tied, basically. Yeah. I mean, you couldn't tell. We only have this set up for three years, but you couldn't really distinguish in a meaningful statistical way the three forecasts. So you're not ready to eulogize the deluxe forecast just yet? I mean, if I had to guess, we'd do what I just said, right? We would make kind of Classic the default; if you want to see Deluxe, you can...
It probably still would be the basis on which I would make bets. But classic is kind of more in the mode of like what a 538 model is supposed to be. You go as far as you can with statistical modeling. And you recognize that a model is one view of the universe and the map is not the territory. And so therefore there's other information that might be valuable. But like that maybe should be the default, I think.
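A minimal sketch of the blend-with-the-market idea Nate describes, which is the logic behind why Deluxe is the version he'd bet on: the probabilities, the 60/40 weighting, and the function name are all invented for illustration and are not anything FiveThirtyEight actually uses.

```python
# Toy illustration of blending a model probability with a market-implied
# probability, as a hedge against the model being overconfident.
# All numbers and the weighting here are invented, not FiveThirtyEight's code.

def blend_probabilities(model_prob: float, market_prob: float,
                        model_weight: float = 0.6) -> float:
    """Weighted average of a model probability and a market-implied probability."""
    return model_weight * model_prob + (1 - model_weight) * market_prob

# Example: the model says Democrats hold the Senate ~45% of the time,
# while the prediction market implies roughly 32%.
print(f"Blended: {blend_probabilities(0.45, 0.32):.0%}")  # 40%
```

In practice you might blend in log-odds space rather than raw probabilities; the point is simply that the model's number gets pulled toward the consensus instead of being trusted outright.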
All right. We've touched on this before, but we now have a fuller picture and have done this assessment. So how did the polls actually do? What actually happened versus our averages and what the model expected? So we're still doing our pollster ratings, which is a separate exercise from this that will come out soon. What's interesting is like, it's not that Democrats really beat their polls by very much at all. They kind of came up clutch in key races. Our model has a forecast of the House popular vote.
Which is a little bit more complex than the generic ballot because there are some districts where you have no candidates from one party running. Right. But like,
That forecast of the House popular vote was quite accurate. I mean, there are different versions, right? But it's between two and four points, and it wound up being three, basically. Do you have that in front of you? Yeah, I do. So the light version, the mean House popular vote margin, was Republican plus 2.4 points, and the actual margin was plus 2.8 compared to the deluxe version, which actually –
I guess Republicans would do plus 3.8 in the national popular vote. Yeah. So, like, the polls were not directionally off very much. I mean, some polls were. Trafalgar and Rasmussen were, right? But like the polling average was pretty good. And that method worked. And Democrats, I think, you know, there were a lot of bad GOP candidates in key races. And independents reacted strongly to that. It was not a year where you had particularly strong Democratic
turnout, right? You actually had more Republicans in the electorate than Democrats per both of the exit polls. So the story about like, oh, you know, Democratic turnout was underrated, that's not an accurate story. Yeah. So to add one more number to this conversation, our generic ballot average showed Republicans winning the national popular vote by 1.2 points. Of course, it's complicated. Republicans beat that, but there is like a point they gain from uncontested races. But still, yeah. I mean, like, you know, it wasn't like a super...
blue environment. It was kind of the environment that the polls predicted, maybe not the environment that the narrative predicted, right? But it was quite consistent with the overall environment with the polls. Yeah. I mean, a one or two point national popular vote error for a polling average is more accurate than the average of polls going back decades, right? Which is to say that 2022 was more accurate than usual,
Yeah. I mean, we'll do the math for the polling averages, but this was one of the more accurate years for polling. 2018 was a good year for polling too, by the way. Maybe it's just a Trump thing. Maybe Trump is hard to nail down, because 2018 and 2022 were pretty good. And so, but I want to, you started to explain this, but I think it's sort of really important to drill down on the specifics. The national popular vote average in the polls and the national popular vote average that our forecast projected
were very, very close to what actually happened. However, when you look at our forecasts at the mean number of seats that we expected Republicans to pick up, there's more of a discrepancy.
And the reason is because Democrats overperformed in the most competitive races, whereas Republicans did quite well in races that weren't all that competitive. So it's kind of the reverse of the electoral college split that we have seen in recent years where Democrats did well, like very well in the races they really needed to win. And Republicans did very, very well in the races they were going to win basically no matter what.
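Here is a toy calculation, with made-up districts, of how a forecast can get the national popular vote almost exactly right and still miss the seat count by several seats when one party wins a lopsided share of the closest races; the numbers are invented purely for illustration.

```python
def national_margin_and_seats(dem_tossup_share: float) -> tuple:
    """Toy House: 200 safe D seats (D+30), 200 safe R seats (R+30), and 35
    tossups each decided by a single point. Equal turnout per district is
    assumed, so the national margin is just the average of district margins."""
    dem_tossup_wins = round(35 * dem_tossup_share)
    margins = ([30.0] * 200 + [-30.0] * 200 +
               [1.0] * dem_tossup_wins + [-1.0] * (35 - dem_tossup_wins))
    return sum(margins) / len(margins), 200 + dem_tossup_wins

# Winning half vs. three quarters of the tossups barely moves the national
# popular vote, but shifts the seat count by roughly the size of the 2022 miss.
for share in (0.50, 0.75):
    margin, seats = national_margin_and_seats(share)
    print(f"Dems win {share:.0%} of tossups: national margin D+{margin:.2f}, {seats} D seats")
```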
Yeah. Of the closest races, like Democrats won like 75% of races that were decided by like three points or less for like senator and gubernatorial races. They lost Wisconsin. That was a close one that they lost. For Senate. For Senate. But like otherwise, like the races that came down to like a point or two, like Nevada, they tended to win or the Arizona gubernatorial race and so forth. Maybe it's good luck. Maybe it's they knew where to put their resources. Obviously, the GOP candidates. Yeah.
Bad. I mean, what I mean by bad, I mean, some combination of being far right, being scandal-plagued and being inexperienced. Right. And yeah, I mean, the fact that they nominated all these goofy candidates cost them...
The Senate in all likelihood, right? And we've talked about this plenty, but the places where they didn't have, your word, goofy candidates, Republicans did quite well. New Hampshire, gubernatorial race. Georgia, gubernatorial race. Florida. Or New York's third district, I think it is, where a young man named George Santos, truly a next generation leader, like a good candidate like that, did fine, right? All right, all right.
Like you said, we are going to put out our pollster ratings later this year. But friend of the site, G. Elliott Morris, sort of already came to this conclusion. The average absolute error of polling averages in competitive Senate elections now looks likely to come in around 2.5 percent, about half the expected error since 1998. And polls look to have underestimated Democrats marginally, by about 0.5 to 1 points. Yeah.
That's my expectation. I mean, and again, I don't think he was looking at the non-competitive races. Sometimes those can have an effect. But clearly, like, I mean, the polls did well in, like, key Senate and gubernatorial races. There just, like, weren't really very many upsets, right? I mean, Katie Hobbs was down by like a point or two in the polling averages in Arizona, right? But like a point or two is not much. I mean, there were not any major upsets in Senate or gubernatorial races.
There were minor upsets, but, like, not major upsets. And to put this in perspective, the polls underestimated Democrats less in 2022 than they underestimated Republicans in 2016 or 2020. Yes. Remember, 2020 was actually quite a bit worse than 2016. People kind of forget that. But like it would be surprising to me, when we do a polling average update, if, you know, if it's more than like a point or a point and a half; it might be less than that even. So, yeah. So it seems kind of weird to like be in this panic about polls being
too Democratic when they were like barely too Democratic or too Republican. I mean, you know, after kind of two of the past three cycles where they had a pretty strong pro-Democratic skew. Yeah. I don't know.
I want to put some numbers to the seat pickups that we projected as well so that we can be transparent and accountable. So the mean projection in the House, according to our forecast, was that Republicans would pick up between 16 and 19 seats. They, in fact, picked up nine seats. And then in the Senate, the mean outcome, according to our forecast, was that Republicans would pick up between zero and one seats. Democrats picked up one seat.
That's for reasons that we basically just talked about. Yeah. I mean, again, like in the House in particular, it's kind of like Democrats just kind of coming up big in the key swing races where the overall national vote was right in line with expectations. But like if you have 435 races, then being off by, what are we, seven or eight? Like it's hard to pin the tail on the donkey so precisely, right? So like, yeah, I mean, you'd like to get the number exactly right, but like that's like
A pretty normal, typical miss. There were, however, some major upsets.
Yeah. Were there more big upsets this year than there have been in past years? Because, okay, and here specifically we can point to, you're right, the major upset here is in Washington's 3rd Congressional District, where Democrat Marie Gluesenkamp Perez defeated Republican Joe Kent despite only having a 2% chance in the deluxe version and a 4% chance in classic. That's a big upset.
Yeah, but it's supposed to happen. It's literally supposed to happen if your model is well-designed. You're going to have, like, I mean, there are probably 100 races that were somewhere in that range, right? And so literally, you're going to have some 1% or 2% chances come through if you're designing the model correctly. Well, you're talking about calibration. So we went through and looked, as I said, if we said there was a 70% chance something would happen, did it actually happen 70% of the time? How well calibrated was this cycle?
Quite well calibrated. I mean, our forecasts were a little underconfident, meaning that there were fewer upsets than expected. It's been a pattern in the past. If you want like a correct, honest, hot take about 538, you can come up with a take that our forecasts are underconfident, right? That when we say 80, it should really be 90, right? But here's the problem, which is that so, so many of these races are uncompetitive. And so that doesn't really...
Like, people are only really going to focus on our forecasts in competitive races. And so what happens in the, like, 80% plus zone, I think, for actual everyday people's lives, not just like a statistical game or whatever, they're not really focused on those. They're focused on the, you know, in between 70 and 50% chance. I don't know. I mean, there's people, you know, like... And so when it comes... Stacey Abrams or Beto O'Rourke, right? Like, those races were...
shown by polling and analytics to not be very competitive, even though I think in the absence of polling, people might have said, oh, you never know, right? Okay, sure. Or Gretchen Whitmer, right? I mean, these are races... There are some on both sides. Like, this is, yeah. But as far as our forecasts are concerned and their calibration, when it gets to the sort of 70% to 50%, 75% to 50% chance range, how well calibrated are our forecasts? Very well. Yeah.
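As a concrete illustration of what that calibration check computes, here is a minimal sketch; the races and probabilities below are invented for the example and are not the published FiveThirtyEight data.

```python
# Minimal calibration check: bucket forecasts by predicted probability and
# compare the average forecast in each bucket with how often the favorite won.
from collections import defaultdict

# (forecast probability that the favorite wins, 1 if the favorite actually won)
forecasts = [(0.96, 1), (0.92, 1), (0.88, 1), (0.74, 1), (0.71, 0),
             (0.66, 1), (0.62, 0), (0.56, 1), (0.53, 0), (0.51, 1)]

buckets = defaultdict(list)
for prob, won in forecasts:
    buckets[int(prob * 10) / 10].append((prob, won))  # floor into 10-point bins

for bin_floor in sorted(buckets):
    rows = buckets[bin_floor]
    avg_forecast = sum(p for p, _ in rows) / len(rows)
    hit_rate = sum(w for _, w in rows) / len(rows)
    print(f"forecast ~{avg_forecast:.0%}, favorites actually won {hit_rate:.0%} (n={len(rows)})")
```

With real data you would want many races per bin; underconfidence shows up as actual win rates running consistently above the forecast probabilities.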
Yeah. And if anything, historically a little underconfident, in that there are, again, fewer upsets than our model purports. So I think that what people will perhaps focus on in our assessment, which I encourage people to go read; the article itself is on our website, and it's called how our 2022 midterm forecasts performed. Folks will see that in our deluxe version of the forecast, for toss-ups that leaned R,
we only called 17% of those correctly. Whereas a perfectly calibrated forecast would have called 50% of them correctly because they were basically toss-ups. So that's an area where there were some struggles this time around. No, I wouldn't say they're struggles. I mean, I'd say that like, first of all, you know,
there was something systematic going on where Republicans nominated underperforming candidates, right? But, like, these sample sizes are pretty small. And there are different versions of the model. So if you have, like, six of these or 12 of these, right, it's not that hard, when you flip a coin, to have two heads out of 12 tosses or one head out of six tosses. Right. And so, like, it's just not that interesting, I don't think. I mean, if you want to find ways, I mean, we like literally publish a list of, like, every race that was an upset. Right. And so if you want to, like, find fault with us,
then, I mean, you have all the tools. I mean, that's part of the issue also is like... We're so transparent that we help people hate us. Yes. Well, yeah. No, but like it's like... But there's so many different types of outputs that we publish, right? That, like, you can find ways we were wrong, or right for that matter, right? But like if people fundamentally don't get it or don't care about being honest brokers, then you can always make us look bad. I mean, it's just like... Like this is a year where, like, when I was in the studio...
on ABC on election night, I'm like, this is like about an eight or nine out of ten as far as how well our forecast did, right? You're not gonna have everything perfect, but like, this is like pretty good. Um, and like, maybe seven and a half out of ten, right? And like, but people... it doesn't matter. It doesn't matter. It doesn't matter. Yeah. I will say I was equally surprised by sort of some of the backlash after the fact, given that I also felt on election night like,
Wow, the polls did quite well. And there were some existential questions about polling. I mean, we had those conversations where it was like, if the polls screw the pooch again, like, what are we going to do? Are we going to have to shut down shop, whatever? And the polls did well, in historical terms, which is the only thing we're really ever talking about here. Wow.
Peter asks, the real question of forecast accuracy is how much better you do than another less advanced forecast. If a layperson can guess as accurately as your model, what's the use? How does the model rank that way? Well, I look at... Because I'm a gambling man. I look at... Although I don't gamble on politics. I look at how our forecasts do relative to prediction markets. And we were...
more bullish on Democrats than prediction markets, right? We had them with a 40% to 50% chance of winning the Senate, depending on which model you look at. Prediction markets had, like, a 32% chance. So you would have made like a nice little chunky wager on Democrats and won that wager on the Senate. In the House, there was less of a gap. But like, to me, it's like we were bullish on Democrats relative to the conventional wisdom. I guess people, like, can't really... I mean, this is kind of the case in 2016, right? Where, like, we were more bullish on Trump
relative to consensus, but still below 50%. Right. And so it's kind of like, I guess it kind of feels like a no-win, right, you know, in that spot. So it's kind of similar to 2016 in the sense that, like, we were more bullish on Democrats relative to the prediction markets. I think if you kind of could somehow, like, distill, like, the, you know, New York Times conventional wisdom, it would be even
more bearish on Democrats than, like, the prediction markets. Prediction markets kind of, like, are striking a balance between conventional wisdom per se and, like, the models, usually. So I guess what's annoying is, like, we were on, I mean, not to like a huge degree, but like we were on the right side of the bet in terms of, like, how our forecast lined up relative to what the average, like, pundit, or even the average person who's better than a pundit and willing to actually, like, put money on the line, was thinking.
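A rough back-of-the-envelope version of the bet Nate is describing, using the 45% and 32% figures from the conversation; the $100 stake is arbitrary and fees are ignored.

```python
# Expected value of a $100 bet on Democrats holding the Senate, if a prediction
# market prices the event at 32 cents per $1 of payout while a model puts the
# probability around 45%. Purely illustrative numbers.

model_prob = 0.45     # model's probability that Democrats hold the Senate
market_price = 0.32   # market price per $1 of payout (implied probability)
stake = 100.0

payout_if_win = stake / market_price            # $100 buys $312.50 of payout
expected_value = model_prob * payout_if_win - stake
print(f"Expected value of the bet: ${expected_value:+.2f}")  # roughly +$41
```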
Okay, this is a related question. Martin says, the calibration article was really good. However, it was based on the last forecast before the election. Yeah. Why not look at earlier forecasts, for example, one month or two months before? I'd argue that they have much more impact on people than the one just before the election. No, I think the irony is that like, if you look at our forecast from like September 15th, then it would have like exactly nailed everything.
like everything, right? And at least directionally. In fact, there is, like... So we actually had an error: the deluxe forecast as published was using out-of-date race ratings from one of the groups that we use for expert forecasts, Inside Elections. We kind of were having some issues with processing it and we basically forgot to, like, turn a switch back on, right? So it's using, like, the late September version of those ratings, which were actually better than the November version, right? Because there was this, like... I mean, this is kind of the question that, like, I don't have an answer to, right?
Clearly, the polls showed some shift back to the GOP between Labor Day and November. Was that real or was that fake? But no, it is true that, like, the polls in mid-September, and therefore a forecast, would have been, like, more accurate, I think, than the ones on election day. We actually have an interactive on our site. We can see every forecast from every date for every 538 thing we predict probabilistically, right? So those actually do contain, like... yeah, I mean, the comment is completely right that, like,
There's no intrinsic reason why you would only look at election day, right? Any day that we have a forecast published is a forecast that you can scrutinize and that we can be held accountable for, right? And in some ways the forecasts from earlier out are actually more interesting. So, like... but that interactive does show the entire history. Can we key in a little bit more on the final two months of the election for a second?
you posed the question of whether or not the shift back towards Republicans was like fake. Basically, a couple things happening. Pollsters being worried about overestimating Democrats again.
And so maybe not publishing certain polls that show Democrats doing well or what have you. There's another factor here, which is there were partisan pollsters that were publishing in some states very aggressively polls showing Republicans doing well. I mean, Pennsylvania is one of those places in particular. There's also the possibility that throughout the entire cycle, polls were underestimating Democrats and had the election happened
around Labor Day, Democrats would have done even better. And that there was still sort of the hangover from the Dobbs decision at that point in the cycle, but that other factors like concerns about the economy, what have you, did end up ultimately helping Republicans. Those are three possible factors in terms of what happened between Labor Day and November 8th.
Do you have thoughts on which of those is likeliest, or all three, or which concerns you the most? I mean, it wasn't crazy to think the environment was getting better for the GOP. Right. I mean, if you looked at polling saying what's the most important issue, abortion declined over the course of the cycle. Right. You had some fairly bad
economic reports late in the election cycle. We've had some better reports since then, but like people forget that like the data people saw in real time, like, was not terribly great about, like, inflation coming down and so forth. Then there's some notion of, like, the election kind of becoming more in line with the fundamentals. I mean, there were some states, right? Like Ohio was a case where, like, J.D. Vance won by a pretty solid margin in the end, right? You know, Wisconsin, Mandela Barnes came actually pretty close, but, you know, polls showing him winning over the summer would not have been...
So I don't know, right? I mean, it's like, our generic ballot average didn't shift that much, did it? Can we look that up? It's already on 2024. Okay. So from Labor Day, let's say from September 1st,
Democrats were leading in the generic ballot average by a point. And in the end, Republicans were leading by 1.2 points. So it did shift about two points in that final stretch. People weren't gaming, if you want to call it that, the generic ballot average as much as they were trying to in Pennsylvania or something, right? Like lots and lots of polling firms put out generic ballot polls, right? It's not just Trafalgar and stuff. And so that shift was visible even in the higher quality polls.
So you... Which may mean that maybe they were herding too, right? Um...
So which means that you're convinced by the idea that the environment did actually shift in the final two months of the election. A generic ballot shift. I think it's probably mostly real that there's a point or two. Okay. Well, let's talk about the partisan polls. Yeah. Maybe the spiciest. Well, I don't know. You said you were getting spicy earlier, but we'll allow you to add like a little more red pepper flakes to this conversation. Okay.
We saw this cycle that some Republican-affiliated pollsters, or maybe not on-paper Republican-affiliated pollsters, but pollsters that we know are Republican-affiliated, and pollsters that had done quite well in the past couple cycles because they had shown Republicans doing better than the polling average. So I'm talking in particular about Trafalgar and Rasmussen here.
And there were others as well, like other not very well-known pollsters. InsiderAdvantage, yeah, yeah. That were putting out polls showing Republicans doing quite well in some swing states. We already mentioned Pennsylvania, but it was happening across the country to some extent. And folks said, you know, hey, there's a lot of partisan pollsters putting out results. When you take those out of the average and you just look at institutional pollsters,
we see that like Democrats will actually do better. I mean, obviously that is what will happen if you take out the polls that show Republicans doing well from the average, then yes, Democrats end up doing better than the average. And so folks were sort of criticizing us for allowing those polls to stand as they were in our average. So it looks like they did have an effect on the averages in some places. What should we do about that now, understanding that that happened?
Trust the process, man. And what does that mean? Like, when you say trust the process, what is the process, for folks who aren't familiar with it? I mean, the process kind of believes in the market for polling being efficient. First of all, if you removed, like, all those GOP-leaning polls from the average, I think you would have ended up with an average that was too Democratic again. Right. So it's like, you know, I mean, look, I'm fully aware that, like, Trafalgar and Rasmussen put out very Republican-leaning results.
You know, but I think so is the average university poll that isn't up to modern polling standards and isn't weighting by education and, I mean, things like that, right? And like, you know, I mean, there's some like... So you're saying if we take out Trafalgar, then should we also take out Monmouth or whatever? Well, Monmouth's not a good example because, I mean, A, they chickened out from actually publishing horse race results, right? But like, but no, I mean, you know, I think like...
Yeah, I mean, I'm not sure. I mean, you'd have to go firm by firm, right? I'm less concerned about, like, the more prolific pollsters like Quinnipiac than, like, the pollsters that publish one poll per cycle, right? And, like, don't really know what they're... I guess what I'm saying here is, are you saying that if we have a policy of no longer considering Trafalgar or Rasmussen polls after this one cycle, then after 2016 or 2020, you could have just said, well, take all of those institutional university-based
polls out of the equation as well. Like if you just go after each election saying, well, screw the pollsters that got it wrong this time, we're not going to include them anymore, then you ultimately end up sort of like zigzagging around the field. That's basically what RealClearPolitics did. Again, the issue is partly lack of transparency, right? But they kind of, like, included all the GOP-leaning polls, and they were selective about, like, which
Democratic-leaning polls they included. When you kind of asked them, they had this transparency initiative that only went back to 2016, right? But yeah, that's what happens, right, if you kind of always fight the last war. But again, like, this is weird, because the thing about, like, election forecasting is, I give a shit about the forecasting part of it, right? 98% of the audience gives a shit about the... I mean, I care about election results. I have, like, political preferences myself, right? But, like, I care about forecasting for, like,
forecasting's sake, right? And most people don't. And so it's just inherently very weird that, like, 538 is such a popular product, because, like, the average person in the audience, like, doesn't know what, like,
calibration means or what the goal really is per se, right? Or hasn't built like a model or anything. And there's nothing wrong with that. I mean, the way that I would frame it is that it is weird stuff because we're talking about something that is extremely emotional and extremely important to people. And we're trying to say, I understand that these things are very important to you and are very emotional and in some ways very much tied to your identity. And now what I want to talk about is statistics. And it's just like,
It's kind of like when someone's having a panic attack, like they're on a plane and they're having a panic attack because they're, you know, I'll talk about this from my own perspective. I'm afraid of flying. If I'm freaking out on a plane and I don't want to be there, there'll be a couple things. Well, you know, statistically, airplanes are the safest form of transit. There was a better chance that you would have died in the cab on the way to the airport than in the airplane, right? Like that's kind of in part what our role is. So I totally understand why it may be sort of like frustrating or discordant for people sometimes.
Yeah. And so that's why I don't, like, blame the audience, but I do blame, like, New York Times reporters if they aren't, like, even really bothering to, like, get the story right. You know, I do blame, like... and obviously, like, journalists have come, like, a long way in terms of, like, just incorporating more numeracy and more understanding of, like, polling, right, and data. I mean, they've come a long way, to be sure, right? But, like, fundamentally, like, there are people that just kind of, you know... it's weird. It's weird. Cause, like, it's like,
Both more data-driven and more vibes-y. It's like both those things. So what's it? I think it's tricky because I think elections coverage had gotten more and more data-driven. And then after 2020, people were kind of like, f***.
We're going back to vibes. And that 2022, I mean, I said this during one of our model talks, that it felt like 2022 was in some ways a post-data election because all of the polling was telling us that Democrats were going to do fine and historically well for a midterm where there's a Democratic president in his first term in the White House. Yet the coverage was all like Republicans are going to do well. And so it was a weird situation. I mean, it also has to do with like, you know,
People have the memory in political journalism of gerbils or something, right? I think the usual comparison is goldfish, but are gerbils bad at memory as well? Maybe gerbils are really smart. I don't know why I said gerbils and not goldfish. But you will never get yourself in trouble by just saying, oh, what happened literally last time will happen again, right? I mean, we saw this in 2020 in the Democratic primary,
where people were like, well, the party no longer has any influence, right? When, like, literally, they, like, combine like Transformers, and, like, Pete Buttigieg and Amy Klobuchar and Jim Clyburn all come together on the same stage. And, like,
endorse Joe Biden, like, putting a very, very, very, very, like, heavy thumb on the scale. And it works, and Joe Biden goes from being kind of in trouble to, like, winning the whole thing by a ton, like, two weeks later, right? And then COVID happens and we all forget about that. But, you know, there was lots of bad punditry in the Democratic primary, even though, kind of, trusting the long term... in the long term, parties are influential, and, like,
The fact that they weren't in 2016 is one data point out of many data points, right? You didn't hear this because you weren't on the podcast last week when I made this comparison. But I said...
saying that the party no longer can sort of control the primary process is like going out on your front porch after it's snowed and seeing a shovel on the porch and seeing snow still on the sidewalk and saying, well, shovels don't work because there's still snow on the sidewalk. The party never picked up the shovel and tried to clear the sidewalk of snow. And so you haven't really tested whether or not the party has influence because it didn't try to exert it. Yeah. I mean, 2016, the GOP side was weird in that the GOP kind of
sat out. I mean, we'll obviously kind of be revisiting these debates a lot when it comes to 2024, right? Get ready. But it wasn't like the party said, you know what, Jeb Bush, he's our guy. We're going to put every possible resource behind Jeb Bush, right? Or Marco Rubio, right? It's more like they were like, well, these guys all kind of suck and hopefully something will happen and Trump won't win. And when they did try to put together a campaign to stop someone from winning, it was Ted Cruz.
Correct. Who, you know, who some GOP insiders thought was as bad as Trump. They thought he'd lose. They thought he was abrasive in a lot of ways. But like, no, I mean, like, look, if you had had, like, a Ron DeSantis... I'm not saying Ron DeSantis is, like, the most brilliant politician of all time, right? But he seems like competent politically. And someone that, people can see, just gets the basics right. He won reelection in a formerly purple state by a
lot of points, right? Almost 20 points. He does like the basic blocking and tackling, right? Maybe he's only a B-minus politician, but like, you know, I'm not sure you had any B-minuses running against Trump. You know what I mean? You had some C-minuses and D-pluses. Okay, so back to the topic at hand. When it comes to future elections, what are we going to do about partisan pollsters? Like how, so you described the process, how the process has sort of worked in the past. Are we going to make any changes? What's our plan going forward? Nothing
that jumps out. I mean, maybe there is some ambiguity as to what we classify as a partisan poll or not. But like, again, to me, it would be absurd: we trusted the process after 2016 and 2020, when you had fairly strong polling errors in the opposite direction, right? To, like, then reform after 2022, when the polls had one of their most accurate years in history, would be stupid. But again, there is like a market-correcting mechanism, right? Why is there...
Why was there a market for all these fly-by-night, methodologically dubious pro-GOP polls? Well, it's because Republicans really beat their polling averages in the last two presidential election cycles, right? There will not be the same market for those polls going forward. This is why we say the direction of polling bias is unpredictable, right? Because you're not like... And this is what annoys some people is like, we are not in an environment any longer. Maybe we never were.
where there's a pure, handed-down-from-the-sky way to do polling and it's the right scientific way to do it, right? You're making assumptions no matter what, right? And so basically polls are forecasts. You know what I mean? They're like little models almost. And so it's kind of like almost a version of the efficient market hypothesis. Should I explain that? Well, I think we should just say literally what happens,
how we use our pollster ratings and how our forecast treats partisan polls, in terms of what happens to Trafalgar and Rasmussen going forward. So the pollster ratings are based partly on methodology, but mostly on past results, right? And those firms had good years in 2016, 2020. And so our pollster ratings say, on average, these polls have been
pretty good. And so they get fairly high ratings. And so they matter more in our averages. They matter more, right? After this cycle, however. After this cycle, they'll fall. I assume. We'll double-check that. But yeah, I mean, they had a bad cycle, I think it's fair to say. And so they'll be punished in the ratings. But on top of the ratings, like, you know, I assume Trafalgar and Rasmussen will stick around. I mean, Scott Rasmussen is no longer part of Rasmussen, right? But they're
you know, their new founder is tweeting out crazy anti-vax memes and stuff like that, right? I assume they'll stay around, but like there'll be less of a market for their polling, right? Or they'll be chastened a little bit by this mess. So, like, you're counting on the market to kind of correct. Well, on top of that, will they be labeled partisan polls? We do not consider Rasmussen a partisan poll because they are just doing polling for themselves. And the fact that like,
Their ownership is conservative. I hate to even use the term conservative, right? But like, you know, right wing. That does not have a bearing on whether they are a partisan poll per se. We don't want to like look into the soul of every person that works for a polling organization and say, you know what? I bet you're a secret liberal. And therefore, or even not secret. So therefore, we're going to like call you a partisan pollster, right? Partisan: we define partisan polling by who is the person who pays for the polling.
With Trafalgar, they are an exception for a very particular reason, which is that they in the past have not disclosed who their clients are, some of whom are partisan clients. Right. So if you don't disclose... I mean, that's a case for, like, not using them at all, frankly. But, like, maybe we should say, like, if you don't disclose who you're doing polling for, then that's, like, a bannable offense, right? That's a coherent argument. But like,
oh, we're not going to look at this poll because they tweet out crazy anti-vax stuff. I mean, like, I don't want to be... I mean, there are like a couple of hundred polling firms in our database, several hundred, right? Of whom I'm sure a hundred or so published polls in any given cycle. I mean, people who, like, criticize also, like, have never actually done the work to, like, actually come up with a consistent set of standards, right? Of course, there are polls that I think are bulls**t, but I don't want to have to go through and like subject every polling firm to like, does Nate think this is bulls**t?
Especially in the heat of an election when emotions are running high and errors are easier to make. Yeah, I mean, if you were emotional about it, right?
Then you'd say, well, I have all the incentives in the world to, like, hedge, because, God, can you imagine if Republicans had beaten their polls? We'd never hear the end of it. Right. So it's like, yeah, I don't trust my emotional response on November 1st of an election cycle. Well, and so if a poll is categorized as a partisan poll, how much do we shift their results away from their outcome? The difference is in the prior. Our model assumes that nonpartisan polls are unbiased and that partisan polls are
are biased by I think about like four points or something, right? Now, if you have a polling firm like Data for Progress, which has its own issues, but they like did a lot of polling for Democratic clients, I think also did some polling on their own, right? They had a large enough sample where their polling was not Democratic leaning, right? So therefore, if you have enough data, then the prior is overridden, right? But if a new...
If I've never heard of your polling firm before you publish an internal poll for the George Santos campaign, we assume it's going to be biased by like four points or something.
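A rough sketch of the kind of prior-plus-data adjustment Nate is describing. The four-point prior is the number from the conversation; the pseudo-count, the function name, and the example polls are invented, and this is not FiveThirtyEight's actual model.

```python
def estimated_house_effect(observed_leans, partisan, prior_pseudo_polls=8.0):
    """Blend a partisanship prior with a firm's observed average lean.

    observed_leans: each poll's lean vs. the polling average, in points
    (positive = more favorable to the sponsor's party).
    """
    prior = 4.0 if partisan else 0.0
    n = len(observed_leans)
    observed = sum(observed_leans) / n if n else prior
    weight_on_data = n / (n + prior_pseudo_polls)
    return weight_on_data * observed + (1 - weight_on_data) * prior

# A brand-new partisan pollster with one roughly neutral poll is still treated
# as heavily biased; a prolific firm whose 40 polls show no lean has the prior
# mostly overridden, echoing the Data for Progress point above.
print(estimated_house_effect([1.0], partisan=True))       # ~3.7
print(estimated_house_effect([0.0] * 40, partisan=True))  # ~0.7
```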
Adam asks, is there any concern for future elections about the balance of nonpartisan to partisan pollsters? The idea of partisan pollsters flooding the zone with more polls, making it harder for the averages to keep balance. So your argument is that, oh, there's a market for these things and that the market will correct itself. But what if people actually just want to mess with the polling average and don't actually care about how accurate their polling is? And there's also this kind of like underwear gnomes.
Am I dating myself with that meme? I probably am, right? Underwear gnomes? It's like from South Park. Like gnomes who wear underwear or underwear that has gnomes on it? No, they collect underwear and it's like, one, collect underpants, two, question mark, question mark, three, profit, right? So it's like somehow they think like... oh my God, I'm f***ing dating myself. But like, what is the end game of, like, manipulating polling? I mean, who the f*** cares, right? Like, how is that like, why is that like... it's like the dumbest conspiracy I've ever f***ing heard. Well, I don't know. You can...
potentially sway national political media coverage. Well, if anything, it backfired, right? Because you had Republicans investing in these races in, like, Colorado and Washington that they were probably never going to win. And they lose all the close races. So like they kind of shot themselves in the foot. I don't understand like what this conspiracy is supposed to be. You know, I mean, I guess the incentive is that like, I mean, there are some asymmetries in as much as I like to get into
arguments with like liberals and stuff on Twitter, right? I mean, I think there are asymmetries in the media environment and there's more of a market for undistilled propaganda on the right. Whereas like left-leaning propaganda is like sophisticated. You know what I mean? I mean, I'd like to hear some examples of somebody who tries to be rigorous in their job, but go off. Yeah, I probably shouldn't. I mean, but like,
Okay. So I think that, like, a lot of things that are referred to on the left as disinformation are just arguments that partisans don't like. Not for every category, right? But for some of those categories. You mean like we need to investigate where COVID came from? I mean, those are the two, I think, kind of clearest examples, right? Example number one is, like, any discussion of whether COVID could have emerged from a lab, which is now a position that's extremely mainstream, and like the US government looked at it and says we can't tell, right? Like that was clearly a case where the misinformation label was used incorrectly. And also the Hunter Biden, like, laptop story, right? Those are two cases where... but even those are, you know, they're kind of... but they're more sophisticated, right? I mean, you have like, oh God, I'm gonna get myself in trouble. Like with the lab leak stuff, you have like,
all these scientists working together to like write a letter for the Lancet and like put it in the guise of like scientific expertise. Right. And like,
Whereas what are you saying? What happens on the right? On the right, they'll just be like, well, the vaccine will kill you. On the right, you have this like conspiracy that like this Buffalo Bills player had a heart attack. Oh, it's because of the vaccine. Like anytime any person under the age of 50 dies, like, oh, was he vaccinated? It's just crazy, Galen. It's crazy. They're even like trying to like, you know, it's like, yes, sometimes people die and they're under 50. Right. Yeah.
Whereas the left, it's kind of like, oh, here's the, you know, veneer of expertise. Like, it's much more sophisticated. But when it comes to polling, I think, like... A for effort. When it comes to polling, I think there'd be less of a market on the left for, like... Fake polls. For polling that just takes the results and skews them by four points toward Democrats. Also, liberals and progressives are kind of addicted to, like, a doom loop narrative. So they actually get off on, like, bad news. Oh.
This conversation is getting interesting. So let's take it back to, like I said, in the face of emotion, just waving statistics around. We actually got a lot of questions from listeners during the cycle about intrastate correlations.
Yeah.
or what have you. Because listeners may know, our forecast does see correlations between the states. If, for example, Wisconsin is trending in this one direction, and we have a lot of polling in Wisconsin, but we don't have a lot of polling in Michigan, we can apply some of the wisdom that we get from Wisconsin to Michigan. Can we apply some of the wisdom that we get from a Senate poll to a
House race, which our forecast doesn't currently do, or didn't do in 2022? The short answer is that it sort of does it, but not enough. Like, the way the model works is, like, when you cross the border between New York and Pennsylvania, there's nothing special that happens, right? It will simulate demographic shifts. So it'll say, okay, let's say that Republicans are polling well with, like, white working-class voters. I hate that euphemism, you know, with non-college
voters. That will have an effect in certain states, right? So it'll have an effect in like Wisconsin and Michigan jointly because you have a lot of white non-college voters in both of those states, right? So the model's smart in that sense. But if you have like a super strong Democratic incumbent gubernatorial candidate in Wisconsin and they're going to like drive turnout in the Senate race there and the House races there, the model doesn't account for that per se. Should it?
However, unless you're actually interested in that exact question, then it doesn't really have much effect on, like, the numbers that you care about, even if we had accounted for that. Like, if you had delved into the simulations and said, okay, in how many simulations do Republicans, like, actually sweep all these competitive New York state House races, right? If you care about that, we'll be giving you a wrong answer, right? If you care, like, what's the overall distribution of, like, seats, or what's the chance that the GOP wins any particular seat? It wouldn't have very much effect on that. So if you're very detail oriented, then yeah.
This is something you'd be interested in, but like most people aren't looking at those detailed simulations. Okay, so is that a change that we're going to make though going forward to the forecast? Maybe. I mean, I also want to see like how often this happens. My guess is that this might be more of a change for 2026 because in presidential years, even in states where the presidency is not competitive, I mean, I don't think we'll have a competitive presidential race in New York in 2024, right? Even there, the presidency drives everything else, right? So you wouldn't have like
irregularities in turnout like you'd have in New York, where Kathy Hochul was like a really underperforming nominee, or Florida, where Ron DeSantis was a really overperforming nominee. People come out for the presidency. It's the same race in every state. So, like, I don't think you'd see this as much in presidential years. In midterms, though, you might, for sure. What I'm hearing is changes so far: potentially intrastate correlation in midterm elections,
de-emphasizing the deluxe forecast. We're not really planning on making changes for partisan polls. We're keeping them in, partisan or ostensibly partisan polls. Yeah. I mean, I'd say, like, we need rules that we can apply consistently across a large batch of polls. Like, could I be convinced by 2024 that, like, there is a different, quote-unquote, objective standard that we can use to define partisanship? I mean, maybe. Right. Yeah.
So I'm not, like, totally ruling that out, right? But, like, you know, the fact that a polling firm has GOP-leaning results does not automatically make them a Republican partisan poll, right? And the fact that, like, a pollster is a Republican also does not make it one, because, you know, if I'm honest, like, you know, have you looked at the political affiliation of academics? Right. Sure. Yeah. Of course. I mean, yeah.
Is there anything... So those are sort of the three maybe big buckets that we got a lot of questions about. Is there anything that you're thinking about changing apart from that? I mean, I have to kind of put myself in the mindset of, like, we're back in a presidential year. No, I mean, I think we had some COVID-specific stuff in 2020 that we'll remove unless we have COVID-23. Get the f*** out of here. But I think like... I mean, this is why I get so aggravated. Like...
I think our models are at a point where they're pretty good, you know? Which doesn't mean they're perfect, but, like, this is kind of like a solved problem. Should we shut down model talk? I don't know. Well, then, we're just going to have to leave things here. Nate? Yeah. Thank you for doing this today. It was a little wacky. It was a little wacky, but it was fun. I do want to say that, like, I appreciate the listeners and the readers who do keep us honest. I do think that, like, it's hard to know how to respond if you feel like
a lot of people aren't really responding in good faith, and probably I could be better about that, right? But it is frustrating when, like... like I said, like, on election night, I'm like, we had a pretty good year, because, like, I was on... I mean, like, honestly, like, yeah, if we have, like, another, like, 2020, then I don't know what that means, man. I mean, we said it on Model Talk. We were like, we're going to have to shut down shop. I don't know what happens, right? Also, if we had a year where a bunch of, like, um,
Election deniers won and stuff. So it kind of felt like we dodged two pretty serious bullets with that, right? But we still got blown up in the end. No, I'm kidding. All right, Nate. Yeah, but thank you to the readers.
And the listeners. And the listeners for your support. My name is Galen Druke. Tony Chow is in the control room. Chadwick Matlin is our editorial director. And Audrey Mostek is helping out on audio editing. You can get in touch by emailing us at podcasts@fivethirtyeight.com. You can also, of course, tweet at us with any questions or comments. If you're a fan of the show, leave us a rating or review in the Apple Podcast Store or tell someone about us. Thanks for listening, and we will see you soon.