Deep Dive with Shawn C. Fettig

Did Polls Do Harris Dirty? (w/ Dr. W. Joseph Campbell)

Sea Tree Media

If some of the last 2024 presidential election polls were pointing to a decisive Kamala Harris win, then why did it become clear so early in the evening that Harris would, in fact, lose? Were the polls wrong...again? In this episode, Dr. W. Joseph Campbell discusses how polls work, the history of polling errors, and why it matters. He also takes on Selzer's Iowa poll that showed Kamala Harris's unexpected surge in Iowa days before the election, only for Donald Trump to secure a decisive win, echoing the shockwaves of 2016. We dissect the historical miscalculations that have shaped voter trust and question whether inherent biases, flawed methodologies, or media narratives are distorting the truth.

Harry Truman's stunning 1948 victory and the unexpected triumph of Donald Trump over Hillary Clinton in 2016 are some examples of polling error that we discuss. These moments show how fragile the balance is between the pursuit of precision in polling and the pillars of free speech. We talk about how these errors have an impact on voter engagement and democratic processes, and how challenging it is to read and engage with potentially faulty polls in an entrenched electoral culture.

Finally, we discuss the reasons why Trump's support has historically been underestimated and the implications for media narratives in shaping electoral momentum. Polling is an art and a science. It's not going away, so we should temper our expectations.

Recommended:
Lost in a Gallup: Polling Failure in U.S. Presidential Elections
Better But Not Stellar
Polls Were Largely Accurate in Anticipating Trump-Harris Race

Related:
Counterpoint Podcast


-------------------------
Follow Deep Dive:
Instagram
YouTube

Email: deepdivewithshawn@gmail.com

Music:
Majestic Earth - Joystock



Dr. Campbell:

where a three-point lead late in the campaign for Kamala Harris in a deep red state like Iowa set off all kinds of shockwaves. It really did. Just three days before the election, the poll was released, and the implications were pretty clear: if Harris is ahead by three points in a state with the demographic and partisan profile of Iowa, then she's going to be doing very well elsewhere in the upper Midwest, and a clear path to an Electoral College victory lies ahead.

Dr. Campbell:

And a lot of people were thinking along those lines, and it did generate a sense of late-campaign momentum for her, even though, overall in the polls, Donald Trump had edged ahead narrowly. And it was a moment, really a stunning moment, frankly, in a presidential election in which a poll was a significant factor, a seriously significant factor, in terms of changing expectations, almost.

Shawn:

Welcome to Deep Dive with me, Shawn C. Fettig. The final polls leading into Election Day this year were promising for Harris voters. While most polls in the days leading up to November 5th showed a tight race within the margin of error, meaning that either Kamala Harris or Donald Trump could win, the final Marist/PBS/NPR national poll, released the day before the election, showed Harris up 51% to Trump's 47% and, crucially, outside the margin of error. And, more promising, renowned Iowa pollster Ann Selzer released a poll just days before the election showing Harris ahead of Trump by three points in deep red Iowa. So you'd be forgiven for feeling like it was 2016 all over again as it became clear early on election night that Trump was going to win, and relatively comfortably. Not only did he win the Electoral College, he won the popular vote, and he won Iowa by over 13 points. In the end, though, the final results did track fairly closely with most polls.

Shawn:

So while there was some polling error, something else was going on here too. Was it our own subconscious hopes putting a positive spin on what we were seeing? Was the media narrative and interpretation of polls the problem? Remember 2016? Virtually every major poll pointed to a decisive win for Hillary Clinton. Media outlets plastered their headlines with her projected 98% chance of victory, shaping not just the campaign narratives but also public expectations. Then, on election night, as Donald Trump clinched victory, we were all left grappling with more than just the political outcome. We had to confront the fallibility of polling itself. What went wrong? And, perhaps more importantly, why did we place so much trust in a system that has a long history of getting it wrong?

Shawn:

Today I'm talking to Dr. W. Joseph Campbell, an expert on the history and shortcomings of polling. He's written numerous articles about polling, polling error, and the history of polling, and he's also the author of the book Lost in a Gallup: Polling Failure in U.S. Presidential Elections. So we discuss the promise and the pitfalls of public opinion polling, why polling errors happen, what those errors mean for campaigning, voting, and, ultimately, American democracy, whether we can even trust polls, and what the hell might have gone wrong this year, if anything. All right, if you like this episode, or any episode, please give it a like, share, and follow on your favorite podcast platform and/or subscribe to the podcast on YouTube. And, as always, if you have any thoughts, questions, or comments, please feel free to email me at deepdivewithshawn@gmail.com. Let's do a deep dive. Dr. Campbell, thanks for being here. How are you? Great, my pleasure, thank you.

Shawn:

So I'm not going to lie: after this election, I was pretty angry. Well, at a lot of things, but one was something about the way the polls were interpreted and packaged and sold during this election. One, it felt like they were their own story and not just helpful information. And two, I think they contradicted each other and sometimes sold a false hope, especially in the last days of the race, if you were a Harris supporter. I remember a particular moment when, in the New York Times, or maybe it was the Washington Post online, there was a headline about a poll one day that suggested a Harris win, and then literally right underneath it was a headline for another poll that suggested a Harris loss.

Shawn:

And in that moment I remember thinking that polling has really just become its own game, and feeling really disgusted about it. And part of it's because I'm exhausted with politics, I feel like it's been in our face constantly for like a decade, but also because the news can't help me understand anything about it, and I don't know that the way that they're, you know, interpreting, packaging, and selling polls, or providing poll information, is particularly helpful. So I actually think, to me, it created a lot more confusion and then, by way of that, exhaustion. So I'm glad to have you here to discuss this, maybe explain it, maybe make me feel a little better about it.

Dr. Campbell:

The polls certainly were numerous and confusing, and at the same time they did set a narrative for this election, and consistently. For months, even before Kamala Harris became the Democratic Party nominee, the polls were signaling a pretty close race. And that's what we got: a close race. It was a very modest victory by Donald Trump in the popular vote, something quite different in the Electoral College, of course, but that's distorted by the outcomes of the respective states. The polls did do a decent job overall of letting us know that this was going to be a close race, but you're absolutely right, they did become their own story, and that is kind of the story of presidential elections in the recent past.

Dr. Campbell:

The polls do dominate, they do tell us what's likely to happen, whether that's correct or incorrect, and in fact they're in error more often than they would like to be, I'm sure. So polling has a checkered history, a checkered record, and that record is not terribly well understood or well recognized, I don't think, by the American public, by the American electorate at large.

Shawn:

So before we get into that history, which you've written about, primarily I'm thinking about your book Lost in a Gallup, and I want to talk about that. But before we get into that, one of the things that strikes me is, you know, there's a lot of narrative around the idea that polls are either right or wrong, and then a lot of praise or, you know, responsibility, depending on how that all is borne out. But I kind of feel like this election year, maybe the last couple of presidential election cycles, another potential problem is becoming more clear. So one is the poll itself and its methodology and how accurate it is, but the other is the narrative that the media constructs around a poll, and I feel like that was the problem in this election cycle.

Shawn:

You know, I had a certain candidate that I wanted to win, so was I, you know, hunting for those polls? But I feel like the media was really leaning into this idea that Harris had momentum, and that narrative, maybe it was true, but you know, it didn't bear out. And then, on the flip side of that is the actual polling error. I'm thinking about Selzer's poll out of Iowa that had Harris up three in Iowa over Trump, and that did not bear out at all, and I wonder what you think about that.

Dr. Campbell:

You're absolutely right. That poll especially suggested a good deal of momentum, perhaps surprising degrees of momentum, for Kamala Harris, and the poll was done by Selzer & Company, which has a reputation for being one of the best, most accurate polling operations in the country. Selzer is based in Iowa and focuses mostly on polling Iowa. But a three-point lead late in the campaign for Kamala Harris in a deep red state like Iowa set off all kinds of shockwaves. It really did. Just three days before the election, the poll was released, and the implications were pretty clear: if Harris is ahead by three points in a state with the demographic and partisan profile of Iowa, then she's going to be doing very well elsewhere in the upper Midwest, and a clear path to an Electoral College victory lies ahead.

Dr. Campbell:

And a lot of people were thinking along those lines, and it did generate a sense of late-campaign momentum for her, even though, overall in the polls, Donald Trump had edged ahead narrowly.

Dr. Campbell:

And it was a moment, really a stunning moment, frankly, in a presidential election in which a poll was a significant factor, a seriously significant factor, in terms of changing expectations almost overnight, or confirming expectations.

Dr. Campbell:

And I remember going into election day thinking that Trump had a good chance of losing, because this poll was signaling that his chances in Iowa were kind of thin, and if that was the case, then he was looking at a defeat elsewhere in the country. And so I was a bit surprised when his political fortune seemed to turn around on election night. But that poll was a decisive poll, and it was a decisive miss, because Trump won Iowa by 13 percentage points, meaning it was a 16-point miss by Ann Selzer, the so-called Oracle of Iowa, and it was a total, deep embarrassment, one that just left a lot of people shaking their heads as to what happened here. They'll figure it out. It was probably due to her methodology: she continues to use random digit dial phone calling as the principal technique for her sampling, her poll data gathering, but that may be completely beyond the pale now. This is not a methodology that pollsters really can rely upon.

Dr. Campbell:

She was one of the last to use that, and, well, she might be the last now. Yeah, I think Quinnipiac University in Connecticut still uses that technique too. That used to be the gold standard: random digit dial phone calling by live operators used to be the gold standard of polling. But given the fact that people just don't answer the phone, or don't even have landlines, or don't answer their cell phones for numbers that they don't recognize, it's really become very, very difficult for pollsters to use that technique, to use phone-based techniques to get a sample. It takes forever and it's very costly, and then the results are kind of shaky, as we saw with Ann Selzer's poll. Again, that was a pretty dramatic miss.

Shawn:

Before we get into some of the history of this, something that's kind of, I don't know, the Wild West, or maybe just unknown to me: there are all these polls that are released to the public, you know, throughout the election season, and that's what we're all ingesting, but campaigns themselves are doing internal polling, and those are held much closer to the vest. We rarely know the results of those polls, and we rarely know the methodology. But I've always kind of gotten this sense that internal polls were, and I don't know why I would think this, but that they're somehow more accurate. And the reason I'm mentioning this is because I'm wondering if Selzer's poll not only, you know, sent up flares to the public but internal to the campaigns as well. Do you know whether the internal polling that campaigns get tends to be more accurate? Or is it any different from the way polls are conducted that are ultimately consumed by the public?

Dr. Campbell:

Hard to know. It really is hard to know. One reason is that the internal polls are not often released, and when they are, it's kind of held with some suspicion. Well, why are you releasing this poll but not all the polls that you've done? But campaigns do spend a good deal of money on polling. So, yeah, you would think that if they have that money to spend, and we're talking tens of thousands of dollars or more, it's likely that those polls are more accurate, are telling them things that we don't get in the public polls that are being released, as you say, in great number.

Dr. Campbell:

One of the best examples, though, of an internal poll, and this goes back, gosh, to 1980, the presidential election of 1980, in which Jimmy Carter was running for re-election against Ronald Reagan: the internal polls for both campaigns signaled late in the campaign that there was this shift, a pretty dramatic shift of sentiment to Ronald Reagan, and the public polls did not pick this up. The outcome was a real surprise: Reagan won by a near landslide, almost 10 percentage points, in a race that had been expected by the pollsters to be very, very close between Reagan and Carter. But the internal polls of both candidates seemed to pick up this late-campaign shift that the public polls didn't pick up on. So that is one example, but it's a very small universe of one that I can cite, in which an internal poll seemed to be far superior to the public polls, and that was a real surprise for pollsters, because they were not expecting a near landslide by Ronald Reagan. They thought it was going to be a very close race and that Jimmy Carter had a chance of winning.

Shawn:

So we've talked about the polls this year, and we specifically focused on the misfire from Selzer and the polling out of Iowa. But you know, as you've written about, that's not the first time that there have been polling misfires in our American political history, and perhaps not even the worst. So, given that you've written about this, could you maybe describe some of the most significant polling misfires, and then, I guess, in doing so, could you help me understand why polling error matters, or the impact that it has?

Dr. Campbell:

It matters because polling is a way in which we address, I think, an innate human urge to want to know what's lying ahead, what's going to happen. And polls, in their own way, and it's not always a very effective way, they're fragile, they're prone to error, they have a checkered record, but nonetheless there's nothing else that we have, really, that would take the place of polls in terms of giving us a sense of what's going to happen, in this case in the most important single election in the country, the presidential election. So with that as kind of the backdrop, we've seen efforts over the years in which pollsters have tried to get a sense of what's going to happen, and they've failed miserably. And the most dramatic example, the single most dramatic example of that, was in the 1948 presidential election, when Harry S. Truman was the president. He was running for re-election; he had become president on the death of Franklin Roosevelt in 1945.

Dr. Campbell:

And his administration, his first years in office, were pretty much a failure. I mean, he oversaw the end of the Second World War, but his policies had the effect of dividing the Democratic Party, splitting the party three different ways. There were the Dixiecrats in the South, the segregationists. There was the Progressive Party, which split off mostly from the Northeast. And then there were the mainstream Democrats of Harry Truman. So it looked like a three-way split in the party, and he had no chance of winning. Plus, he was running against a pretty strong Republican candidate by the name of Thomas E. Dewey, who at the time was governor of New York state. And all the polls were signaling that this was going to be an easy victory for Tom Dewey, that the Democrats would finally lose power after holding it since 1932. And Harry Truman ran a very vigorous, effective campaign, crisscrossing the country by train, giving them hell. I mean, that was his slogan. You know, give 'em hell, Harry.

Dr. Campbell:

And he pulled out a stunning victory, a stunning victory, in an election that nobody gave him any chance of winning. Truman himself thought he would win, with 300-some Electoral College votes, but almost no one else said there was a chance for Harry Truman to win, and the shock was so profound in the country afterwards. It was like, wow, what had happened here? Because previously the polling had been correct; it had signaled the winner in the previous four elections. And it was a real shock to the pollsters, a real shock to the body politic, and a real shock to the public at large. I don't think there's been a shock like that, a polling surprise quite like the Dewey-defeats-Truman election, ever since 1948.

Dr. Campbell:

And it's called the Dewey-defeats-Truman election because of the famous photograph, taken a couple of days after the election, of Harry Truman holding up a front page of the Chicago Tribune with a banner headline saying Dewey Defeats Truman, and Truman's holding up the front page and smiling radiantly. And he said, this is one for the books. And he's right. It's the most dramatic polling failure in U.S. presidential history. A close second, or maybe, I don't know how close it is, but a second, would be the 2016 presidential election outcome, when Hillary Clinton was widely expected to win, and a number of poll-based forecast models, including one by Huffington Post and another by the Princeton Election Consortium, figured that Hillary Clinton had a 98 or 99 percent chance of winning and that Donald Trump had no path toward an Electoral College victory. The overwhelming expectation was that Hillary Clinton was going to win this election, maybe not by an overwhelming majority, but still clearly enough.

Dr. Campbell:

And she did win the popular vote majority. But Trump won the three blue wall states of Wisconsin, Michigan, and Pennsylvania and, with those three states, won the Electoral College in a dramatic and stunning upset. There's no way to measure the shock to the body politic from 1948 to 2016, but I have to think the 2016 shock probably rivaled that of 1948.

Shawn:

I know that some countries have laws that restrict polling and publishing of the results of polling within a certain window prior to an election, and I always thought that was a bit odd.

Shawn:

I've been thinking a lot lately about the 2016 election, about the 2022 midterm elections and the results there, about the results in this presidential election this year, and about the impact that the polling has had on me, and I can kind of see how consistent polling error, or error in the narratives around polling, can create a real problem for, I'm just going to say, democracy. Because, I'm thinking, progressives, or folks that support candidates like Hillary Clinton or Kamala Harris, could, over time, lose a certain amount of trust in polling, and it depresses their desire to even vote. But then, on the flip side, I think the same kind of effect could happen to conservatives if they're consistently being told that their candidate is going to lose, or is losing, and then doesn't. I could see how that would feed a conspiracy-theory type of narrative as well. And I wonder if you've ever given any thought, I suppose, to what we could do related to polling and its role in elections that would, I suppose, bolster democracy rather than undermine it.

Dr. Campbell:

In a way, polling is a reflection of the American democracy, the American experiment in democratic governance. It tries to capture, in ways that are probably not always accurate, a sense of what lies ahead and what the voters in this huge country are going to be doing, especially in a presidential election. If there was any effort to try to say, okay, we're going to have a blackout of polling for the 48 hours or 72 hours before the election, it would almost immediately run into First Amendment difficulties; it would not succeed. I do believe the French and maybe the Canadians have some sort of window in which polling results are not to be published before a national election. But it would just not work here, and I think that the interest in the election is driven not only by polls but by the campaigns at large, and to get a sense, however erroneous, of whether the election is going one way or another, I think it's so important for people to have that. There's just no way to regulate the publication of polls, and polling has such a deep and long pedigree in American history. It goes back almost 200 years, to the 1820s, and it's hard to see how polling is going to be uprooted or outlawed. I mean, it's unthinkable, that kind of proposition. And the number of polls really does lead to a cacophony out there. That's definitely the case. In many respects, we're more confused by the poll results than illuminated. But nonetheless, it's part of this great churning of the American democracy and the American democratic experience, and I think that we need to continue to insist that polls do better and that they do give us an accurate sense of what's likely to happen. I mean, that's the reason they're done.

Dr. Campbell:

Pollsters don't go out there and do polls with the expectation that they're going to be wrong, that they're going to be in error. Ann Selzer didn't do that in Iowa in the days before the 2024 election. They expect to be right. But it behooves pollsters, and perhaps even the public to insist on it, that they come up with techniques and methodologies that are far superior to what we're seeing now. There was one pollster in 2024 who got it right, and this is a Brazilian company called Atlas Intel. Not much is known about Atlas Intel, but they got it pretty close to accurate. They had Trump winning all seven swing states and had Trump winning the national popular vote by a very small margin, which is what turned out to be the case. Not much is known about their methodology. I think it's an online approach, but they claim it's proprietary; they don't speak much about it. But how they did it, how they got it right, is something I think we need to know more about.

Shawn:

I'm not sure I understand how polling generally works. I assume that people are getting calls, and I don't know if it's on a landline or on their cell phone. I have never been called to participate in a poll. Maybe I have, but I would ignore it. So I'm a potential voter that is not being polled, or I am a voter that's not being polled. So I guess I'm wondering, you know, how does polling work, how has it evolved, and what are some of the challenges that today's environment is clearly posing for polling?

Dr. Campbell:

Fundamentally, polling is a sample of a larger population and of the views of that larger population. And samples, ideally, are done with the prospect that everybody within that population has a roughly equal chance of being asked to participate in the poll. How they're done has evolved dramatically over the years. It used to be door-to-door, back in the days of George Gallup and Elmo Roper. In the early days of quasi-scientific public opinion research, they would send out interviewers door-to-door to do the sampling. That was not a very effective method, prone to error for lots of different reasons, including the way in which the interviewer would decide who to interview. So it has evolved over time, and then came random digit dialing technology, because at the time, in the mid-1970s, almost all American households had a phone. So that meant that there was this universe of people who were eligible to be interviewed and had the potential to be interviewed. So it was a great opportunity to get a random sample without a whole lot of cost. But over time, marketers and telemarketers and others using the phone for their purposes had the effect of driving people away from the phone. They didn't want to answer calls they didn't recognize. And polling by phone has really dropped off, to about a three percent response rate. People don't answer the phone, or they don't complete the surveys done by phone. Plus, landlines are disappearing in the country. Almost everybody has a cell phone, but it's another difficult task to get a whole list of cell phone users and call them, so phone calling is very expensive.

Dr. Campbell:

A number of pollsters these days are trying to use other techniques, including text-to-web, sending text messages to people, particularly in certain demographic groups, and inviting them to take a survey online.

Dr. Campbell:

I've received at least a few of these invitations over the time and I've just ignored them completely or just deleted them.

Dr. Campbell:

Another approach is to get a good sample of people to participate in what they call panels, or online panels, in which people agree to be interviewed periodically online about issues and campaigns. And some of the panels that have been created by polling organizations, including the Pew Research Center here in Washington, have huge numbers of people on them.

Dr. Campbell:

So that's an approach that some pollsters think is very promising in terms of getting a good sample, getting a sample that is not that difficult to tap and to recruit. So polling is in a state of flux, that is for sure, and there are different techniques, different methodologies, and I mentioned the Atlas Intel online approach that they've been using, and we need to know more about that as a possible way in which pollsters can do their job and can get a reasonably good sample of people, because it all starts with a sample. If your sample is bad, if it's distorted, if you don't have the mix of people that reflects the demographics that you're trying to match, then you have to weight the sample, in other words, statistically adjust it, and that brings in all sorts of possible error. Polling has lots of points where error can creep in, and it's, in some respects, kind of surprising that polls don't go off the rails more often than they do.
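The sampling and weighting ideas Dr. Campbell describes here can be sketched in a few lines of code. This is a toy illustration only, with made-up group names and numbers (nothing below comes from any actual poll): it computes the textbook margin of error for a proportion from a simple random sample, then shows post-stratification, the kind of statistical adjustment he mentions, pulling a skewed sample back toward known population demographics.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

def post_stratify(sample_shares: dict, population_shares: dict) -> dict:
    """Weight each group by (population share / sample share), so
    over-represented groups count less and under-represented ones more."""
    return {g: population_shares[g] / sample_shares[g] for g in sample_shares}

def weighted_support(group_support: dict, sample_shares: dict, weights: dict) -> float:
    """Weighted average of candidate support across demographic groups."""
    num = sum(group_support[g] * sample_shares[g] * weights[g] for g in group_support)
    den = sum(sample_shares[g] * weights[g] for g in group_support)
    return num / den

# Hypothetical numbers: the sample over-represents college graduates
# (60% of respondents vs. 40% of the population).
sample_shares = {"college": 0.6, "non_college": 0.4}
population_shares = {"college": 0.4, "non_college": 0.6}
group_support = {"college": 0.6, "non_college": 0.4}  # candidate's support within each group

weights = post_stratify(sample_shares, population_shares)
raw = sum(group_support[g] * sample_shares[g] for g in group_support)  # unweighted: 0.52
adjusted = weighted_support(group_support, sample_shares, weights)     # weighted: 0.48
```

Note that the adjustment works here only because support within each group was measured accurately; if a group's respondents differ from its non-respondents (the problem Dr. Campbell describes with Trump supporters), weighting cannot fix that, which is one of the many points where error creeps in.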

Shawn:

So one of the phrases that has become somewhat synonymous with the Trump era, although I'm sure it existed prior, is the idea of a shy voter. And I think in 2016 and maybe 2020, there was this assumption that there were potential shy Trump voters that pollsters were struggling with, and that this year there were potential shy Harris voters that pollsters were struggling with. Could you explain what the shy voter phenomenon is, and whether you've ever given any thought to how pollsters could effectively confront the challenge it poses?

Dr. Campbell:

It's not real clear whether the shy voter phenomenon is all that extensive. It was, you're right, mentioned frequently in the aftermath of the 2016 election, Trump's surprise victory in 2016: that many of his supporters were disinclined, for whatever reason, social desirability reasons, maybe other reasons, from making clear, telling a pollster, telling a stranger, for whom they were going to vote. Pollsters and polling organizations have looked into that and really have not found a lot of compelling evidence for a shy Trump phenomenon so extensive and so pervasive as to help explain the outcome of that election. There may be some of that. There may be some people who are reluctant to tell people who they're going to vote for, but it doesn't seem to have been a widespread phenomenon that would have affected the outcome of an election. Now there is, on the other hand, some indirect evidence that suggests pollsters are having a very difficult time tapping the extent and breadth of Trump's support in his presidential elections. That was the case in 2016, it was certainly the case in 2020, and, to a lesser extent, it was the case in 2024: they underestimated Trump's support. They understated just how many voters were going to go for him, and this was the case pretty much in all seven swing states. In all seven swing states, the polls collectively underestimated the amount of support Trump was going to have. So the problem persists for pollsters: they're not able to get to and interview the full extent of Trump supporters.

Dr. Campbell:

Whether that's related to a shy Trump phenomenon or not, could be, but there just doesn't seem to be a lot of compelling evidence that says, yeah, that's a real problem. There is some evidence that Ronald Reagan also was the beneficiary of, or was affected by, a shy voter phenomenon: polls in his races for governor of California, and then later in his two presidential campaigns, tended to understate his support. Now, the last presidential campaign that Reagan ran was 1984, and that was a landslide. Everybody knew it was going to be a landslide, but the polls estimating Reagan's support in 1984, for his second term, were all over the lot, from like 10 percentage points to 25 percentage points. So pollsters really had a difficult time, even then, getting a sense of Reagan's support. Whether that's a reflection of a shy Reagan voter phenomenon, or whether it's an artifact of polling failure and polling difficulty, can be debated. I think it tends to be the latter.

Dr. Campbell:

I think it's more of the technology, the methodologies that pollsters were using back then in 1984, than any extensive shy Reagan vote. But still, this is a phenomenon that seems like it should exist, even if it doesn't. I mean, it just sort of makes sense on its face: there are people who are just not going to tell pollsters who they're going to vote for, for whatever reason. I just don't think it's extensive enough to explain Donald Trump's performance at the polls these last three elections.

Shawn:

And, I mean, this is absolutely just my own personal feeling, but I feel like in 2016 there was a better argument for it. These days, Trump voters aren't particularly shy about it. They just don't strike me as shy people anymore.

Dr. Campbell:

No, I think that's a good point, because even in 2016 he was holding these rallies which attracted, you know, tens of thousands of people, and they did not seem very inclined to hold back on their views about Donald Trump. That has been a feature of all three of his presidential runs. So I think it's more a problem pollsters have had in trying to reach out to and interview people who are Trump supporters, for whatever reason.

Dr. Campbell:

Maybe it's not shy Trump voters; maybe they're reluctant because Trump has characterized media organizations as fake news, and many of the more prominent polls being conducted these days are done by or for media organizations: The New York Times, CNN, The Washington Post, the Los Angeles Times, NBC News, ABC News. These are all media organizations that are also big time into polling, and it's possible that Trump voters are saying, yeah, right, I'm going to participate in a poll done by ABC News? No, I'm not. I don't have to do this, it's just not worth my time, I don't believe in what these guys are doing. That's not an unlikely response by Trump supporters, it seems to me. Why bother?

Dr. Campbell:

And The New York Times' senior political analyst, Nate Cohn, wrote a column shortly before the election in which he reported that they were getting white Democratic voters far more readily than white Republican voters, and he said that might signal that the polls were going to underestimate Trump's support again. And that's what happened. Trump's support was underestimated again in the 2024 election, not necessarily by the margins or to the degree it was underestimated in 2016 or 2020, but still pretty clearly underestimated again in 2024.

Dr. Campbell:

I think it's more of a polling problem than a response problem.

Shawn:

Yeah, I'm glad you say that, because that's where I wanted to go next: if this is not a participant problem, then we're really talking about methodology, and there's a whole bunch of directions we could go here. But I've talked to a handful of people who have studied the MAGA movement and attended some of the rallies, and to them, this year, it was never a question that Trump was going to win. Just from attending his rallies, there was something about the fervor, about how people felt about his rallies and why they were going to them in the first place.

Shawn:

That was sometimes absent policy altogether. It was just very different; it suggested there was something in the air. I don't know how you measure that, right? But another methodological problem, one that has nothing to do with the participants, at least not in an active sense, is on the back end, and it's related to this concept of poll herding, which I had never heard of prior to this election, but then suddenly I heard about it all the time, and I'm not quite sure I fully understand it. I think it has something to do with pollsters not wanting to be an outlier. Maybe Selzer wishes she had also chosen to do this. Not wanting to be an outlier, they kind of tweak their results to be within some kind of norm, and I'm not sure if I'm correct. So could you explain what poll herding is and, I suppose, the impact it might have had?

Dr. Campbell:

You're pretty close. It is the suspected movement of polls by pollsters to get close to some sort of consensus. Herding among pollsters is more suspected than confirmed, though. It's done for reputational reasons, to protect one's reputation: if you're an outlier, like Ann Selzer was this time, it's very difficult to explain that away, and if your poll comes out way wrong, it potentially does damage to your reputation.

Dr. Campbell:

So, to avoid that, herding supposedly happens among pollsters because essentially they're copying off each other, like a grade school kid in a math class copying the arithmetic problems. I'm not trying to trivialize it, but it is a difficult concept to wrap one's mind around, because it presumes that pollsters are sort of in cahoots, and there's really never been any clear evidence that that is indeed the case. And there's a lot of low-level rivalry among pollsters. They're not out there all the time saying, you know, I want to beat the YouGov poll this time, or I'm really eager to outshine Ipsos polling; there's not a whole lot of that overt rivalry, but it exists, and privately they talk, kind of negatively, about polls that are way off or not looking strong. But whether they surmount this innate rivalry and go about herding, I don't know. It's, as I say, more suspected than proven.

Dr. Campbell:

Nate Silver made a big deal about this. He's the data guru and polling analyst who is widely followed and has a pretty good reputation, and he was mentioning this quite openly during the latter part of the 2024 campaign. But he didn't have any real compelling evidence that this or that pollster was doing it. He was relying on appearances, arguing that it's unlikely the polls would all hew to the same kind of outcome unless they were herding. But I think if you're going to make that kind of accusation, you're going to have to make it a little more strenuously, with evidence, it seems to me.

Dr. Campbell:

Again, it's one of those phenomena that's not unthinkable; it's not implausible on its face. It could happen, and for reputational reasons: the polling industry has taken some battering to its reputation overall. The 2020 election was collectively the worst polling performance in 40 years, and pollsters were very, very inclined to want to avoid that kind of scenario again. This year they did, to an extent, but they still underestimated Trump's support, and that's another black eye for the polling industry. Now, whether this is going to be real, lasting, lingering damage to the reputation overall remains to be seen. I kind of doubt it, but it underscores just how tough a profession election polling is.

Shawn:

So we've talked about the potential that the polls themselves have been inaccurate or incorrect. But as we're getting a final tally on the popular vote, I suspect what we're going to hear more and more, especially from the agencies reporting these poll results, is that the result actually was within the margin of error for most, if not almost all, polls, just as they predicted.

Shawn:

So, setting that aside, there's the narrative, or the spin, that the media puts on the polls they're releasing or producing for the public. And this could just be the media that I consume, admittedly, I could be in my own bubble here, but it felt like there was very much a Harris momentum story.

Shawn:

Harris is doing very well. Harris looks like she's pretty much leading in the blue wall states, I suppose even in Pennsylvania, but leading in most polls in Michigan and Wisconsin up until the day of the election. So I don't want to ask you a leading question, but I do wonder if the media might be part of the problem here. It's not as sexy, I suppose, for the media to spend a certain amount of airtime explaining how polls work and what the margin of error means. But to say Harris has got momentum, or it looks like she's going to pull out these three states, is very different from saying we are showing Harris up one, within the margin of error. And I'm just wondering if they're doing a poor job of explaining that to us, or if I just am hearing what I want to hear.
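As a rough aside (my illustration, not something from the episode): the margin of error Shawn mentions comes from simple sampling math. Assuming a simple random sample and a 95% confidence level, a minimal Python sketch looks like this; the poll numbers below are invented for illustration.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a poll proportion.

    p: reported share for a candidate (e.g. 0.48 for 48%)
    n: sample size
    z: z-score for the confidence level (1.96 is roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical state poll: 800 respondents, candidate at 48%.
moe = margin_of_error(0.48, 800)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3.5 points
```

With a margin of error around 3.5 points, "Harris up one" is statistically indistinguishable from a tie, which is the distinction Shawn suggests the coverage glossed over.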

Dr. Campbell:

Pollsters have long complained, and I'm talking about non-media pollsters, going back to the days of George Gallup and Elmo Roper and Archibald Crossley, some of the founding figures of election polling and issues polling in general. They, and especially Gallup, often complained that the public didn't understand how polling was done, that the public was in the dark and pollsters needed to do a better job of explaining how it works. That criticism has continued to be aired over the years, over the decades. You still hear pollsters say, well, we have to explain better, we have to do a better job of letting people know how we go about our work. And it seems like there is something to that.

Dr. Campbell:

But at the same time, news organizations do, as you suggest, put their own spin, their own interpretation, on it, and one of the easiest elements to inject when you're discussing and reporting on polls is momentum: which way does it seem to be going? Is Harris gaining points, or is Trump moving ahead? That conjures the notion of a horse race, which people have criticized for a long time. But that's what it is. That's what an election is. Elections are about who wins and who loses, and elections do matter. So that's really the most fundamental aspect of a campaign: who's going to win, who's going to lose. And if the polls can tell us that, and tell us accurately in advance, that by its nature is newsworthy.

Shawn:

So, as long as I've got you here, I want to ask, because my expectation is that I'm going to be around for a few more presidential election cycles, and I do not want to feel the way I did this time, regardless of who wins, at least as it relates to polls. So I guess I want to ask you, given your expertise: how do you approach polling and polling results throughout an election season, and, ultimately, can we trust them?

Dr. Campbell:

We have to treat polls warily. It's important to recognize that polling is a very fragile undertaking, prone to error, with a checkered record, and therefore it really behooves us as consumers of the news, consumers of polls, to keep that in mind, to treat them with some degree of caution, some degree of distance even. Of course, that's much easier said than done. As I said earlier in our conversation, I thought that the Selzer poll in Iowa signaled a lot more in terms of implications for this race than turned out to be so. It's easy to be overwhelmed by, or taken in by, a single poll result, and the Selzer poll was example number one in this election. I think it's important to step back and remember from these mistakes that you shouldn't put too much stock in any single individual poll, even one with a reputation attached to it, as the Selzer poll had. We're probably better off looking at impartial, even-handed aggregations of polling results. Even those can be skewed, but they give us a little better sense of which direction things seem to be heading. And keep in mind that polls are fallible, that polls are not prophecy, not gospel, and maybe even keep in mind other options for prognostication, like betting markets.
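The aggregation Dr. Campbell recommends can be made concrete with a little arithmetic. This is my sketch, not his method, and the poll figures are invented, not real results; it simply takes a sample-size-weighted mean of several polls so that no single poll dominates.

```python
# Hypothetical poll results for one state: (candidate_share, sample_size).
# The numbers are invented for illustration, not real 2024 data.
polls = [(0.47, 800), (0.49, 1200), (0.46, 600), (0.48, 1000)]

# Sample-size-weighted average: larger samples count for more.
total_n = sum(n for _, n in polls)
avg = sum(share * n for share, n in polls) / total_n
print(f"weighted average: {avg * 100:.1f}%")  # prints "weighted average: 47.8%"
```

Real aggregators weight by more than sample size (recency, pollster track record, methodology), but even this simple average illustrates the point: the outlier at 46% gets pulled toward the consensus rather than driving the headline.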

Dr. Campbell:

I don't know much about betting markets, but they did attract a lot of attention this campaign cycle: you know, what are they saying? People are really into them, trying to figure out what's going to happen. But we're not always going to know in advance how these races are going to shake out; the 2024 election is another example. I do think it's important to keep in mind that some pollsters have a better track record than others, and this is something that Nate Silver and FiveThirtyEight have tried to quantify; both of those sites have rankings of pollsters. But even that can be misleading, because Ann Selzer had an A-plus rating from Nate Silver, and her poll at the end of the campaign was not an A-plus poll by any means.

Dr. Campbell:

It was an F.

Shawn:

All right, final question. You ready for it? Okay: what's something interesting you've been reading, watching, listening to, or doing lately? It can be related to this, but it doesn't have to be.

Dr. Campbell:

It's not a terribly thrilling answer, but I've been recovering from knee replacement surgery over the last month, and it's going pretty well. It certainly is major surgery, though, and it does take time to bounce back from it. It doesn't sound terribly sexy, does it? But that's what's been preoccupying my time, along with writing about polling and predictions and presidential elections. I've been trying to be part of the conversation, to the extent that I can, about this election and the polling thereof. So it's been fun in that regard, and they've been very welcoming to my writing. A lot of it spins off from the book, an expanded edition of which came out early this year from the University of California Press. Another book is in the works somewhere, someday, sometime.

Shawn:

Well, strike while the iron's hot. I'll put a link to some of those articles in the show notes. I've read some of them; you've been very busy. I do want to circle back to the knee, though. How quickly do they get you up on your feet?

Dr. Campbell:

You're doing PT, physical therapy, the next day. The surgery was done at a surgery center, not a hospital. Some people have a hospital stay of two or three days; I was home six hours after the surgery and walked in the front door of the house. It's not pain-free, but they want you moving around real quickly.

Dr. Campbell:

I think the skill of the surgeon matters an awful lot in these matters too, and my mobility was there pretty quickly. The temptation is to do more than you really should, and you want to not do that. You want to keep it under control and not start walking, or certainly not running, any distance at all very soon. So it does take time, and there are potential moments when things fall back, when you overdo the physical therapy for a session, things like that. But yeah, they want you up and moving right away. We have stairs in our house, and climbing them is something I've been doing inevitably, not necessarily as at-home PT, but just to get around, to be able to get moving.

Shawn:

Wow, totally weird question, but do you ever have a moment where you're like, oh, I can tell there's metal in there?

Dr. Campbell:

The only time that has happened, and I've had knee replacement surgery on my right knee before this most recent surgery on my left, is going through airport metal detectors. That sets it off every time.

Shawn:

And how do they handle that?

Dr. Campbell:

You get frisked. You tell them that you have, you know, a bionic knee, essentially, and then they still frisk you, pat you down. But that's the only time. I really don't ever have a sense that there are metal moving parts in those joints. I really haven't had that sensation at all.

Shawn:

You are kind of a cyborg at some point, right?

Dr. Campbell:

I think so, yeah.

Dr. Campbell:

I mean, isn't that the definition? An artificial part? Yeah, absolutely. But it's essential. You reach a certain age, and I guess after certain activity over the years, arthritis has set in on the knees, or whatever joints you're talking about, and you have to have them replaced. And the technology has really improved dramatically in recent years, according to my understanding.

Dr. Campbell:

And I'm glad it has, because that arthritis can really be debilitating.

Shawn:

Yeah, well, I'm glad you're doing well.

Dr. Campbell:

Well, thank you. Thanks for asking.

Shawn:

Yeah, Dr. Campbell, thanks for being here. Thanks for taking the time for the conversation. I think it's been a little helpful in not only understanding but maybe reconciling the outcome for me.

Dr. Campbell:

Great. It's been my pleasure. Thank you.

Shawn:

It's clear that the story of polling is complex and layered. It influences campaigns, it influences how voters think about candidates and races, and even how they vote. Polls are not just tools for predicting outcomes; they're instruments that shape how campaigns are run, how the media frames narratives, and how voters perceive their own power in the democratic process. That means polling has implications for our democracy. When polls go wrong, as history has shown us time and again, the consequences are felt far beyond inaccurate headlines. They can distort public trust, skew political strategy, and even alter the very course of elections. But, as Dr. Campbell said, polls aren't going anywhere. So we're the ones required to hold pollsters to a high standard while approaching polling with a critical eye and a healthy dose of skepticism, because I don't want to feel the way I did in 2016 and 2024 again in 2028 and 2032. You get the picture. All right, check back next week for another episode of Deep Dive. Chat soon, folks. Thank you, bye.
