A couple of weeks ago, I wrote that the election forecasting industry needs to be put out of its misery and rebuilt to reflect the new realities of the 21st century. Every election of this century has had problems (as did the last couple of the 20th century). The current narrative — that everything was OK until the polls got it wrong in 2016 and 2020 — is nothing more than a myth.
Last week, I noted that the biggest reason to be skeptical of this myth is that the national polls were further off in 2012 than in either 2016 or 2020. In 2012, the national polls understated Barack Obama’s margin of victory by 3.2 percentage points. This year, they understated Donald Trump’s support by about 2.7 percentage points.
So, why wasn’t the polling “miss” a big issue in 2012? Why didn’t we hear all kinds of theories about “shy” Obama voters?
I believe the answer can be found in something I call analysis bias. Too often, analysts see what they want to see rather than what the data is telling them.
In 2012, left-leaning analysts looked at the data and assumed the actual results would be a little better for the Democrats. So, when the polls showed Obama barely ahead, most analysts expected him to outperform his numbers, and that’s what actually happened. The analysts looked smart, and there was nothing to complain about.
In 2020, however, the analysis bias backfired in a big way. The national polling averages showed Biden leading the popular vote by seven points, and he ended up winning by just under five. The results shouldn’t have been a shock. But the analysis bias had TV analysts dreaming of a double-digit blowout by Biden. When that didn’t happen, the election night pundits were shocked and embarrassed.
So, how can a bunch of really smart people make such a mistake?
Many factors (including groupthink) lead to such errors. But this year, part of the problem was that those who pore over every new number and survey were overly committed to an outdated polling methodology. They considered polls that used live operators to be “high quality,” the “gold standard.”
There was a time when I would have agreed that operator-assisted polling was the “gold standard.” But that was long ago in an era when people routinely talked on the phone. That’s not the world we live in today.
Talking on the phone — especially to a stranger — is pretty uncomfortable for most Americans in the 21st century. It’s especially uncomfortable when you don’t know if the person calling will consider you deplorable depending upon how you answer. As the Trafalgar Group’s Robert Cahaly said, “Relying on live callers for polls is especially bad in this modern era, where ‘social desirability bias’ is in full play.”
This year, operator-assisted polling led the analysts astray. Those polls consistently showed Biden with a bigger lead than polls using more modern techniques, and many of them showed the Democrat with a double-digit advantage.
Not only did those old-time polls lead the analysts astray, they also pushed the polling averages further from the final result. In fact, if you took the operator-assisted polls out of the Real Clear Politics average, the remaining polls gave a really good picture of the race. They were only a point or so off the actual results.
So, as we think about how to fix the election forecasting industry, one important step will be to recognize that the era of operator-assisted polling has come to an end.
Scott Rasmussen is an American political analyst and digital media entrepreneur. He is the author of “The Sun is Still Rising: Politics Has Failed But America Will Not.”