Pollsters ask the public all sorts of questions about their political beliefs. But what does the public actually believe? Is there any reason to assume that people are responding truthfully to the questions asked by pollsters?

That may seem like an odd question. Why would people lie to pollsters? I'm not sure why, but there is evidence that they do. Reason magazine has an excellent article by Ronald Bailey, which discusses the tribal nature of expressed views on factual questions with political implications:

A 2015 study in the Quarterly Journal of Political Science sought to distinguish partisan cheerleading from sincere partisan divergence. The Northwestern University political scientist John Bullock and his colleagues found that offering participants small payments for giving correct and “don’t know” answers to politically salient questions reduced the partisan gap between Republicans and Democrats by about 80 percent.

“To the extent that factual beliefs are determined by partisanship, paying partisans to answer correctly should not affect their responses to factual questions. But it does,” they observe. “We find that even modest payments substantially reduce the observed gaps between Democrats and Republicans, which suggests that Democrats and Republicans do not hold starkly different beliefs about many important facts.” 

The article cites another academic study that reported some truly astounding results after people were shown pictures of the modest crowds at the Trump inauguration and the large crowds at the Obama inauguration:

But are partisans really seeing different things? Perhaps they are mostly cheerleading their team rather than asserting actual beliefs. This is the thesis explored by the University of Nottingham philosopher Michael Hannon in a 2020 paper for Political Epistemology. He points to a survey of nearly 1,400 Americans conducted in January 2017. Researchers showed half of the respondents photos, simply labeled A and B, of the crowds on the National Mall during Barack Obama’s 2009 inauguration and Donald Trump’s 2017 inauguration. They were asked which photo depicted the crowd for each president. Forty-one percent of Trump voters said the photo with the larger crowd depicted the Trump inauguration, which was actually the one from the Obama inauguration. Only 8 percent of Hillary Clinton voters picked the wrong photo. The researchers argue that it is likely that Trump voters picked the photo with the larger crowd as a way to express their partisan loyalties and show their support for him.

More tellingly, the researchers asked the other half of the respondents which photo depicted the larger crowd. One answer was clearly correct. But Trump voters were seven times more likely (15 percent) than Clinton voters (2 percent) to assert that the much less populous photo of Trump’s inauguration had more people. Remarkably, 26 percent of Trump voters with college degrees answered incorrectly. “When a Republican says that Trump’s inauguration photo has more people, they are not actually disagreeing with those who claim otherwise. They’re just cheerleading,” argues Hannon. “People are simply making claims about factual issues to signal their allegiance to a particular ideological community.”

Former EconLog blogger Bryan Caplan occasionally bets people on specific factual questions, because he believes that people have less incentive to engage in wishful thinking when money is on the line. These academic studies support Bryan's claim that people don't always believe what they say they believe.

Robin Hanson has argued that some public policy decisions should be guided by prediction markets, and I have specifically advocated using NGDP futures markets to guide monetary policy. Public policy is likely to be more effective when based on views that will prove costly if incorrect.

PS.  In a recent post, I reported this story:

In 2006, lawmakers passed a bill banning almost all abortions, which Gov. Mike Rounds signed. It set off a brutal campaign that became the dominant issue in a busy election year that featured a governor’s race and 10 other ballot issues. Voters rejected the ban by 56% to 44%.

Abortion opponents decided to make another run in 2008, collecting enough signatures to return abortion to the ballot. The key difference between the two measures was that the 2008 effort included exceptions for rape and the mother’s health. Opponents figured the lack of exceptions in 2006 had doomed their efforts.

They were wrong. The 2008 vote was nearly identical to 2006, with 55% rejecting the measure.

I suspect they were wrong because they took seriously poll results suggesting a wide range of views on abortion. If you give people 4 or 5 options to choose from, the responses will spread out among those options, because people don't like to sound extreme or unreasonable. But in a binary up-or-down vote, it turns out that people are simply pro-life or pro-choice, with very little in between.

PPS.  North Dakota had a similar referendum, with a similar result.