Mood Affiliation or Confirming Evidence?
By David Henderson
Tyler Cowen introduced an important new idea in 2011 and gave it a name: the fallacy of mood affiliation. The idea is sound and important, but both the name he gives it and the way he defines it are faulty. Here's Tyler's original statement:
It seems to me that people are first choosing a mood or attitude, and then finding the disparate views which match to that mood and, to themselves, justifying those views by the mood. I call this the “fallacy of mood affiliation,” and it is one of the most underreported fallacies in human reasoning. (In the context of economic growth debates, the underlying mood is often “optimism” or “pessimism” per se and then a bunch of ought-to-be-independent views fall out from the chosen mood.)
Tyler gives four examples. The first three don't fit his own definition: in all three, people don't appear to be choosing a mood. Instead, they have come to a conclusion about something, a conclusion that might well be justified, and then dismiss contradictory evidence.
Consider his first example. (I won’t bother going through them all.)
1. People who strongly desire to refute those who predicted the world would run out of innovations in 1899 and thus who associate proponents of a growth slowdown with that far more extreme view. There’s simply an urgent feeling that any “pessimistic” view needs to be countered.
It could be that people have chosen a mood or attitude here, but there's a good chance that that's not what's going on. For one thing, moods are hard to choose. They tend to arise from the interaction between views we hold about the world and facts, real or perceived, about the world. Tyler is right that there's probably an urgent feeling that a pessimistic view in the case above needs to be countered. But that feeling probably doesn't stem from a "chosen" mood. More likely, the person has reached some conclusions and doesn't want to alter them to fit what might be a more complicated reality.
Again, I’m not criticizing Tyler’s point that there’s an important fallacy here. I’m saying that he’s got the name wrong and he even defines it wrong.
So what's my name for the fallacy, and what's my definition? I might be able to do better, but for now I think a better name is the "confirming evidence" fallacy. And the definition is: "Having reached a conclusion, possibly based on strong evidence, someone tries to counter all apparently contradictory or undercutting evidence by dismissing it."