Suppose you publicly declare, “X causes Y.” If people have strong emotions about X or Y, they’re likely to misinterpret you in at least one of the following ways.
1. Misinterpretation of certainty: “So X certainly causes Y.”
Reply: Who said anything about certainty? In normal English, assertions are not certain. That’s why people will often hear a statement, then request further information about your confidence. As in: “Are you certain of that?” or even “Are you absolutely certain of that?”
2. Misinterpretation of necessity: “So X necessarily causes Y.”
Reply: Who said anything about necessity? In normal English, assertions describe what is, not what must be.
3. Misinterpretation of universality: “So every X causes Y.”
Reply: Again, who said anything of the kind? In normal English, assertions describe what typically happens, not what invariably happens.
4. Misinterpretation of monocausality: “So every Y is caused by X.”
Reply: In normal English, naming one cause does not preclude the existence of endless other causes.
5. Misinterpretation of hyperbole: “So even a grain of X causes tons of Y.”
Reply: Again, naming X as a cause of Y says nothing about how responsive Y is to X.
Given how childish all of these misinterpretations seem, why do they run rampant? The best story, in my view, is that these misinterpretations are offshoots of simpler forms of motivated reasoning. As Jonathan Haidt observes, when we hear a statement we want to believe, we usually ask ourselves, “Can I believe it?” When we hear a statement we don’t want to believe, in contrast, we usually ask ourselves, “Must I believe it?”
My extension: When we want to believe that “X causes Y,” we rarely impute any of the preceding misinterpretations to the speaker. After all, misinterpretations make it harder to answer “Can I believe it?” affirmatively. In contrast, when we don’t want to believe “X causes Y,” the Great Misinterpretations are exceedingly helpful. Once we inject the humble claim that “X causes Y” with spurious certainty, necessity, universality, monocausality, or hyperbole, the answer to “Must I believe it?” is bound to be “No.”
What’s the better epistemic path?
The obvious step: Don’t selectively ask “Can I believe it?” or “Must I believe it?” Instead, just ask, “What’s the probability?”
The less obvious step, though, is: Before you assign probabilities, listen to the speaker’s precise words. If he didn’t claim certainty, necessity, universality, monocausality, or hyperbole, he probably believes none of them. So don’t pretend otherwise!
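The contrast between Haidt's two questions and "What's the probability?" can be sketched as a toy simulation. Everything here is illustrative: the signal reliability, the dismissal rate, and the claim's truth value are made-up parameters, not anything the post specifies.

```python
import random

random.seed(0)

def bayes(prior, signal, hit=0.7):
    """Symmetric update: ask "What's the probability?", weighing
    supporting and opposing evidence alike. A signal points the
    right way with probability `hit`."""
    like_true = hit if signal else 1 - hit
    like_false = (1 - hit) if signal else hit
    return like_true * prior / (like_true * prior + like_false * (1 - prior))

def motivated(prior, signal, hit=0.7, dismiss=0.9):
    """Motivated update: welcome evidence gets the easy "Can I
    believe it?" pass; unwelcome evidence is usually explained away
    via some convenient misinterpretation, so "Must I believe it?"
    comes back "No"."""
    if not signal and random.random() < dismiss:
        return prior  # opposing evidence dismissed, credence unchanged
    return bayes(prior, signal, hit)

# Suppose the claim is actually false, so most signals oppose it.
signals = [random.random() < 0.3 for _ in range(200)]

p_sym = p_mot = 0.5
for s in signals:
    p_sym = bayes(p_sym, s)
    p_mot = motivated(p_mot, s)

print(f"symmetric updater's credence: {p_sym:.3f}")  # driven toward 0
print(f"motivated updater's credence: {p_mot:.3f}")  # stays high
```

The symmetric updater is pushed toward the truth by the preponderance of opposing signals; the motivated updater, filtering those same signals through "Must I believe it?", stays confident in a false claim.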
READER COMMENTS
Alan Goldhammer
Nov 13 2018 at 3:21pm
Professor Caplan and others would do well to read Rex Sorgatz’s wonderful book, “The Encyclopedia of Misinformation: A Compendium of Imitations, Spoofs, Delusions, Simulations, Counterfeits, Impostors, Illusions, Confabulations, Skullduggery, Frauds, Pseudoscience, Propaganda, Hoaxes, Flimflam, Pranks, Hornswoggle, Conspiracies & Miscellaneous Fakery.”
He covers this topic in depth, and one comes away wondering that anyone ever believed the JT LeRoy saga, or that the Turing Test has been so consistently misinterpreted from what Turing originally proposed. Take a step back and consider the agnotology of it all.
JFA
Nov 13 2018 at 3:30pm
I think that pretty much only 4 (which is just a logical fallacy when applied to the statement “x causes y”) and 5 (which can’t be made without more info) can be considered true misinterpretations (maybe… maybe… you can toss in #1). #2 and #3 are basically saying the same thing. If x necessarily causes y, then for every x there will be a y (that’s just modus ponens). If no qualifications are given when making causal claims, normal English correctly imbues the statement with the qualities of necessity and universality because THAT’S THE DEFINITION OF CAUSE. Rather than misinterpretations of causal statements running rampant (i.e. blaming the listener), it is misrepresentations that run rampant (i.e. you really should blame the speaker).
If someone says that “x causes y” without any qualifications, this suggests that the person meant to represent the statement as certain, necessary, and universal. If someone did not want this impression given, they would say “x probably causes y” or “x causes y under certain circumstances…”
Hey, you know what the speaker can do to avoid non-malicious misrepresentation? He could say, “x probably causes y under certain circumstances.”
Scott Sumner
Nov 13 2018 at 6:42pm
So when people say “smoking causes cancer”, the average listener assumes they believe smoking always causes cancer? Is that your claim about “normal English”?
JFA
Nov 14 2018 at 7:00am
That is exactly what I am saying. If you say “smoking causes cancer” to someone, what are they likely to say? Either “yeah, sure… duh” or “Well, my granddad smoked for 50 years and died peacefully in his sleep.” The person has rightly interpreted you as implying that smoking leads to cancer. If I tell my kids not to smoke and they ask why, I tell them “because smoking causes cancer.” With that statement, I am purposefully neglecting any kind of wishy-washy language that would cause my kids to doubt the connection between smoking and cancer.
If a public health official releases dietary guidelines that say to decrease saturated fat consumption because it causes heart disease, how would you expect many in the public to respond? They would say something like, “My granddad ate bacon and sausage every day of his life, and he died peacefully at the age of 96.”
That’s why when someone does not want to impart the qualities of necessity and universality, they use terms like “linked to”, “is correlated with”, etc. If the relationship is truly causal but needs qualifications, the speaker says, “x causes y in these certain cases”. If there are enough cases in which someone might have a response starting with “Well my granddad”, maybe the speaker should use clarifying language if they don’t want the claim “misinterpreted”.
JFA
Nov 14 2018 at 10:29am
I also think the use of the term “normal English” can be a bit slippery. Scott’s use of the term seems to imply that the “normal English” understanding of “cause” carries with it some preexisting understanding of the underlying relationship between smoking and cancer, as in “the normal English understanding of ‘smoking causes cancer’ doesn’t imply necessity/universality because we already know that not all smokers develop cancer.” So Scott’s use of “normal English” suggests this definition: “normal English” understanding of the word cause is the definition of cause (which has a necessary/universal meaning) plus some knowledge of the relationship between x and y that might dampen the necessary/universal meaning of cause.
Bryan’s use does not suggest this kind of case dependent definition. He just thinks that the common usage doesn’t mean what the dictionary says. I forget where I read this (maybe even from Bryan), but let’s say you are reading some non-fiction book and you are really enjoying it. You come to a part where the author discusses something you are familiar with (say economics). You disagree with the author slightly or maybe think the author needed to clarify some points, maybe add in some qualifications. Then you keep reading and enjoy the rest of the book. Now, when you disagreed with the author, it wasn’t because you didn’t have a firm grasp on the definitions of the words he was using; it was because you took the author at his word and you thought he was wrong. You knew claims should have been qualified. You knew that his language did not convey what you thought was the truth.
So why would it be that when someone says “x causes y”, rampant “misinterpretations” are due to the listener/reader not using the “normal English” definition of cause? Couldn’t it be, if “misinterpretation” of “cause” is so “rampant,” that the error lies with speakers and writers who misuse the word “cause”?
Robert Rounthwaite
Nov 13 2018 at 7:29pm
Sometimes, things are clearer with a specific instance than with the general example. Here’s the most obvious one: “smoking causes cancer.”
1. Certainty? Yes, I think this is the normal interpretation. People tend to say things that they believe, and clarification is required to see if there’s a chance the person is not certain. “John took my candy” is an accusation, not equivalent to “John is quite likely to have taken my candy,” even if I back down to that position when pressed.
2. Necessity? No, I can imagine a treatment that prevented people from getting cancer from smoking. Or perhaps vaping does not cause cancer but still counts as smoking.
3. Universality? No, of course not. This is a common way that people dismiss things: my uncle smoked his whole life and never got cancer, so you are wrong. But in normal conversation, causes are not always equivalent to the logical statement of A implies B.
4. Monocausality? No, other things can cause cancer, even lung cancer. Again, I do see this as an often-deployed argument: this B was caused by something else, so you are wrong.
5. ‘Even a grain’? I more often see this deployed in reverse: since A causes B, you should avoid doing even a tiny bit of A. But no, smoking one cigarette will not change your lifetime risk of cancer to any appreciable degree. I’ve smoked 3 cigars in my life, the last a decade ago, and I doubt to any ill effect.
So, my conclusion is that Scott is right on two through four, and only partially right on one, since most people who say things expect you to believe them, although clarification *can* sometimes elicit doubt.
[I see that Scott has added this exact example, but I’m submitting anyway.]
John Alcorn
Nov 13 2018 at 8:23pm
Bryan,
I have a nit to pick with your stimulating, insightful blogpost.
You posit a context in which people have strong emotions about X or Y. And you make a plea for people to exercise the principle of charity when interpreting shorthand public declarations that X causes Y.
In the course of your analysis, you switch from addressing the public speaker as “you”, to addressing the uncharitable audience as “you”. The switch seems seamless because you get there via “we”:
A complement to the principle of charity in interpretation is: “Know your audience.” A speaker who wishes to be understood correctly (i.e., to communicate effectively) should exercise circumspection (rather than imprecise shorthand), when phrasing public declarations to people who have strong emotions about the topic.
It takes two to tango in communication. The context in this blogpost seems to involve an asymmetry between a dispassionate speaker and an emotional audience. In such circumstances, a plea to the speaker — Know your audience! — probably will be more effective than a plea to the audience to exercise the principle of charity in interpretation.
John Alcorn
Nov 13 2018 at 9:25pm
Causal explanation in the social sciences is tricky. Per Bryan Caplan’s analysis, causal explanation differs (often subtly) from statements about certainty, necessitation, or sufficiency; and also from expressions of hyperbole.
But there is more. Causal explanation differs also, subtly, from spurious correlation, just-so stories, and prediction.
And the experts often disagree about specific causal explanations in the social sciences.
Is it any wonder that imprecise, shorthand, public declarations of the type, “X causes Y,” about emotionally charged topics, are greatly misinterpreted?
Matt C.
Nov 14 2018 at 3:12pm
Part of the problem is imprecise language. When we say “smoking causes cancer” (to borrow an example from another comment), what we really mean is “smoking significantly increases the probabilities of certain types of cancer” or perhaps “smoking, when combined with other factors, causes cancer.”
That imprecision allows for occurrences of 1, 2, 3, and perhaps even 5. Or rather, intentionally ignoring that “X causes Y” should not be taken literally allows for such occurrences.
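Matt C.’s probabilistic reading can be made concrete with a toy cohort. The counts below are invented purely for illustration; the point is only the qualitative pattern, that “causes” here means “raises the probability.”

```python
# Toy cohort for the probabilistic reading of "smoking causes cancer".
# All counts are made up for illustration; only the pattern matters.
smokers = {"n": 10_000, "cancer": 1_500}
nonsmokers = {"n": 10_000, "cancer": 100}

p_smoker = smokers["cancer"] / smokers["n"]           # 0.15
p_nonsmoker = nonsmokers["cancer"] / nonsmokers["n"]  # 0.01
relative_risk = p_smoker / p_nonsmoker                # ~15x

# "Causes" as risk elevation, not universality or monocausality:
assert p_smoker < 1.0     # not every smoker gets cancer (contra #3)
assert p_nonsmoker > 0.0  # smoking is not the only cause (contra #4)

print(f"P(cancer | smoker)    = {p_smoker:.2f}")
print(f"P(cancer | nonsmoker) = {p_nonsmoker:.2f}")
print(f"relative risk         = {relative_risk:.0f}x")
```

On this reading, “smoking causes cancer” is shorthand for a large relative risk, which is perfectly consistent with most smokers never developing cancer and with some non-smokers developing it.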
I would add that not all of the effects of “strong belief” are irrational. Bryan provided an example in class: given all we know about economics, how likely is it that the results of the Card-Krueger study (which found a positive correlation between the minimum wage and employment) are correct? In other words, if a neoclassical economist hears “minimum wage increases cause increases in employment,” he or she will likely look for holes in the argument, because it conflicts with almost everything they understand about economics.
Certainly, if they take the lazy approach, they will fall back on one of the straw-man arguments presented here. A better approach would be to say, “That can’t be right, but I can’t yet identify why it’s wrong.”
Finally, this: these arguments are used so often because they are effective. The masses often will not probe any further than the surface of the argument.