Two Track Mind: How Irrational Policy Endures
By Alexander Combs
First, imagine the following scenario. You are standing beside a railroad track when you notice a runaway railcar bearing down on five workers. Their deaths are imminent; they have no hope of avoiding the car. Beside you is a switch that will divert the car onto a side track, where a single worker stands. Pulling the switch will save the five, but the lone worker will surely be killed. Do you pull the switch?

Now imagine a second scenario. You are standing on a footbridge above a stretch of railroad track. Suddenly, you notice a runaway railcar approaching five workers on the track. Their deaths are imminent; they have no hope of avoiding the car. Next to you stands a large man, large enough that if you were to push him off the bridge onto the tracks his body would stop the car. He would face certain death, but he would prevent the deaths of the five others. Do you push him off the bridge?
If you are like most people, you answered that you would pull the switch, but that you would not push the man from the bridge. For philosophers, these results have always been troubling. Why would people find it acceptable to take the life of one person in order to save five others in the first scenario, but not in the second? Both require taking an action that directly ends the life of someone who would otherwise have been unharmed. Why the inconsistency?
In an attempt to better understand this puzzle, researchers[1] at Princeton posed these scenarios, along with several more mundane ones, to a group of subjects while they underwent functional Magnetic Resonance Imaging (fMRI)[2] brain scans. This technique works by detecting small changes in the flow of oxygenated blood that accompany neural activity. From this, researchers are able to observe which brain structures are actively engaged in answering a question.
Scientists have long known that different structures within the brain perform specialized functions.[3] But previously, the only method available to researchers was to correlate physical damage to the brain observed at autopsy with specific afflictions suffered during the patient’s lifetime. fMRI allows researchers to observe these correlations in real time. When they turned to these machines to see how people make decisions, the results were surprising.
When subjects answered the first railcar scenario, their brain scans looked remarkably similar to scans taken when they were asked to solve everyday questions such as which coupon to use at the supermarket. The most active areas, in this scenario, were those associated with working memory. When subjects answered the second railcar scenario, however, the scans showed relatively less activity in working memory and more activity in those areas associated with empathy—specifically, one area associated with interpreting other people’s feelings from facial expressions. The first question, then, was answered logically, while the second question was answered emotionally.
The researchers theorized that while rationally and arithmetically the two scenarios are equivalent—five saved, one killed—the footbridge scenario engages the emotional centers in such a way that people perceive it as morally incongruent with the first.
Prior to this study, some scholars attributed the difference to the doctrine of double effect, which holds that a harmful action performed as the means to a greater good is unethical, while a good action with equally harmful side effects is ethical.[4] Other scholars disagreed.
Wherever the truth lies, one thing is clear: not all deaths are judged equally. In the moral calculus of the mind emotional distance is heavily weighted. Just as it is easier to drop a bomb on a battalion than to bayonet a soldier, it is easier to support a policy with many unseen victims over one with a few highly visible ones.
Take, for example, the Food and Drug Administration’s approval process. All new drugs sold in the U.S. must undergo a series of rigorous clinical trials in order to gain approval for sale. Pharmaceutical companies must prove not only safety, but efficacy as well. This process typically takes seven to ten years and can cost upwards of $900 million per drug.[5]
Although safe and effective medicine is a noble aim, there are unseen costs to this policy—costs which can greatly outweigh the benefits. For every new treatment awaiting approval, there are untold numbers of people who may die simply because they were denied access to it. In one particularly egregious example, it was estimated that over 100,000 Americans died needlessly due to the seven-year delay in approving beta-blockers for heart conditions.[6]
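That 100,000 figure can be put in per-year terms with a back-of-the-envelope division. The even-spread assumption below is purely an illustration, not a claim from the cited source:

```python
# Back-of-the-envelope arithmetic for the beta-blocker delay discussed above.
# Assumes, for illustration only, that deaths were spread evenly over the delay.

estimated_excess_deaths = 100_000  # deaths attributed to the approval delay
delay_years = 7                    # length of the approval delay

deaths_per_year = estimated_excess_deaths / delay_years
print(round(deaths_per_year))  # 14286
```

By this rough accounting, each year of delay corresponded to roughly fourteen thousand deaths—none of which appeared on the evening news.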
Beyond the deaths attributable to delay are those that result from the extraordinary expense of testing. Since pharmaceutical companies must recover the cost of these trials, they are incentivized to produce a few “home-run” drugs rather than numerous “singles.” According to the Office of Technology Assessment, most companies will not begin the approval process if the market for a drug is less than $100 million a year—with few exceptions. Compounding this problem, most drugs have “off-label” uses (meaning that they are prescribed for conditions the FDA has not approved them for—which, ironically, is legal once a drug is approved for treatment of any one condition). Essentially, this means that 25 fewer drug approvals per year may amount to 50 fewer treatments.
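The off-label arithmetic above can be made explicit. The two-uses-per-drug multiplier is an assumption chosen to reproduce the 25-to-50 example, not a sourced estimate:

```python
# Hypothetical illustration: fewer approvals compound into even fewer treatments
# once off-label uses are counted. The multiplier here is assumed, not sourced.

approvals_lost_per_year = 25  # drugs never developed because trials cost too much
uses_per_drug = 2             # the approved use plus one off-label use (assumed)

treatments_lost = approvals_lost_per_year * uses_per_drug
print(treatments_lost)  # 50
```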
Why, then, would we implement and maintain such a perverse institution? Because we have an overwhelming, emotionally charged reaction to seeing someone suffer or die from taking an “unsafe” drug. Even when we logically understand that we are keeping medicine from those whose lives may depend on it, we are still compelled to favor the few and visible victims over the many and nameless.
When television programs such as 60 Minutes show people suffering from unintended effects of a drug, we feel enraged and disgusted. We wonder how such tragedies are permitted. Those who suffer and die because of drugs that will never be available to them receive no television coverage; no headlines decry the injustice. They simply pass away untreated, unrecognized victims of an irrational policy.
The Princeton study of railroad death and decision-making does offer some hope, however. Not all subjects in the study yielded to their emotional misgivings. Unsurprisingly, people who made consistent decisions in the two scenarios took more time than those who reacted emotionally. That’s a lesson we would all do well to learn the next time we watch 60 Minutes. Significant decisions with emotional consequences are best made with care.
Breuer, Hubertus. 2004. Anguish and Ethics. Scientific American Special, Vol. 14, No. 1.
Crane, Edward H., and David Boaz, eds. 2003. Cato Handbook for Congress, 108th Congress. Ch. 38: 405-408.
Greene, Joshua D., et al. 2001. An fMRI Investigation of Emotional Engagement in Moral Judgment. Science 293: 2105-2108.
Honderich, Ted, ed. 1995. The Oxford Companion to Philosophy. Oxford: Oxford University Press.
Klein, Daniel, and Alexander Tabarrok. 2001. Is the FDA Safe and Effective? Available at: http://www.fdareview.org
Kolb, Bryan, and Ian Q. Whishaw. 1996. Fundamentals of Human Neuropsychology. W. H. Freeman & Co.
Zimmer, Carl. 2004. Whose Life Would You Save? Discover, Vol. 25, No. 4.
1. An fMRI Investigation of Emotional Engagement in Moral Judgment. Joshua D. Greene et al., Science, Vol. 293, pgs. 2105-2108; September 14, 2001.
2. fMRI, or functional Magnetic Resonance Imaging, is a relatively recent form of brain imaging. The traditional, structural MRI is used primarily to visualize opaque tissue for pathological or physiological alterations. fMRI presents, instead, a visual representation of neural activity in response to a specific stimulus by measuring the flow of oxygenated blood.
3. The first to provide proof of this “localization” was the French surgeon Paul Broca in 1861. He was able to map language centers in the brain by observing lesions in patients suffering from aphasia.
4. Killing the single worker was a side effect of attempting to save the five in the first scenario, while killing the fat man was instrumental to saving the five in the second scenario. According to the doctrine of double effect, then, the first is ethical, while the second is not.
5. Tufts University Center for the Study of Drug Development, 2003. See http://csdd.tufts.edu/NewsEvents/RecentNews.asp?newsid=29.
6. Louis Lasagna, Tufts University Center for the Study of Drug Development.