Since the late 1950s, the regulation of risks to health and safety has taken on ever-greater importance in public policy debates—and actions. In its efforts to protect citizens against hard-to-detect hazards such as industrial chemicals and against obvious hazards in the workplace and elsewhere, Congress has created or increased the authority of the Food and Drug Administration, the Environmental Protection Agency, the Occupational Safety and Health Administration, the Federal Trade Commission’s Bureau of Consumer Protection, and other administrative agencies.
Activists in the pursuit of a safer society decry the damage that industrial progress wreaks on unsuspecting citizens. Opponents of the “riskless society,” on the other hand, complain that government is unnecessarily proscribing free choice in the pursuit of costly protection that people do not need or want. This article describes some facts about risk and discusses some academic theories about why people on both sides of the risk debate take the positions they do.
The health of human beings is a joint product of their genetic inheritance (advice: choose healthy and long-lived parents), their way of life (the poor person who eats regularly and in moderation, exercises, does not smoke, does not drink to excess, is married, and does not worry overly much is likely to be healthier than the rich person who does the opposite), and their wealth (advice: be rich). Contrary to common opinion, living in a rich, industrialized, technologically advanced country that makes considerable use of industrial chemicals and nuclear power is a lot healthier than living in a poor, nonindustrialized nation that uses little modern technology and few industrial chemicals. That individuals in rich nations are, on average, far healthier, live far longer, and can do more of the things they want to do at corresponding ages than people in poor countries is a rule without exception.
Prosperous also means efficient. The most polluted nations in the world, many times more polluted than democratic and industrial societies, are the former communist countries of Central Europe and the Soviet Union. To produce one unit of output, communist countries used two to four times the amount of energy and material used in capitalist countries. On average, individuals unfortunate enough to live in an inefficient economy die younger and have more serious illnesses than those in Western and industrial democracies. A little richer is a lot safer. As Peter Huber demonstrated in Regulation magazine, “For a 45-year-old man working in manufacturing, a 15 percent increase in income has about the same risk-reducing value as eliminating all hazards—every one of them—from his workplace.”
Among the many facts that might be observed from Figure 1 and Tables 1A and 1B is that longevity has increased dramatically since 1900. The trend continues if we look further back—boys born in Massachusetts in 1850 could expect to live to an average age of 38.3 years, girls to 40.5.
Turning to death rates, note the decline by half since 1900 of deaths from all forms of accidents and the spectacular declines in all sorts of diseases. The 88 percent drop in deaths from pneumonia and influenza is par for the course. On the other side of the ledger, cancer deaths continue to rise, though their increase has slowed, and deaths from major cardiovascular diseases remain high. Why these discrepancies? Cancer is largely a disease of old age. When people died at roughly half the present life expectancy, they died before they had an opportunity, if one may call it that, to get cancer. Of course, people must die of something. Lacking other information, it is usual to classify deaths as due to heart failure, given that heart stoppage is one of the signs of death.
Figure 1 Expectation of Life in the United States, 1900–2000
Note: The dip in life expectancy in 1918 was due to the global influenza epidemic (“pandemic”). An estimated 675,000 Americans died of influenza during the pandemic, ten times as many as in the First World War.
The most dangerous activities are precisely what we might think they are—sports such as motorcycling and parachuting and occupations such as firefighting and coal mining. Yet many of the risks that people have begun to worry about in recent years are far smaller than generally perceived. However low the risk of being killed by lightning (see Table 2), the risk of getting cancer from drinking tap water (chlorine forms chloroform, which is a weak carcinogen) is even lower, and the harm—if any—done by pesticides in food is smaller still. One of the smallest risks that statisticians have measured—dying of a cancer caused by the release of plutonium from a deep-space probe that loses control during its swing around the Earth to gain velocity and burns up in the atmosphere—comes in at three-millionths of one percent.
In its regulations specifying maximum discharges of potentially harmful substances from factories, the Environmental Protection Agency (EPA) sets a safety threshold of one additional death in a million given a lifetime of exposure. How, we might ask, did the EPA arrive at one in a million? Well, let’s face it, no real man tells his girlfriend that she is one in a hundred thousand. The root of the “one in a million” standard, however, can be traced to the Food and Drug Administration’s (FDA) efforts to find a number that was essentially equivalent to zero. That this is a stringent standard can be seen by noting that each of us has roughly one chance in a million of expiring every hour of every day of our lives.
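The claim that each of us faces roughly a one-in-a-million chance of dying in any given hour can be checked with back-of-the-envelope arithmetic. The sketch below assumes a round 76-year life expectancy (a figure chosen for illustration, not taken from the article's tables):

```python
# Back-of-the-envelope check: if everyone dies exactly once within a lifetime,
# the average per-hour chance of death is 1 divided by lifetime hours.
LIFE_EXPECTANCY_YEARS = 76          # assumed round figure for illustration
HOURS_PER_YEAR = 365.25 * 24

lifetime_hours = LIFE_EXPECTANCY_YEARS * HOURS_PER_YEAR
per_hour_risk = 1 / lifetime_hours

print(f"Lifetime hours: {lifetime_hours:,.0f}")          # 666,216
print(f"Average risk of death per hour: {per_hour_risk:.2e}")  # 1.50e-06
```

The result, about 1.5 in a million per hour, shows why a one-in-a-million risk over an entire lifetime of exposure is essentially equivalent to zero.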
Many experts argue that insisting on essentially zero risk is going too far. As John D. Graham, director of the Harvard School of Public Health’s Center for Risk Analysis, wrote, “No one seriously suggested that such a stringent risk level should be applied to a hypothetical maximally exposed individual.”1 This mythical “maximally exposed” human being is created by assuming that he or she lives within two hundred meters of the offending industrial plant, lives there for a full seventy years, remains outdoors day and night—or at least all day—and will get cancers at the same level as rodents or other small animals that are bred to be especially susceptible to such cancers and are given doses proportionally thousands of times larger than would be given to any person other than those who receive lifetime occupational exposures on the job.
Another questionable assumption is that cancer causation is a linear process: damage is assumed to rise in direct proportion to exposure, so that no dose, however small, is safe.
This is known as the “linear no-threshold hypothesis.” Scientific evidence increasingly shows that there are indeed threshold effects. Such effects were observed five hundred years ago by the physician and scientist Paracelsus, sometimes known as the father of toxicology, who noted: “Dosis facit venenum” (the dose makes the poison). One can readily observe the threshold effect in action. Consuming two gallons of 100-proof liquor in an hour would be enough to kill most of us. If the linear no-threshold hypothesis applied to alcohol, one would expect that if 256 people consumed an ounce of liquor each, then on average one of them would keel over and die. It would be only a slight exaggeration to say that were the EPA to regulate ethyl alcohol—that is, the alcohol in alcoholic beverages—the same way it regulates other chemical compounds, we would each be limited to sixteen-millionths of an ounce per lifetime.
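The liquor arithmetic above can be made explicit. Under the linear no-threshold assumption, the probability of death scales in direct proportion to dose, so splitting one lethal dose (two gallons, or 256 fluid ounces) among 256 people yields one expected death. This is only a sketch of the article's own numbers; real alcohol toxicity is, of course, strongly threshold-dependent:

```python
# Linear no-threshold (LNT) arithmetic for the liquor example.
# Two gallons = 256 fluid ounces is taken as one lethal dose.
LETHAL_DOSE_OZ = 2 * 128  # 2 gallons in fluid ounces

def lnt_death_probability(dose_oz):
    """Probability of death if risk scales linearly with dose, with no threshold."""
    return min(dose_oz / LETHAL_DOSE_OZ, 1.0)

# 256 people each drink one ounce: expected deaths under LNT.
expected_deaths = 256 * lnt_death_probability(1)
print(expected_deaths)  # 1.0 -- on average, one drinker "should" keel over
```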
In many cases a hormesis effect may apply as well. Hormesis is the phenomenon whereby a small dose has the opposite effect of a large dose. Many things that are bad for us in large quantities, for example, are good for us in small quantities. Some familiar examples are vitamin A, vitamin C, and aspirin.
If threshold or hormesis effects apply, then the cancers animals develop as a result of being subjected to huge doses over short periods tell us essentially nothing about the reactions of human beings. To go from mouse to man, for instance, requires statistical adjustments for the hugely different weights of the two creatures and for the hugely different doses. Many statistical models fit the data that scientists have gathered on risks, and although these models’ risk estimates vary by factors of thousands, there is no scientific way of choosing among them. Only if the mechanism by which a chemical causes cancer were well understood would it be possible to choose a good model. In short, current measures of risk from low-level exposures to industrial technology have no validity whatsoever. This explains why people’s health keeps improving while government estimates say that risks keep growing.
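The point about model choice can be illustrated with a toy extrapolation. The two dose-response models below (linear and quadratic, both hypothetical choices for illustration) are calibrated to the same observed risk at a high experimental dose, yet their predictions diverge by a factor of a thousand when extrapolated to a dose a thousand times lower:

```python
# Two dose-response models fitted to the same high-dose observation,
# then extrapolated 1,000-fold downward. Both "fit the data"; the
# low-dose predictions differ by a factor of 1,000.
HIGH_DOSE = 1000.0    # arbitrary units (e.g., mg/kg/day in a rodent test)
OBSERVED_RISK = 0.5   # hypothetical excess cancer risk at the high dose

k_linear = OBSERVED_RISK / HIGH_DOSE           # model A: risk = k * dose
k_quadratic = OBSERVED_RISK / HIGH_DOSE**2     # model B: risk = k * dose**2

low_dose = HIGH_DOSE / 1000                    # human-scale exposure

risk_linear = k_linear * low_dose
risk_quadratic = k_quadratic * low_dose**2

print(f"Linear model:    {risk_linear:.2e}")       # 5.00e-04
print(f"Quadratic model: {risk_quadratic:.2e}")    # 5.00e-07
print(f"Ratio: {risk_linear / risk_quadratic:,.0f}x")  # 1,000x
```

With more candidate models (threshold, supralinear, and so on), the spread widens further, which is why the choice of model, not the data, drives low-dose risk estimates.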
Why are some people frightened of certain risks while others are not? Surveys of risk perception show that knowledge of the known hazards of a technology does not determine whether, or to what degree, an individual thinks that technology is safe or dangerous. This holds true not only for laymen but also for experts in risk assessment. Instead, the most powerful predictors of how people perceive risk appear to be “trust in institutions” and “self-rated liberal and conservative identification.” In other words, these findings strongly suggest that people interpret riskiness through a framework built on their opinion of the validity of institutions.
According to one cultural theory, people choose what to fear as a way to defend their way of life. The theory hypothesizes that adherents of a hierarchical culture will approve of technology, provided it is certified as safe by their experts. Competitive individualists will view risk as opportunity and hence will be optimistic about technology. Egalitarians will view technology as part of the apparatus by which corporate capitalism maintains inequalities that harm society and the natural environment.
One study sought to test this theory by comparing how people rate the risks of technology compared with risks from social deviance (departures, such as criminal behavior, from widely approved norms), war, and economic decline. The results are that egalitarians fear technology immensely and think that social deviance is much less dangerous. Hierarchists, by contrast, think technology is basically good if their experts say so but that social deviance leads to disaster. And individualists think that risk takers do a lot of good for society and that if deviants don’t bother them, they won’t bother deviants; but they fear war greatly because it stops trade and leads to conscription. Thus, there is no such thing as a risk-averse or risk-taking personality. People who either take or avoid all risks are probably certifiably insane; neither would last long. Think of a protester against, say, nuclear power. He is evidently averse to risks posed by nuclear power, but he also throws his body on the line (i.e., takes risks in opposing it).
Other important literature pursues risk perception through what is known as cognitive psychology. Featuring preeminently the pathbreaking work of Daniel Kahneman and Amos Tversky, using mainly small-group experiments in which individuals are given tasks involving gambling, this work demonstrates that most of us are poor judges of probability. More important, perhaps, is the population’s general conservatism: a great many people care more about avoiding loss than they do about making gains. Therefore, they will go to considerable lengths to avoid losses, even in the face of high probabilities of making considerable gains.
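Loss aversion of the kind Kahneman and Tversky documented is often summarized with their prospect-theory value function. The sketch below uses the median parameter estimates from their 1992 study (α = 0.88, λ = 2.25); these parameters are not given in this article, so treat the specific numbers as illustrative:

```python
# Prospect-theory value function, using Tversky & Kahneman's (1992)
# median parameter estimates. Losses loom larger than gains because
# the loss-aversion coefficient lambda is greater than 1.
ALPHA = 0.88    # diminishing sensitivity to both gains and losses
LAMBDA = 2.25   # loss-aversion coefficient

def value(x):
    """Subjective value of a monetary gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** ALPHA

gain, loss = value(100), value(-100)
print(f"Value of winning $100: {gain:+.1f}")
print(f"Value of losing  $100: {loss:+.1f}")  # over twice as large in magnitude
```

A $100 loss thus feels more than twice as bad as a $100 gain feels good, which is consistent with people going to considerable lengths to avoid losses even when sizable gains are likely.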
In extreme form this conservatism expresses itself as a modern variation of Pascal’s wager known as the “precautionary principle.” As enacted into law by the government of the city of San Francisco in 2003, it states: “Where threats of serious or irreversible damage to people or nature exist, lack of full scientific certainty about cause and effect shall not be viewed as sufficient reason for the City to postpone measures to prevent the degradation of the environment or protect the health of its citizens.”
The precautionary principle is a marvelous piece of rhetoric. It places the speaker on the side of the citizen—I am acting for your health—and portrays the opponents of the contemplated ban or regulation as indifferent or hostile to the public’s health. The rhetoric works in part because it assumes what actually should be proved: that the health effects of the regulation will be superior to those of the alternative. And that comparison can be made only by assuming that the proposed regulation itself carries no health detriments.
If one is concerned primarily with health, then the question for any regulation ought to be whether the associated health gains will outweigh the health costs. This is precisely the calculation the precautionary principle says we need not make. Nor is there any moral norm stating that health must be our only value, or even our dominant one. Emphasizing a single value to which all others must be subordinated is a sign of fanaticism. How much is a marginal gain in health worth compared with losses in other values such as freedom, justice, and excellence? The answer will vary by individual. A Patrick Henry, for instance, will not thank you for attempts to protect life at the expense of liberty.
With regard to the consequences of technological risk, there are two major strategies for improving safety: anticipation versus resilience. The risk-averse strategy seeks to anticipate and thereby prevent harm from occurring. In order to make a strategy of anticipation effective, it is necessary to know the quality of the adverse consequence expected, its probability, and the existence of effective remedies. The knowledge requirements and organizational capacities required to make anticipation an effective strategy—to know what will happen, when, and how to prevent it without making things worse—are large, often impossibly so.
A strategy of resilience, on the other hand, requires reliance on experience with adverse consequences once they occur in order to develop a capacity to learn from the harm and bounce back. Resilience, therefore, requires the accumulation of large amounts of generalizable resources—such as organizational capacity, knowledge, wealth, energy, and communication—that can be used to craft solutions to problems that the people involved did not know would occur. Thus, a strategy of resilience requires much less predictive capacity but much more growth, not only in wealth, but also in knowledge. Hence it is not surprising that systems such as capitalism, based on incessant and decentralized trial and error, accumulate the most resources. Strong evidence from around the world demonstrates that such societies are richer and produce healthier people and a more vibrant natural environment.
Crouch, Edmund, and Richard Wilson. Risk/Benefit Analysis. 2d ed. Cambridge: Harvard University Press, 2001.
Dake, Karl, and Aaron Wildavsky. “Theories of Risk Perception: Who Fears What and Why?” Daedalus 119, no. 4 (1990): 41–61.
Dietz, Thomas, and Robert Rycroft. The Risk Professionals. New York: Russell Sage Foundation, 1987.
Douglas, Mary, and Aaron Wildavsky. Risk and Culture. Berkeley: University of California Press, 1982.
Kahneman, Daniel, Paul Slovic, and Amos Tversky, eds. Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press, 1982.
Kahneman, Daniel, and Amos Tversky. “Variants of Uncertainty.” Cognition 11, no. 2 (1982): 143–157.
Keeney, Ralph L. “Mortality Risks Induced by Economic Expenditures.” Risk Analysis 10, no. 1 (1990): 147–159.
Morone, Joseph G., and Edward J. Woodhouse. Averting Catastrophe. Berkeley: University of California Press, 1986.
Rand, Ayn, and Peter Schwartz. Return of the Primitive: The Anti-industrial Revolution. New York: Penguin Putnam, 1999.
Schwarz, Michael, and Michael Thompson. Divided We Stand: Redefining Politics, Technology and Social Choice. Philadelphia: University of Pennsylvania Press, 1990.
Slovic, Paul. “Informing and Educating the Public About Risk.” Risk Analysis 6, no. 4 (1986): 407.
Wildavsky, Aaron. But Is It True? A Citizen’s Guide to Environmental Health and Safety Issues. Cambridge: Harvard University Press, 1995. Published posthumously.
Wildavsky, Aaron. Searching for Safety. New Brunswick, N.J.: Transaction Press, 1988.
John D. Graham, “Improving Chemical Risk Assessment,” Regulation 14, no. 4 (1991): pp. 14–18; online at: http://www.cato.org/pubs/regulation/reg14n4-graham.html.