Facts and Fears: Societal Perception of Risk

ABSTRACT - Subjective judgments, whether by experts or lay people, are a major component in any risk assessment. If such judgments are faulty, efforts at public protection are likely to be misdirected. The similarities and differences between lay and expert evaluations are examined in the context of a specific set of activities and technologies.


Paul Slovic, Baruch Fischhoff, and Sarah Lichtenstein (1981), "Facts and Fears: Societal Perception of Risk," in NA - Advances in Consumer Research Volume 08, eds. Kent B. Monroe, Ann Arbor, MI: Association for Consumer Research, Pages: 497-502.



Paul Slovic, Decision Research, A Branch of Perceptronics [Eugene, Oregon.]

Baruch Fischhoff, Decision Research, A Branch of Perceptronics [Eugene, Oregon.]

Sarah Lichtenstein, Decision Research, A Branch of Perceptronics [Eugene, Oregon.]

[Much of the material presented in this paper is taken from our paper "Perceived Risk" in R. C. Schwing and W. A. Albers, Jr. (Eds.), Societal Risk Assessment: How Safe is Safe Enough? New York: Plenum Press, 1980.]




People respond to the hazards they perceive. If their perceptions are faulty, efforts at public and environmental protection are likely to be misdirected. For some hazards, extensive statistical data are readily available; for example, the frequency and severity of motor vehicle accidents are well documented. The hazardous effects of other familiar activities, such as the consumption of alcohol and tobacco, are less readily discernible; their assessment requires complex epidemiological and experimental studies. However, even when statistical data are plentiful, the "hard" facts can only go so far towards developing policy. At some point human judgment is needed to interpret the findings and determine their relevance.

Still other hazards, such as those associated with recombinant DNA research or nuclear power, are so new that risk assessment must be based on complex theoretical analyses such as fault trees (see Figure 1), rather than on direct experience. Despite their sophistication, these analyses, too, include a large component of judgment. Someone, relying on educated intuition, must determine the structure of the problem, the consequences to be considered, and the importance of the various branches of the fault tree.

Once the analyses have been performed, they must be communicated to the various people who actually manage hazards, including industrialists, environmentalists, regulators, legislators, and voters. If those people do not see, understand, or believe these risk statistics, then distrust, conflict and ineffective hazard management are likely.

In this paper, we shall explore some of the psychological elements of the risk assessment process that are critical to the management of hazards. Our basic premises are that both the public and the experts are necessary participants in that process, that assessment is inevitably subjective, and that understanding public perceptions is crucial to effective decision making.



[FIGURE 1. Source: P.E. McGrath, "Radioactive Waste Management," Report EURFNR 1204, Karlsruhe, Germany, 1974.]


In order to aid the hazard management process, a theory of perceived risk must explain people's extreme aversion to some hazards, their indifference to others, and the discrepancies between these reactions and experts' recommendations. Why, for example, do some communities react vigorously against locating a liquid natural gas terminal in their vicinity despite the assurances of experts that it is safe? Why, on the other hand, do many communities situated on earthquake faults or below great dams show little concern for experts' warnings? Such behavior is doubtless related to the perceived probability of possible consequences from these hazards. The studies reported below broaden the discussion. They ask, when people judge the risk inherent in a technology, are they referring only to the (possibly misjudged) number of people it could kill or also to other, more qualitative features of the risk it entails?

Quantifying Perceived Risk

In one study, we asked four different groups of people to rate 30 activities (e.g., smoking, firefighting), substances (e.g., food coloring), and technologies (e.g., railroads, aviation) according to the present risk of death from each (Fischhoff 1978, Slovic 1980(a)). Three groups were from Eugene, Oregon; they included 30 college students, 40 members of the League of Women Voters (LOWV), and 25 business and professional members of the "Active Club." The fourth group was composed of 15 persons selected nationwide for their professional involvement in risk assessment. This "expert" group included a geographer, an environmental policy analyst, an economist, a lawyer, a biologist, a biochemist, and a government regulator of hazardous materials.

All these people were asked, for each of the 30 items, "to consider the risk of dying (across all U.S. society as a whole) as a consequence of this activity or technology." In order to make the evaluation task easier, each activity appeared on a 3" x 5" card. Respondents were told first to study the items individually, thinking of all the possible ways someone might die from each (e.g., fatalities from non-nuclear electricity were to include deaths resulting from the mining of coal and other energy production activities as well as electrocution; motor vehicle fatalities were to include collisions with bicycles and pedestrians). Next, they were to order the items from least to most risky and, finally, to assign numerical risk values by giving a rating of 10 to the least risky item and making the other ratings accordingly. They were also given additional suggestions, clarifications and encouragement to do as accurate a job as possible.

Table 1 shows how the various groups ranked these 30 activities and technologies according to riskiness. There were many similarities between the three groups of laypeople. For example, each group believed that motorcycles, motor vehicles and handguns were highly risky, while vaccinations, home appliances, power mowers, and football posed relatively little risk. However, there were strong differences as well. Active Club members viewed pesticides and spray cans as relatively much safer than did the other groups. Nuclear power was rated as highest in risk by the LOWV and student groups, but only eighth by the Active Club. The students viewed contraceptives as riskier and mountain climbing as safer than did the other lay groups. Experts' judgments of risk differed markedly from the judgments of laypeople. The experts viewed electric power, surgery, swimming and X-rays as more risky than did the other groups and they judged nuclear power, police work and mountain climbing to be much less risky.



What Determines Risk Perception?

What did people mean, in this study, when they said that a particular technology was quite risky? A series of additional studies was conducted to answer this question.

Perceived risk compared to frequency of death.  When people judge risk, are they simply estimating frequency of death? To answer this question, we collected the best available technical estimates of the annual number of deaths for the activities included in our study. For some, such as commercial aviation and handguns, there is good statistical evidence based on counts of known victims. For others, such as nuclear or fossil-fuel power plants, available estimates are based on uncertain inferences about incompletely understood processes, such as the effect of low doses of radiation on latent cancers. For still others, such as food coloring, we could find no estimates of annual fatalities.

For the 25 cases for which we found technical fatality estimates, we compared these estimates with perceived risk. The experts' judgments of risk were so closely related to these statistical or calculated frequencies that it seems reasonable to conclude that they both knew what the technical estimates were and viewed the risk of an activity or technology as synonymous with them. The risk judgments of laypeople, however, were only moderately related to the annual death rates, raising the possibility that, for them, risk may not be synonymous with fatalities. In particular, the perceived risk of nuclear power was remarkably high compared to its estimated number of fatalities.

Lay fatality estimates.  Before concluding that perceived risk does not mean annual fatalities, we investigated the possibility that laypeople based their risk judgments on subjective fatality estimates which were inaccurate. To test this hypothesis, we asked additional groups of students and LOWV members "to estimate how many people are likely to die in the U.S. in the next year (if the next year is an average year) as a consequence of these 30 activities and technologies."

These subjective fatality estimates are shown in columns 2 and 3 of Table 2. If laypeople really equate risk with annual fatalities, their own estimates of annual fatalities, no matter how inaccurate, should be very similar to their judgments of risk. There was, however, only a low to moderate agreement between these two sets of judgments (r = .60 for LOWV and .26 for students). Of particular importance was nuclear power, which had the lowest fatality estimate and the highest perceived risk for both LOWV members and students. Overall, laypeople's risk perceptions were no more closely related to their own fatality estimates than they were to the technical estimates. Thus we can reject the idea that laypeople wanted to equate risk with annual fatalities, but were inaccurate in doing so. Apparently, laypeople incorporate other considerations besides annual fatalities into their concept of risk.
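Agreement figures like r = .60 are ordinary Pearson product-moment correlations computed across the rated items. A minimal sketch of that computation in Python; the item values below are invented for illustration and are not data from the study:

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical perceived-risk ratings and subjective annual fatality
# estimates for a handful of items (illustrative values only).
risk = [10, 40, 95, 30, 70]
fatalities = [500, 2000, 100, 1500, 9000]

r = pearson_r(risk, fatalities)
```

Because fatality estimates span several orders of magnitude, analyses of this kind often correlate risk ratings with log fatalities rather than raw counts.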

Disaster potential.  One striking result is the fact that the LOWV members and students assigned nuclear power the highest risk values and the lowest annual fatality estimates. One possible explanation is that LOWV members expected nuclear power to have a low death rate in an average year, but considered it to be a high risk technology because of its potential for disaster.

In order to understand the role played by expectations of disaster in determining laypeople's risk judgments, we asked these same respondents to indicate for each activity and technology "how many times more deaths would occur if next year were particularly disastrous rather than average." The geometric means of these multipliers are shown in columns 4 and 5 of Table 2. For most activities, people saw little potential for disaster. The striking exception is nuclear power, with a mean disaster multiplier in the neighborhood of 100.



For any individual, an estimate of the expected number of fatalities in a disastrous year could be obtained by applying the disaster multiplier to the estimated fatalities for an average year. When this was done for nuclear power, almost 40% of the respondents expected more than 10,000 fatalities if next year were a disastrous year. More than 25% expected 100,000 or more fatalities. An additional study (Slovic, in press), in which people were asked to describe their mental images of the consequences of a nuclear accident, showed an expectation that a serious accident would likely result in hundreds of thousands, even millions, of immediate deaths. These extreme estimates can be contrasted with the Reactor Safety Study's conclusion that the maximum credible nuclear accident, coincident with the most unfavorable combination of weather and population density, would cause only 3,300 prompt fatalities (U.S. Nuclear Regulatory Commission 1975). Furthermore, that study estimated the odds against an accident of this magnitude occurring next year to be about 3,000,000 : 1.
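The arithmetic behind these percentages is straightforward: a respondent's disastrous-year estimate is their average-year fatality estimate multiplied by their disaster multiplier, and the multipliers are summarized by the geometric mean, the natural average for ratio judgments. A sketch with invented respondent values (not study data):

```python
import math

def geometric_mean(values):
    # nth root of the product, computed in log space for numerical stability.
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical respondents: (average-year fatality estimate, disaster multiplier).
respondents = [(100, 50), (200, 150), (50, 300), (500, 80)]

# Summary disaster multiplier across respondents.
gm = geometric_mean([m for _, m in respondents])

# Each individual's expected fatalities in a disastrous year.
disastrous = [avg * m for avg, m in respondents]

# Share of respondents expecting more than 10,000 fatalities.
share_over_10k = sum(d > 10_000 for d in disastrous) / len(disastrous)
```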

Disaster potential seems to explain much of the discrepancy between the perceived risk and the annual fatality estimates for nuclear power. Yet, because disaster plays only a small role in most of the other activities and technologies, it provides only a partial explanation of the perceived risk data.

Qualitative characteristics.  Are there other determinants of risk perceptions besides frequency estimates? We asked experts, students, LOWV members and Active Club members to rate the 30 technologies and activities on nine qualitative characteristics that have been hypothesized to be important (Lowrance 1976).

The "risk profiles" made from mean ratings on these characteristics showed nuclear power to have the dubious distinction of scoring at or near the extreme on all of the characteristics associated with high risk. Its risks were seen as involuntary, delayed, unknown, uncontrollable, unfamiliar, potentially catastrophic, dreaded, and severe (certainly fatal). Figure 2 contrasts its unique risk profile with non-nuclear electric power and another radiation technology, X-rays, both of whose risks were judged to be much lower. Both electric power and X-rays were judged more voluntary, less catastrophic, less dreaded, and more familiar than nuclear power.



Across all 30 items, ratings of dread and of the severity of consequences were closely related to lay judgments of risk. In fact, the risk judgments of the LOWV and student groups could be predicted almost perfectly from ratings of dread and severity, the subjective fatality estimates, and the disaster multipliers in Table 2. Experts' judgments of risk were not related to any of the nine qualitative risk characteristics.

Judged seriousness of death.  In a further attempt to improve our understanding of perceived risk, we examined the hypothesis that some hazards are feared more than others because the deaths they produce are much worse than deaths from other activities. We thought, for example, that deaths from risks imposed involuntarily, from risks not under one's control, or from hazards that are particularly dreaded might be given greater weight in determining people's perceptions of risk.

However, when we asked students and LOWV members to judge the relative seriousness of a death from each of the 30 activities and technologies, the differences were slight. The most serious forms of death (from nuclear power and handguns) were judged only about 2 to 4 times worse than the least serious forms of death (from alcoholic beverages and smoking). Furthermore, across all 30 activities, judged seriousness of death was not closely related to perceived risk of death.


Our recent work extends these studies of risk perception to a broader set of hazards (90 instead of 30) and risk characteristics (18 instead of 9). Although data have thus far been collected only from college students, the results appear to provide further insights into the nature of risk perception. In addition, they suggest that some accepted views about the importance of the voluntary-involuntary distinction and the impact of catastrophic losses may need revision.

Design of the Study

For the extended study 90 hazards were selected to cover a very broad range of activities, substances, and technologies. To keep the rating task to a manageable size, some people judged only risks, others judged only benefits and others rated the hazards on five of the risk characteristics. Risks and benefits were rated on a 0-100 scale (from "not risky" to "extremely risky").

After rating the hazards with regard to risk, respondents were asked to rate the degree to which the present risk level would need to be adjusted to make the risk level acceptable to society. The instructions for this adjustment task read as follows:

The acceptable level of risk is not the ideal risk. Ideally, the risks should be zero. The acceptable level is a level that is "good enough," where "good enough" means you think that the advantages of increased safety are not worth the costs of reducing risk by restricting or otherwise altering the activity. For example, we can make drugs "safer" by restricting their potency; cars can be made safer, at a cost, by improving their construction or requiring regular safety inspection. We may, or may not, believe such restrictions are necessary.

If an activity's present level of risk is acceptable, no special action need be taken to increase its safety. If its riskiness is unacceptably high, serious action, such as legislation to restrict its practice, should be taken. On the other hand, there may be some activities or technologies that you believe are currently safer than the acceptable level of risk. For these activities, the risk of death could be higher than it is now before society would have to take serious action.

On the answer sheets, participants were provided with three columns labeled: (a) "Could be riskier: it would be acceptable if it were _____ times riskier;" (b) "It is presently acceptable;" and (c) "Too risky: to be acceptable, it would have to be _____ times safer."

The 18 risk characteristics included eight from the earlier study. The ninth characteristic from that study, controllability, was split into two separate characteristics representing control over the occurrence of a mishap (preventability) and control over the consequences given that something did go wrong. The remaining characteristics were selected to represent additional concerns thought to be important by risk assessment researchers. As in the earlier study, all characteristics were rated on a bipolar 1-7 scale representing the extent to which the characteristic described the hazard. For example:

15. To what extent does pursuit of this activity, substance or technology have the potential to cause catastrophic death and destruction across the whole world?

very low catastrophic potential   1   2   3   4   5   6   7   very high catastrophic potential


Risk characteristics.  The mean ratings for the eighteen risk characteristics revealed a number of interesting findings. For example, the risks from most of these hazards were judged to be at least moderately well known to science (63 had mean ratings below 3, where 1 was labeled "known precisely"). Most risks were thought to be better known to science than to those who were exposed. The only risks for which those exposed were thought to be more knowledgeable than scientists were those from police work, marijuana, contraceptives (judged relatively unknown to both science and those exposed), boxing, skiing, hunting, and several other sporting activities.

Only 25 of the hazards were judged to be decreasing in riskiness; two of them (surgery and pregnancy/childbirth) were thought to be decreasing greatly. Risks from sixty-two hazards were judged to be increasing, thirteen of these markedly so. The risks from crime, warfare, nuclear weapons, terrorism, national defense, herbicides and nuclear power were judged to be increasing most. None of the hazards were judged to be easily reducible. The lowest of the 90 means on this characteristic was 3.2 (where 1 was labeled "easily reduced"); it was obtained for home appliances and roller coasters.

The ratings of the various risk characteristics tended to be rather highly intercorrelated, as shown in Table 3. For example, risks with catastrophic potential were also judged as quite dreaded (r = .83). Application of a statistical technique known as factor analysis showed that the pattern of intercorrelations could be represented by three underlying dimensions or factors. The nature of these factors can be seen in Table 3, in which the characteristics were ordered on the basis of the factor analysis. The first 12 characteristics represent the first factor; they correlate highly with one another and less highly with the remaining six characteristics. In other words, these data suggest that risks whose severity is believed not to be controllable tend also to be seen as dreaded, catastrophic, hard to prevent, fatal, inequitable, threatening to future generations, not easily reduced, increasing, involuntary, and threatening to the rater personally. The nature of these characteristics suggests that this factor be called "Dread." The second factor primarily reflects five characteristics that correlate relatively highly with one another and less highly with other characteristics. They are: observability, knowledge, immediacy of consequences, and familiarity (see Table 3). We have labeled this factor "Familiarity." The third factor is dominated by a single characteristic, the number of people exposed. This characteristic can be seen in Table 3 to be relatively independent of the other characteristics.
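Factor analysis of this sort summarizes the 18 x 18 matrix of intercorrelations with a few underlying dimensions. A minimal sketch of the idea, using principal-component extraction (eigendecomposition of the correlation matrix) on a small synthetic matrix; the study's exact extraction and rotation method may differ:

```python
import numpy as np

# Synthetic 4x4 correlation matrix: characteristics 1-2 cluster together,
# as do 3-4 (mimicking "Dread"- and "Familiarity"-style clusters).
R = np.array([
    [1.0, 0.8, 0.1, 0.1],
    [0.8, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.7],
    [0.1, 0.1, 0.7, 1.0],
])

# Eigendecomposition of the correlation matrix, sorted descending.
eigvals, eigvecs = np.linalg.eigh(R)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings = eigenvectors scaled by sqrt(eigenvalue); the first two
# components recover the two clusters.
loadings = eigvecs[:, :2] * np.sqrt(eigvals[:2])

# Share of total variance captured by the first two factors.
explained = eigvals[:2].sum() / eigvals.sum()
```

Each hazard's factor score is then its (standardized) ratings projected onto these loadings, which is what locates the hazards in the factor space plotted in Figure 3.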

Just as each of the 90 hazards has a mean score on each of the 18 risk characteristics, it also has a score on each factor. These scores give the location of each hazard within the factor space. Figure 3 plots the hazards on Factors 1 and 2. Items at the high end of Factor 1 are all highly dreaded. Items at the negative end of Factor 1 are seen as posing risks to individuals and being injurious rather than fatal. The placement of items on the vertical dimension, Factor 2, intuitively fits the theme of familiarity and observability associated with the dimension label. Hazards lying at the extremes on Factor 3 (number exposed) are shown in Table 4.



This three-dimensional factor structure is of interest because it differs considerably from the two-dimensional structure obtained from ratings of 30 hazards on 9 characteristics (Fischhoff 1978). That structure, in which Factor 1 was labeled "severe" (i.e., certain to be fatal) and Factor 2 was labeled "high technology," had been found to be remarkably consistent across four different groups of lay and expert respondents (Slovic 1980(a)). The present results indicate that the particular set of hazards and the particular set of risk characteristics under study can have an important effect on the nature of the observed "dimensions of risk."

One point of commonality between the present analysis and the previous one is that nuclear power is an isolate in both. Although activities such as crime, nerve gas, warfare and terrorism are seen as similarly dreaded (Factor 1), none of these is judged as new or as unknown (Factor 2) as nuclear power.






Although research into the nature of perceived risk is still incomplete, we offer the following tentative conclusions:

1.  Perceived risk is quantifiable and predictable.

2.  Groups of laypeople sometimes differ systematically in their perceptions. Experts and lay persons also differ, particularly with regard to the probability and consequences of catastrophic accidents.

3.  The degree of adjustment judged necessary to make risk levels "acceptable" is strongly determined by the perceived level of current risk; the greater the perceived risk, the greater the desired reduction. Perceived benefit plays a secondary role; all else being equal, somewhat less reduction in risk is deemed necessary to make highly beneficial activities "acceptable."

4.  Many of the eighteen characteristics of risk hypothesized to be important to the public do correlate highly with perceived risk and desire for risk reduction. Certain clusters of characteristics are highly interrelated across hazards indicating that they can be combined into higher-order characteristics or "factors." Three factors, labeled Dread, Familiarity, and Exposure, seem able to account for most of the interrelations among the eighteen characteristics.


Fischhoff, Baruch, Slovic, P., Lichtenstein, S., Read, S. and Combs, B. (1978), "How Safe is Safe Enough? A Psychometric Study of Attitudes Towards Technological Risks and Benefits," Policy Sciences, 9, 127-152.

Lowrance, W. (1976), Of Acceptable Risk: Science and the Determination of Safety, Los Altos, California: William Kaufmann Co.

Slovic, Paul, Fischhoff, B. and Lichtenstein, S.(1980(a)), Expressed Preferences, Eugene, Oregon: Decision Research Report 80-1.

Slovic, Paul, Fischhoff, Baruch and Lichtenstein, Sarah (1980(b)), "Perceived Risk," Societal Risk Assessment: How Safe is Safe Enough?, R. C. Schwing and W. A. Albers, Jr. (eds.), New York: Plenum Press.

Slovic, Paul, Lichtenstein, S. and Fischhoff, B. (in press), "Images of Disaster: Perception and Acceptance of Risks from Nuclear Power," Energy Risk Management, G. Goodman and W. D. Rowe (eds.), London: Academic Press.

U.S. Nuclear Regulatory Commission (1975), Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants, WASH 1400 (NUREG-75/014), Washington, D.C.


