


Citation:

Tina Kiesler and Vicki G. Morwitz (2001), "Special Session Summary: What Are the Chances? Biases in the Assessment of Probability and Risk," in E - European Advances in Consumer Research Volume 5, eds. Andrea Groeppel-Klein and Frank-Rudolf Esch, Provo, UT: Association for Consumer Research, Pages: 195.

European Advances in Consumer Research Volume 5, 2001, Page 195

SPECIAL SESSION SUMMARY

WHAT ARE THE CHANCES? BIASES IN THE ASSESSMENT OF PROBABILITY AND RISK

Tina Kiesler, California State University, Northridge, U.S.A.

Vicki G. Morwitz, New York University, U.S.A.

Past research provides a mixed picture of the quality of human judgment of probabilities and assessments of risk. Research based on inductive probability learning suggests that people behave much like intuitive statisticians, providing probability assessments that by and large conform to the norms of the probability calculus (e.g., Peterson and Beach 1967). Although these studies concluded that people behave like intuitive statisticians, they also found that people's probability and risk assessments are not perfect and that in some cases people behave more like biased statisticians. For example, subjects do not always extract as much information from samples as Bayes' rule requires, they sometimes misperceive elements of Bayes' rule (e.g., priors, that is, base rates), or they fail to combine those elements correctly. Still, by and large this stream of research concluded that humans perform well in assessing probabilities and risk.

A very different view of human probability and risk assessment came to light in the early seventies, when Kahneman and Tversky introduced their influential research stream and identified many heuristics and biases people use in assessing probabilities and risk (Kahneman and Tversky 1973; Tversky and Kahneman 1974). Their research suggested that people are not simply myopic or imperfect Bayesians, but rather are not Bayesians at all. Instead, they argued that people assess probabilities and risk using heuristic processes, such as assessments of representativeness (similarity) or availability (ease of retrieval from memory). Their research showed that use of such heuristics can result in reasonable probability and risk judgments in some circumstances, but that in many situations it leads to serious and systematic errors, variously referred to as "cognitive biases" or "cognitive illusions." Examples of such biases are the overconfidence bias (Mahajan 1992), the self-positivity bias (Taylor and Brown 1988), and base-rate neglect (Ofir and Lynch 1984). Thus, this research provided a more negative picture of human competence in probability and risk assessment, suggesting that people's cognitive algorithms deviate from normative principles.

Most of the past research on people's assessments of probabilities and risk has demonstrated either that people behave like intuitive statisticians or that people's assessments are biased. Few studies have focused on understanding why such assessments are sometimes accurate and other times biased. In addition, little research has examined the behavioral consequences of biased assessments. In this special session, we sought to answer the following questions: What factors influence the methods people use to assess probabilities and risk? What factors produce biases in such assessments? And what are the behavioral consequences of the different methods people use? In an effort to illustrate the robustness of the phenomena, this session explored biases in the assessment of probabilities and risk, and the antecedents and consequences of these biases, in three different settings: contracting an infectious disease, having a successful auction outcome, and predicting the outcome of a sporting event.

Each of the four papers demonstrated the antecedents and consequences of one or more biases in the assessment of probability and risk. The first paper, by Menon, Block and Ramanathan, examined individuals' predictions of their likelihood of contracting hepatitis C and the risk of contracting the disease from engaging in different behaviors. The authors demonstrated that individuals' assessments of their likelihood of contracting hepatitis C are prone to a self-positivity bias, and they identified factors that reduce this bias. In a similar domain, the second paper, by Sen, Bhattacharya and Johnson, illustrated that individuals' estimates of their (untested) partner's likelihood of being infected with HIV can be biased, particularly when they themselves have been tested for the disease. The extent of this bias is moderated by several individual-level variables, such as promiscuity, perceived vulnerability and, more generally, sexual attitudes.

The third paper examined the effect of biases in probability assessment in a different domain, namely auctions. This paper, by Greenleaf, examined how the process sellers use to assess the utilities and probabilities of auction outcomes affects the reserve prices they set in open English auctions. In this auction context, in order to set the optimal reserve, sellers must assess the utility they earn from different auction outcomes and the probabilities of each outcome. Greenleaf showed that sellers are prone to certain biases: their utilities are affected by anticipated regret and rejoicing, and their probability assessments by a tendency to favor frequency information over magnitude information. Finally, Kiesler, Morwitz and Yorkston examined how the process used to assess an outcome varies for experts and novices. In the context of predicting sports outcomes, they showed that, contrary to intuition, experts are not more accurate than novices in assessing probabilities. Instead, an inverted-U relationship describes the link between knowledge and predictive accuracy, with the most knowledgeable subjects in fact performing the worst. Their data indicated that this occurs because the best-performing subjects tended to use base rates (teams' past performance) more often than others. More knowledgeable subjects did not perform as well in this task because they relied too often on their own specific basketball knowledge and did not use base rates as often as they should. In addition, the authors found evidence of a further perceptual bias they call the false-loyalty bias.

The papers in this session provided distinct perspectives on the study of biases in the assessment of probability and risk. The studies employed different empirical and methodological techniques to examine the antecedents and consequences of the self-positivity bias, the overconfidence bias, the frequency-over-magnitude bias, and base-rate neglect. Together, the papers provide an understanding of the factors influencing the methods people use to assess probabilities and risk, the factors that lead to biases in these assessments, and the behavioral consequences of the biased assessments.

REFERENCES

Kahneman, Daniel and Amos Tversky (1973), "On the Psychology of Prediction," Psychological Review, 80, 237-251.

Mahajan, Jayashree (1992), "The Overconfidence Effect in Marketing Management Predictions," Journal of Marketing Research, 24 (August), 329-342.

Ofir, Chezy, and John G. Lynch, Jr. (1984), "Context Effects on Judgment Under Uncertainty," Journal of Consumer Research, 11 (September), 668-679.

Peterson, Cameron R. and Lee Roy Beach (1967), "Man as an Intuitive Statistician," Psychological Bulletin, 68 (1), 29-46.

Taylor, Shelley E. and Jonathon D. Brown (1988), "Illusion and Well-Being: A Social Psychological Perspective on Mental Health," Psychological Bulletin, 103 (2), 193-210.

Tversky, Amos and Daniel Kahneman (1974), "Judgment Under Uncertainty: Heuristics and Biases," Science, 185, 1124-1131.
