Hypothesis Testing and Consumer Behavior: "If It Works, Don't Mess With It"

Stephen J. Hoch, University of Chicago
ABSTRACT - "Most generics are just as good as brand name products." "The higher the price, the better the quality." "Japanese cars are much better made than American cars." "Everyone should go to the dentist twice each year." "My husband is definitely a meat and potatoes guy." The above set of statements are examples of everyday beliefs that consumers might hold concerning how the world around them functions. Within the context of this paper, we will consider these beliefs as hypotheses about consuming. I will examine hypothesis testing behavior as a basis for consumer learning and address three issues: (1) how people come up with these rules about consuming; (2) how these rules are maintained or changed in response to new information; and (3) what marketers can do to maintain or change consumers' rules to their own advantage.
[ to cite ]:
Stephen J. Hoch (1984), "Hypothesis Testing and Consumer Behavior: 'If It Works, Don't Mess With It'", in NA - Advances in Consumer Research Volume 11, eds. Thomas C. Kinnear, Provo, UT: Association for Consumer Research, Pages: 478-481.

Advances in Consumer Research Volume 11, 1984      Pages 478-481


LEARNING RULES ABOUT CONSUMING

We are all familiar with instances of consumer learning: learning about microwave ovens by reading Consumer Reports; learning to make a hazelnut torte from Paul Bocuse; learning never to go shopping on the days immediately following Thanksgiving and Christmas for fear of suffocation; learning that Miracle Whip is not the same as real mayonnaise. We know learning when we see it, but a precise definition is more problematic (Hilgard & Bower 1966). With the emergence of the information processing metaphor in cognitive psychology, discussion of learning has almost reached extinction (e.g., Lachman, Lachman and Butterfield 1979) in favor of memory processes and structure. However, everyone still implicitly agrees that we do learn.

The Consumer is Given a Rule

How do people learn rules about consuming? The most common and efficient way we learn is by listening to and remembering what other people tell us. Mothers, fathers, teachers, books, friends, and TV provide us with most of our hypotheses. From this perspective much of learning can best be represented as the remembering of previously derived (and tested) rules. One of the most important rules we learn is that in most instances a tried and true rule already exists even if we can't remember it; and it is much easier to ask someone else than to derive or rediscover the rule ourselves. Young children very quickly learn this rule about learning. While the young infant has a learning repertoire limited to exploratory trial and error and possibly some limited imitation skills, consider the two most common utterances of the ever curious two year old: "me, me, me," and slightly later, "why, Mommy, why?" On the surface this type of learning seems inherently less interesting than cases requiring the consumer to be a more active participant in the rule discovery process. However, it is interesting to consider why some rules are remembered and others forgotten, some believed and others discarded.

The Consumer Induces a Rule

A second way we learn is by inducing a rule based upon what we observe happening around us. Scholars in virtually every field have speculated on how people generate hypotheses, but there is little actual research on the underlying psychological processes (Gettys & Fisher 1979). Clearly trial and error is one frequent behavior. Though we typically frown upon random responding in favor of techniques based upon more systematic variation, Campbell (1966) has argued that blind variation coupled with selective retention is the basis for creative thought. In a developmental sense, this seems quite plausible--an infant will try a multitude of small actions but retain only those actions that work (aka the law of effect).

Why do people experience such difficulty in generating hypotheses? At least part of the problem seems to occur as a product of searching memory for relevant information. Consider the following task. You observe a particular pattern of data and then are asked to generate multiple hypotheses (or rules) that might account for the data. In many cases, there will be many, sometimes an infinite number of plausible hypotheses, but most people will be hard pressed to generate even a handful of alternatives. In previous research on predictive judgment (Hoch 1983), I found that the problems in hypothesis generation are often due to retrieval interference during the search of associative memory. I found that the more hypotheses subjects had previously generated, the harder it was to come up with new hypotheses. The assumption is that subjects use the "to be explained" data as a retrieval or generation cue. As they think harder and harder about how to explain the data, what do they naturally think about first? The hypotheses that they have already generated.

These problems in hypothesis generation can be accounted for by a model of retrieval in associative memory proposed by Shiffrin (1970) and Rundus (1973). As applied to the hypothesis generation task, the model makes three assumptions: (1) generation is probabilistic, based upon the strength of the associations between the retrieval cue and potential alternative hypotheses; (2) the generation process is analogous to sampling with replacement, where previously generated hypotheses can be retrieved again; and (3) the very act of generation serves to strengthen the association between the generated hypothesis and the retrieval cue. Therefore the probability of retrieving a previously generated hypothesis is increased, which in turn reduces the probability of generating new hypotheses. It is easy to see how such retrieval processes in associative memory could lead to what we commonly refer to as "thinking blocks." People spontaneously rehearse the things they already know over and over again, precluding the generation of new ideas. From this perspective, the "Eureka" phenomenon and the practice of putting aside the problem until another day are more understandable--the strength of the associations between previously generated hypotheses and the retrieval stimulus decay in the interim.
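The dynamics of the three assumptions above can be sketched in a short simulation. The specific numbers (candidate pool size, strength increment) are illustrative assumptions of mine, not parameters from Shiffrin or Rundus:

```python
import random

def generate_hypotheses(n_candidates=20, n_attempts=30, boost=1.0, seed=0):
    """Simulate hypothesis generation as retrieval from associative memory."""
    rng = random.Random(seed)
    # Assumption 1: retrieval is probabilistic, driven by the strength of
    # the cue-hypothesis association (all candidates start out equal).
    strength = [1.0] * n_candidates
    retrieved = set()
    new_per_attempt = []
    for _ in range(n_attempts):
        # Assumption 2: sampling WITH replacement -- previously generated
        # hypotheses can be (and often are) retrieved again.
        idx = rng.choices(range(n_candidates), weights=strength)[0]
        new_per_attempt.append(idx not in retrieved)
        retrieved.add(idx)
        # Assumption 3: the act of generation strengthens the association,
        # making this hypothesis even more likely to come to mind again.
        strength[idx] += boost
    return new_per_attempt

hits = generate_hypotheses()
early_new = sum(hits[:10])   # new hypotheses among the first 10 attempts
late_new = sum(hits[-10:])   # new hypotheses among the last 10 attempts
```

On average `early_new` exceeds `late_new`: rehearsal of already-generated hypotheses crowds out retrieval of new ones, which is the "thinking block" pattern the model predicts.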

The Consumer Changes Previously Learned Rules

The third way we learn is by adapting or changing previously learned rules to be in accord with existing information. While the initial generation of hypotheses is a fascinating area, it probably accounts for only a small fraction of adult learning. Learning is a dynamic process, and because adults typically have developed a large repertoire of rules that would be applicable to most frequently encountered situations, most of the research has been concerned with the processes underlying the maintenance and revision of these rules over time.

Piaget (1954) addressed this issue in his work on the development of perceptual and intellectual skills. In the early stages of cognitive development, infants accommodate stimulus objects and adjust their mental representations to be consistent with the incoming data. An appropriate metaphor is that of the child as a sponge, absorbing all the information that the environment offers. As the child continues to mature, however, incoming information is increasingly assimilated to existing cognitive schemes (Bartlett's schemata). With assimilation the child deals with environmental events in terms of current structures. Piaget viewed adaptation as the interplay between accommodation and assimilation. Bobrow and Norman (1975) discussed memory schemata in a similar way by distinguishing between data-driven (accommodation) and concept-driven (assimilation) processing. Adult learning is dominated by the assimilation process because we are rarely at a loss for a ready-made rule for most situations. The advantages and liabilities associated with various forms of schematic processing have been well documented in recent years, especially in the social cognition literature (see reviews by Hastie 1981; Taylor and Crocker 1981). Therefore I will only discuss a few studies more directly relevant to learning and hypothesis testing behavior before moving to the question of the persistence of erroneous rules.

Research on multiple cue probability learning has demonstrated that people have a very difficult time learning probabilistic relationships between two or more variables (see Castellan 1977 for a review). Under certain conditions, however, people are able to learn complex rules and relationships when the variables are given realistic labels, such as price and quality (Adelman 1981; Miller 1971; Muchinsky and Dudycha 1975). The necessary condition is that the observed data are congruent with a priori theories that subjects have about how the world works. For example, in detecting a negative correlation between x1 and x2, learning would be greatly facilitated if the variables were labeled price and demand. Here the cue-criterion relationship matches the pre-existing world knowledge of the subjects. Here we see the adaptive advantages of assimilation--a priori theories guiding the perception and understanding of a complex stimulus by providing a ready-made mental structure. Alternatively, if the relationship between the labeled variables violates world knowledge (e.g., price and quality in the case of a negative correlation), learning becomes virtually impossible (Camerer 1981).

It appears that existing hypotheses can be a mixed blessing; they can both promote and hinder perception and cognitive learning. When the observable data are congruent with world knowledge, then existing rules speed the perception process by allowing people to assimilate a large amount of data and interpret it within the framework of well-developed knowledge structures. [A priori theories are similar to the illuminating, often magical character of the analogies and metaphors found in discussions of memory in cognitive psychology (Roediger 1980). We know that memory is not really a wax tablet, cow's stomach, or digital computer, but somehow these concrete prototypes make the abstract theories much more understandable to experts and laymen alike.] However, when the data are incongruent with a person's conception of how the world works, the predominant finding is that people have quite a difficult time accommodating the environment by adjusting their rules accordingly.

In terms of hypothesis testing, there are several lines of research indicating that people search for information that is congruent with their a priori theories. Bruner, Goodnow, and Austin (1956) found that subjects had a "thirst for confirming redundancy" in concept identification tasks. One of the most cited examples of the so-called "verification or confirmation bias" is Wason's (1960) 2-4-6 rule discovery task. Subjects were shown the sequence of numbers 2-4-6 and told that it obeyed a rule that the experimenter had in mind. Subjects generated additional sequences and received feedback as to whether their sequences obeyed the rule. Using this feedback, their task was to specify the correct rule. Wason found that subjects continued to offer sequences that obeyed the rule and confirmed their hypotheses, rather than pursuing a logically superior falsification strategy. Mynatt, Doherty, and Tweney (1977) also found a confirmation bias in a simulated research environment. Snyder and Swann (1978) and Darley and Gross (1983) extended these findings to hypothesis testing in social interaction and labeling. The concern is that if people have a predisposition to always search for information that only confirms their hypotheses and rules, how can consumers possibly learn when their rules are actually not right (Brehmer 1980)?

WHY DO RULES PERSIST?

There seem to be three general reasons why people would maintain their rules about consuming. First, the rules could be right. Now this is not a particularly interesting possibility from a researcher's point of view, but a majority of our rules are probably at least "mostly" right. As Toda (1962) said, "Man and rat are both incredibly stupid in an experimental room. On the other hand . . . man drives a car, plays complicated games and organizes society, and rat is troublesomely cunning in the kitchen." (p. 165) The other two possibilities concern rules that are wrong, either normatively or pragmatically. In one case, the rule is wrong but the consumer doesn't realize it because of never having been exposed to any evidence to the contrary. In the other case, the rule is wrong, the consumer knows it or at least has been exposed to some disconfirming evidence, but the evidence is ignored, disregarded or forgotten. In the latter case, I think there would be little disagreement that such a consumer is biased or in error. However, the former case is more problematic.

One reason that the consumer might not know that the rule is wrong is because of a bias to search for confirming information. If the person never encounters disconfirming evidence, how can we expect him to change his rule? Does this constitute an error? From the perspective of most of those working in judgment, reasoning, or social cognition, the answer would be affirmative because the behavior clearly deviates from the logical model. However, with the resurrection of J.J. Gibson's (1966) ecological approach to perception, others (McArthur and Bacon 1983) would disagree. A simplified version of their argument is that perception serves an adaptive function; therefore if all the biases were really error, how could man survive? They stress the essential accuracy of perception based knowledge/learning (which seems reasonable), but they rely too heavily upon the irrefutability of the natural selection argument (i.e., not every maladaptive trait will immediately be selected out, though in the long run they might). However, they do make the important point that many "errors" could be over-generalizations of highly adaptive perceptual attunements. Taking the cue of the ecological perspective, it is important to view the limitations of consumer learning within commonly encountered decision environments. People do make mistakes, but we also need to understand the conditions where such errors will actually affect performance (see Hoch and Tschirgi 1983 and Hogarth 1981 for discussions of how people can take advantage of redundancy in the environment to improve performance).

Wrong Rules in Spite of the Evidence

Here we include consumers who simply refuse to learn. How common is this type of behavior? We'll start off with the more flagrant cases. Lord, Ross and Lepper (1979) gave people purportedly "objective" evidence, some which supported and some which contradicted the prior theories of their subjects. Not only did subjects ignore the disconfirming evidence, but their attitudes actually became more polarized in favor of their prior theories. I don't really understand why people would behave in such a manner, but it is hard to come up with a non-motivational explanation. Superstitions provide other examples of refusing to learn. Why does the basketball coach continue to wear the same loud sport coat despite the fact that he admits that doing so has nothing to do with whether his team wins or not? First, there is no readily observable cost to doing so (in fact in this case there is an obvious cost savings by keeping the wardrobe to a minimum) and second, people fall back upon the psycho-logic of "not taking chances."

Research on the assessment of contingency and covariation (Crocker 1981) and illusory correlation (Hamilton and Rose 1980) provide other examples of problems in learning in the presence of disconfirming evidence. The overwhelming finding is that people pay too much attention to positive hits and neglect other sources of information, both confirming and disconfirming (Arkes and Harkness 1983). However, in a cross-study reanalysis of several contingency estimation studies where task characteristics varied widely, Lipe (1982) found that all four cells in the 2 x 2 table influenced judgment, though not exactly as specified by the equation for Pearson's rho. Shaklee and Mims (1982) and Arkes and Harkness (1983) found that many people did not rely solely upon positive confirmations but in fact adopted quite sophisticated rules (e.g., comparison of conditional probabilities) when memory load was reduced by presenting the complete 2 x 2 table. Schustack and Sternberg (1982) found that subjects paid attention to both confirming and disconfirming evidence when the task involved causal diagnosis rather than contingency assessment. One reason for this difference may be that the subject is more attuned to consider the counterfactual (i.e., would the effect have occurred if the hypothesized cause had not) in the causal reasoning task.
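The gap between the positive-hits heuristic and the comparison of conditional probabilities can be made concrete with a hypothetical 2 x 2 table. The cell counts below are invented for illustration, not data from any of the studies cited:

```python
def positive_hits(a, b, c, d):
    """Naive strategy: judge contingency only from cell a, the count of
    confirming co-occurrences (cause present, effect present)."""
    return a

def conditional_probability_difference(a, b, c, d):
    """More sophisticated strategy: compare P(effect | cause) against
    P(effect | no cause).  Cell layout of the 2 x 2 table:
        a = cause & effect        b = cause & no effect
        c = no cause & effect     d = no cause & no effect"""
    return a / (a + b) - c / (c + d)

# Hypothetical data: "I serve potatoes, my husband is happy."
# 30 happy-with-potatoes nights look like strong confirmation...
a, b, c, d = 30, 10, 24, 8
delta = conditional_probability_difference(a, b, c, d)
# ...but P(happy | potatoes) = 30/40 = 0.75 and
# P(happy | no potatoes) = 24/32 = 0.75, so delta = 0.0:
# no contingency at all, despite 30 "positive hits".
```

The same cell counts thus support opposite conclusions depending on which rule the judge applies, which is why reducing memory load by showing the full table changes behavior.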

The fact that consumers do not learn from disconfirming evidence could also be due to differential encoding of outcomes. Estes (1976) found selective encoding of outcomes in a sequential task, where confirming instances were better remembered than their disconfirming counterparts. In many instances, disconfirming data could actually be considered irrelevant to the validity of the rule. Consider the wife who has the rule "I never serve fish because my husband hates it." If she sees her husband eat fish at a friend's house, should she change her rule or treat the behavior as a minor exception to the rule attributable to politeness or lack of other alternatives? If she has observed several confirming instances of the husband-hates-fish rule in the past, why should she change the rule in response to seemingly irrelevant evidence? This type of ex-post redefinition of evidence is probably more prevalent than we might first imagine (e.g., hindsight bias, Fischhoff 1975), especially as outcomes are further separated in time from the original actions of the consumer. Consumer rules are likely to be fuzzy (Zadeh 1965) and this elasticity in specification makes the categorization of instances as confirming, disconfirming, or irrelevant, quite problematic. Moreover, the encoding of evidence will be at least partially controlled by a priori theories, with a tendency for consumers to give their theories the benefit of the doubt. To the native who believes in the power of his shaman, a rain dance at 10 A.M. during monsoon season that is followed by rain that evening probably would be encoded as confirmation of mystical powers. To the Western missionary, it serves as another example of primitive hokum.

A final reason why rules may not be abandoned in the face of conflicting evidence is that no better alternatives are available. This is actually Kuhn's (1983) notion of the progression of a science. Disconfirmation of a theory does not imply its automatic replacement (Einhorn and Hogarth 1983). The practice is common in many disciplines (e.g., utility theory as a basis for individual behavior in economics). It also occurs in the consumer realm, where people maintain "consumer myths" (Levy 1981) that provide a "logical" model capable of overcoming contradictions or paradoxes in natural and social experience. Although disconfirming evidence does not have to lead to rule/theory discovery and replacement, finding that your rule is wrong can be extremely therapeutic. In fact generating alternative hypotheses is virtually impossible without some disconfirming instances upon which to base your inductive efforts.

Let's go back to Wason's (1960) 2-4-6 task. The rule in this case is very general: any three ascending numbers. The problem is tricky because the very sequence 2-4-6 suggests so many plausible rules (e.g., even numbers, A + B = C, etc.), all of which generate sequences that coincidentally obey the true rule. Getting negative evidence in this task is very difficult without random generation. In a study I conducted on the task a couple of years ago, I found that people had a lot of trouble coming up with new hypotheses to test when each sequence they generated continued to obey the rule. However, as soon as they were fortuitous enough to generate a disconfirming instance, a flood of new hypotheses came to mind and subjects were able to solve the problem fairly quickly. It could be that the disconfirming instance acts as a new retrieval cue that helps to overcome the interference of the previously generated confirming instances; subjects can now discard all old hypotheses and begin anew. Subjects find themselves "between a rock and a hard spot" in this task. It is hard to generate disconfirming evidence without an alternative hypothesis, but difficult to generate an alternative hypothesis without some disconfirming instances from which to induce a new rule.
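A small simulation shows why confirmatory triples are so uninformative in this task. The true rule is Wason's; the candidate hypotheses are my own illustrative choices of rules a subject might induce from seeing 2-4-6:

```python
def true_rule(seq):
    """Wason's actual rule: any three ascending numbers."""
    return seq[0] < seq[1] < seq[2]

# Plausible hypotheses a subject might induce from the seed triple 2-4-6.
hypotheses = {
    "even numbers ascending": lambda s: all(x % 2 == 0 for x in s) and true_rule(s),
    "arithmetic, step 2":     lambda s: s[1] - s[0] == 2 and s[2] - s[1] == 2,
    "middle is the average":  lambda s: 2 * s[1] == s[0] + s[2],
}

# Confirmatory testing: offer triples that OBEY your hypothesis.
confirming = [(2, 4, 6), (10, 12, 14), (100, 102, 104)]
for seq in confirming:
    # Every hypothesis above, and the true rule, say "yes" -- so the
    # experimenter's feedback cannot separate the wrong rules from the
    # right one, no matter how many such triples are offered.
    assert all(h(seq) for h in hypotheses.values())
    assert true_rule(seq)

# Disconfirmatory testing: offer a triple that VIOLATES your hypothesis.
probe = (1, 2, 3)            # breaks "even numbers" and "step 2"
feedback = true_rule(probe)  # experimenter still says "yes": ascending!
# That single unexpected "yes" falsifies two hypotheses at once.
```

The confirming triples all pass every candidate rule, so feedback on them carries no information; only the hypothesis-violating probe discriminates, which is exactly the disconfirming instance that freed subjects to generate new hypotheses.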

Wrong Rule and Ignorance of Evidence to the Contrary

When does a consumer not have access to evidence that a rule is not correct? One possibility is when the learned rule guides a behavior pattern that precludes the observation of disconfirming events (Einhorn and Hogarth 1978). A prime example is the real estate agent with the rule: people in double-knit leisure suits are not good prospects. The rule could easily lead to a self-fulfilling prophecy where the prospect will likely not receive the class of service afforded the Burberry-suited gentleman.

The consumer also will not encounter contradictory evidence when the rule works. The rule may be pragmatically but not logically correct. Schwartz (1982) conducted a series of experiments examining the effects of contingent reinforcement on both simple and complex response sequences in college students. To obtain reinforcement, subjects had to respond by using one of seventy possible successful sequences. The students, just like pigeons, developed stereotyped behavior patterns even though reinforcement did not require stereotypy. This stereotypy also interfered with their ability to discover the underlying rule because of little disconfirming evidence once a rule that worked had been discovered. When subjects were explicitly instructed to find the rule, stereotypy did not develop. However, if subjects had previously been rewarded for successful outcomes, rule discovery was nigh impossible. This suggests that if negative feedback is not received early on after the adoption of an erroneous rule, then it may be very difficult to change the rule because stereotyped behavior may preclude the observation of disconfirming instances. Reinforcement seems to teach organisms to repeat precisely what has worked in the past (superstition) and this stereotyped behavior interferes with the ability to uncover generalizations. Brand loyalty and buyer inertia provide examples of stereotypy; however, variety-seeking (McAlister and Pessemier 1982) seems just the opposite, where consumers should have access to plenty of disconfirming information. It is worth pointing out that behavior stereotypy will provide the biggest obstacle to learning the true rule when the rule is sufficient but not necessary. In all practicality, however, people could do a lot worse than maintaining erroneous but sufficient rules. The old political adage, "If it works, don't mess with it" probably is good advice.

The experimental evidence (Schwartz 1982; Tschirgi 1980) seems to indicate that man is not naturally predisposed to engage in hypothesis testing. The reason is that such testing must occur within the context of ongoing behavior. People must live with the results, and if their goal is continued success, they will not be so interested in proving an abstract point. While we might consider such behavior shortsighted, it is not clear that we want to call it stupid or all that erroneous. For the working housewife/husband, it may be more rational to serve the six year old exactly what he wants to eat (hot dogs) when food preparation time is at a premium. There is a practical trade-off between making sure the child has something to eat rather than finding out all the foods he might eat through a potentially painful experimentation process. Horton (1967) calls these "mixed motive" situations, where there is a desire to uncover true generalizations at the same time as satisfying practical constraints. He presents the example of an African farmer who is willing to do an experiment to improve crop yield. If the new method (X) works, however, the farmer is not particularly interested in doing not (X) to see if it leads to lower crop yield. Counterfactual reasoning is not practical here and in fact is not a part of traditional African thought patterns.

One should not go away with the impression that people have absolutely no desire to engage in hypothesis testing behavior. Instead, it is the case that when the rule works, people will probably pursue a confirmatory strategy if they do bother to test, the most sophisticated of which is holding the hypothesized variable constant and varying contexts. Although Wason and Johnson-Laird (1972) and many others imply that this is an undesirable bias, it seems far superior to the pattern of passive reinforcement-induced behavioral stereotypy that could develop. While this strategy can increase external validity (generalizability), it tells us nothing about internal validity, which can only be addressed with a disconfirmation strategy of varying one thing at a time. Do we accept the Bruner, et al. (1956) conclusion that people have an inherent thirst for confirming evidence? I think this question deserves a "qualified yes," qualified by the fact that people may often be operating in friendly environments where it is quite hard to find disconfirming evidence. Einhorn and Hogarth (1978), in their elegant treatment of overconfidence in judgment, make a similar observation when discussing base rates and selection ratios. How hard can it be to admit good students if you're the admissions officer at Stanford? A more literal example of the effect of a friendly environment upon rule persistence is provided by Davis and Ragsdale (1983) in their study of consumers' accuracy and confidence in predicting the preferences of their spouses for new product concepts. Let's say that you're a husband who has the theory that "I know the kinds of clothes my wife likes." The reason you hold that belief is that every time you buy her a present, she tells you how much she likes it. Now, if you were really interested in testing the validity of your rule, you could buy her something that you think she would hate and observe her reaction. But life is too short to prove such a silly point with such unpleasant consequences. Instead you will probably maintain your rule because your wife will continue to provide a friendly environment full of the confirmation provided by a series of gentle white lies motivated by her not wanting to hurt your feelings or get into a fight.

Tschirgi (1980) argued that hypothesis testing behavior is most likely to be engaged when (1) something goes wrong or bad, or (2) something unexpectedly turns out right. In these cases, people may adopt what Van Duyne (1974) termed a detective set as they look for the culpable variable. Tschirgi found that hypothesis testing strategies chosen by both adults and children were dependent upon the outcomes associated with the to-be-tested rule. When the outcomes were good (e.g., the cake was moist and the hypothesized cause was the ingredient honey), then subjects chose a confirmatory strategy of holding the suspected cause constant and varying the other variables. However, when the outcome was bad (e.g., a runny cake due to the honey), subjects adopted the logical disconfirming test of varying the hypothesized cause and holding everything else constant. Clearly these results seem driven by the pragmatic concern of subjects to reproduce positive outcomes and eliminate negative outcomes, a strategy that Gibson (1966) would see as ecologically adaptive though not normative. Tschirgi's work indicates that consumers may be capable of learning through hypothesis testing when it really matters even if they don't do so for exactly the "right" reasons. This view of the adaptiveness of hypothesis testing may be a bit optimistic, especially in cases where something unexpectedly goes right and you want to figure out why so that you can reproduce the outcome. Because of the good outcome, the likely strategy would be confirmatory, holding the hypothesized cause constant and varying context. Unfortunately, this would not tell you how or why the outcome occurred. Moreover, if the consumer cannot even generate a hypothesis, precise repetition may occur (superstitious stereotypy).

MARKETING STRATEGY AND CONSUMER HYPOTHESIS TESTING

Marketers can take advantage of how consumers learn through hypothesis testing in a couple of different ways. First, they may want to prevent learning so that consumers maintain hypotheses and rules that are currently favorable to their products. Clearly this is a defensive posture and, according to those who continually point out the frailty and durability of human judgment (e.g., Brehmer 1980; Nisbett and Ross 1980), should be easy to implement. I'm sure that P&G and IBM wished that this were more easily done than said. Second, marketers may try to help consumers learn new rules or change existing ones, a more offensive strategy.

Preventing Learning

Firms can defend their market position by capitalizing upon the tendency to assimilate most information to a priori theories or schemes. In the retail trade it is rumored that K-Mart purposefully maintains their stores in a picked-over, sloppy condition to reinforce the belief that they offer no-frills low prices. An upscale linen store recently opened a "warehouse outlet," and to promote the "wholesale" prototype, they went to great pains to have the right look--drab gray industrial steel shelving stacked to the ceiling, instead of using some spare glass cabinets they used in their regular store. Marketers need to continually reinforce consumers' buying rationales so that when disconfirming evidence is encountered it is interpreted within that favorable framework (see Deighton 1983). When Zenith started getting killed by the Japanese because of their high priced manufacturing facilities, they tried to instantiate the "hand-crafted quality" belief. If it had worked, then consumers would have interpreted discrepant competitor claims of lower price due to efficiency as confirmations of Zenith quality. Clearly it didn't work in the long run, but the "made in Japan implies lower quality" hypothesis was a consumer rule in very good standing during the 1950s-1960s. (I wonder whether the rule was as untrue then as it appears now.)

Other defensive strategies can be aimed at promoting behavioral stereotypy (buying inertia) and preventing the exposure to disconfirming evidence. While it is impossible to keep consumers from hearing competitive claims, you can encourage confirmatory hypothesis testing. Marketers can stress the "why change if it works" logic and point to the unnecessary risks associated with experimenting. This is a common practice in industrial selling situations where purchasing agents must trade off a lower price and a good service/dependability record ("Sure their product is cheaper, but does it work every time?").

Encouraging Learning

When Lee Iacocca told consumers to buy a better car if they could find one, he was challenging people to actively gather information that would disconfirm the widely held hypothesis that Chrysler was about to go down the tubes. Clearly, GM would not have wanted to encourage such disconfirmation, since existing beliefs about GM were already favorable. Chrysler had nothing to lose, GM nothing to gain. The Schlitz live-TV taste tests were a similar attempt to provide disconfirming information to loyal Bud and Miller drinkers who would normally never observe such evidence on their own because they would never bother to experiment.

Many advertisers already seem aware of how stereotyped behavior patterns prevent learning. Practices such as couponing and free samples can be viewed as attempts to break that "unthinking" (Deighton 1983) behavior by providing alternative evidence. Marketers face a tough situation when trying to make consumers take heed and learn from disconfirming information. Many new entrants come into a market where consumers are already quite happy with existing products. We know that under these circumstances consumers are not going to be all that attuned to active hypothesis testing. If they are already satisfied, why take chances and change? A great example of one possible strategy is that followed by Stove-Top Stuffing. Here the unsuspecting wife finds out that she holds a wrong rule about her husband's preference for starch. Moreover, she discovers that the reason she maintains this rule is that she has engaged in confirmatory hypothesis testing (how embarrassing) by always serving potatoes, which indeed her husband does like. This super-rational ad is intellectually very appealing; I hope it worked in practice, though it may have been too subtle, or people may have denied its personal relevance (especially given the strength of the "I know my spouse" hypothesis).

In conclusion, this paper has tried to make a few simple points about hypothesis testing in general and its relevance to consumer learning in particular. First, people are not as pathetically biased in their hypothesis testing behavior as sometimes portrayed (Nisbett and Ross 1980; Wason and Johnson-Laird 1972). In many situations they actually follow the normatively prescribed pattern (disconfirmation instead of confirmation), though usually for pragmatic rather than logical reasons (Tschirgi 1980). Moreover, while a confirmatory testing strategy is not as good as one motivated by falsification, it is a whole lot better than passive behavioral stereotypy (Schwartz 1982). Marketers can benefit from an understanding of the characteristics of decision environments that lead to various patterns of hypothesis testing.
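The contrast between confirmatory and disconfirmatory testing can be made concrete with a sketch of Wason's (1960) 2-4-6 task, cited above. The following Python sketch is purely illustrative; the specific rules and probe triples are hypothetical examples, not taken from the original studies. It shows why a subject who tests only triples her hypothesis predicts will fit never discovers that the hypothesis is too narrow, while a single disconfirming probe does the job.

```python
# Illustrative sketch of Wason's (1960) 2-4-6 task. The experimenter's
# rule and the subject's hypothesis below are hypothetical examples.

def true_rule(triple):
    """Experimenter's actual rule: any strictly ascending triple."""
    a, b, c = triple
    return a < b < c

def my_hypothesis(triple):
    """Subject's overly narrow hypothesis: even numbers ascending by 2."""
    a, b, c = triple
    return a % 2 == 0 and b == a + 2 and c == b + 2

# Confirmatory strategy: test only triples the hypothesis says should fit.
confirm_probes = [(2, 4, 6), (8, 10, 12), (20, 22, 24)]
# Every probe fits both rules, so the feedback never contradicts the
# hypothesis -- the subject stays confidently wrong.
assert all(true_rule(t) and my_hypothesis(t) for t in confirm_probes)

# Disconfirmatory strategy: test a triple the hypothesis says should fail.
probe = (1, 3, 17)
assert not my_hypothesis(probe)  # hypothesis predicts "no"
assert true_rule(probe)          # experimenter answers "yes"
# The surprising "yes" falsifies the hypothesis, and learning can occur.
```

The analogy to the Stove-Top example is direct: serving potatoes every night is a confirmatory probe that can never reveal that the "meat and potatoes" hypothesis is too narrow.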

REFERENCES

Adelman, L. (1980) "The Influence of Formal, Substantive, and Contextual Task Properties on the Relative Effectiveness of Different Forms of Feedback in Multiple-Cue Probability Learning Tasks," Organizational Behavior and Human Performance, 27, 423-427.

Arkes, H. R. and Harkness, A. R. (1983) "Estimates of Contingency Between Two Dichotomous Variables," Journal of Experimental Psychology: General, 112, 117-135.

Bobrow, D. G. and Norman, D. A. (1975) "Some Principles of Memory Schemata," in D. G. Bobrow and A. Collins (Eds.), Representation and Understanding: Studies in Cognitive Science, New York: Academic Press.

Brehmer, B. (1980) "In One Word: Not from Experience," Acta Psychologica, 45, 223-241.

Bruner, J. S., Goodnow, J. J. and Austin, G. A. (1956) A Study of Thinking, New York: Wiley.

Campbell, D. T. (1960) "Blind Variation and Selective Retention in Creative Thought as in Other Knowledge Processes," Psychological Review, 67, 380-400.

Camerer, C. (1981) The Validity and Utility of Expert Judgment, unpublished dissertation, University of Chicago.

Castellan, N. J. (1977) "Decision Making with Multiple Probabilistic Cues," in N. J. Castellan, D. B. Pisoni, and G. R. Potts (Eds.), Cognitive Theory, Vol. 2, Hillsdale, N.J.: Erlbaum.

Crocker, J. (1981) "Judgment of Covariation by Social Perceivers," Psychological Bulletin, 90, 272-292.

Darley, J. M. and Gross, P. H. (1983) "A Hypothesis-Confirming Bias in Labeling Effects," Journal of Personality and Social Psychology, 44, 20-33.

Davis, R. T. and Ragsdale, E.K.E. (1983) "Limitations and Biases in Perceiving Others," working paper, Graduate School of Business, University of Chicago.

Deighton, J. (1983) "How to Solve Problems that Don't Matter: Some Heuristics for Uninvolved Thinking," in R. P. Bagozzi and A. M. Tybout (Eds.), Advances in Consumer Research, Vol. X, Ann Arbor: Association for Consumer Research.

Einhorn, H. J. and Hogarth, R. M. (1978) "Confidence in Judgment: Persistence of the Illusion of Validity," Psychological Review, 85, 395-416.

Einhorn, H. J. and Hogarth, R. M. (1983) "A Theory of Diagnostic Inference: Judging Causality," working paper, Center for Decision Research, University of Chicago.

Estes, W. K. (1976) "The Cognitive Side of Probability Learning," Psychological Review, 83, 37-64.

Fischhoff, B. (1975) "Hindsight/Foresight: The Effect of Outcome Knowledge on Judgments Under Uncertainty," Journal of Experimental Psychology: Human Perception and Performance, 1, 288-299.

Gettys, C. F. and Fisher, S. D. (1979) "Hypothesis Generation and Plausibility Assessment," Organizational Behavior and Human Performance, 24, 93-110.

Gibson, J. J. (1966) The Senses Considered as Perceptual Systems, Boston: Houghton-Mifflin.

Hamilton, D. L. and Rose, T. L. (1980) "Illusory Correlation and the Maintenance of Stereotypic Beliefs," Journal of Personality and Social Psychology, 39, 832-845.

Hastie, R. (1981) "Schematic Principles in Human Memory," in E. T. Higgins, C. P. Herman and M. P. Zanna (Eds.), Social Cognition, Hillsdale, N.J.: Erlbaum.

Hilgard, E. R. and Bower, G. H. (1966) Theories of Learning, New York: Appleton-Century-Crofts.

Hoch, S. J. (1983) "The Effects of Interference in Hypothesis Generation upon Predictive Judgment," working paper, Center for Decision Research, University of Chicago.

Hoch, S. J. and Tschirgi, J. E. (1983) "Cue Redundancy and Extra Logical Inference in a Deductive Reasoning Task," Memory and Cognition, 11, 200-209.

Hogarth, R. M. (1981) "Beyond Discrete Biases: Functional and Dysfunctional Aspects of Judgmental Heuristics," Psychological Bulletin, 90, 197-217.

Horton, R. (1967) "African Traditional Thought and Western Science," Africa, 37, 50-71; 155-187.

Kuhn, T. S. (1970) The Structure of Scientific Revolutions, Chicago: University of Chicago Press.

Lachman, R., Lachman, J. L. and Butterfield, E. C. (1979) Cognitive Psychology and Information Processing, Hillsdale, N.J.: Erlbaum.

Levy, S. J. (1981) "Interpreting Consumer Mythology: A Structural Approach to Consumer Behavior," Journal of Marketing, 45, 49-61.

Lipe, M. (1982) "A Cross-Study Analysis of Covariation Judgments," working paper, Graduate School of Business, University of Chicago.

Lord, C., Ross, L., and Lepper, M. (1979) "Biased Assimilation and Attitude Polarization: The Effect of Prior Theories on Subsequently Considered Evidence," Journal of Personality and Social Psychology, 37, 2098-2109.

McAlister, L. and Pessemier, E. (1982) "Variety Seeking Behavior: An Interdisciplinary Review," Journal of Consumer Research, 9, 311-322.

McArthur, L. Z. and Baron, R.M. (1983) "Toward an Ecological Theory of Social Perception," Psychological Review, 90, 215-238.

Miller, P. McC. (1971) "Do Labels Mislead? A Multiple-Cue Study, Within the Framework of Brunswik's Probabilistic Functionalism," Organizational Behavior and Human Performance, 6, 480-500.

Muchinsky, P. M. and Dudycha, A. L. (1975) "Human Inference Behavior in Abstract and Meaningful Environments," Organizational Behavior and Human Performance, 13, 377-391.

Mynatt, C. R., Doherty, M. E., and Tweney, R. D. (1977) "Confirmation Bias in a Simulated Research Environment: An Experimental Study of Scientific Inference," Quarterly Journal of Experimental Psychology, 29, 85-95.

Nisbett, R. and Ross, L. (1980) Human Inference: Strategies and Shortcomings of Social Judgment, Englewood Cliffs, N.J.: Prentice-Hall.

Piaget, J. (1954) The Construction of Reality in the Child, trans. M. Cook, New York: Basic Books.

Roediger, H. L. (1980) "Memory Metaphors in Cognitive Psychology," Memory and Cognition, 8, 231-246.

Rundus, D. (1973) "Negative Effects of Using List Items as Recall Cues," Journal of Verbal Learning and Verbal Behavior, 12, 43-50.

Schustack, M.W. and Sternberg, R. J. (1981) "Evaluation of Evidence in Causal Inference, Journal of Experimental Psychology: General, 110, 101-120.

Schwartz, B. (1982) "Reinforcement-Induced Behavioral Stereotypy: How Not to Teach People to Discover Rules," Journal of Experimental Psychology: General, 111, 23-59.

Shaklee, H. and Mims, M. (1982) "Sources of Error in Judging Event Covariation," Journal of Experimental Psychology: Learning, Memory, and Cognition, 8, 208-224.

Shiffrin, R. M. (1970) "Memory Search," in D. A. Norman (Ed.), Models of Human Memory, New York: Academic Press.

Snyder, M. and Swann, W. B. (1978) "Hypothesis-Testing in Social Interaction," Journal of Personality and Social Psychology, 36, 1202-1212.

Taylor, S. E. and Crocker, J. (1981) "Schematic Bases of Social Information Processing," in E. T. Higgins, C. P. Herman, and M. P. Zanna (Eds.), Social Cognition, Hillsdale, N.J.: Erlbaum.

Toda, M. (1962) "The Design of a Fungus-Eater: A Model of Human Behavior in an Unsophisticated Environment," Behavioral Science, 7, 164-183.

Tschirgi, J. E. (1980) "Sensible Reasoning: A Hypothesis about Hypotheses," Child Development, 51, 1-10.

van Duyne, D. C. (1974) "Realism and Linguistic Complexity in Reasoning," British Journal of Psychology, 65, 59-67.

Wason, P. C. (1960) "On the Failure to Eliminate Hypotheses in a Conceptual Task," Quarterly Journal of Experimental Psychology, 12, 129-140.

Wason, P. C. and Johnson-Laird, P. N. (1972) Psychology of Reasoning: Structure and Content, Cambridge, MA: Harvard University Press.

Zadeh, L. A. (1965) "Fuzzy Sets," Information and Control, 8, 338-353.
