Consumers' Assessment of Covariation

James R. Bettman, Duke University
Deborah Roedder John, University of Wisconsin
Carol A. Scott, University of California, Los Angeles
ABSTRACT - The importance of assessment of covariation in understanding consumer learning is stressed. A hypothesis-testing view of learning is presented, and the role of covariation judgments in such a view is outlined. The steps in the process of covariation assessment proposed by Crocker are then examined, and these steps are applied to judgments of price-quality relationships. Finally, some preliminary empirical data on perception of covariation in rank-order data are presented.

Advances in Consumer Research Volume 11, 1984      Pages 466-471




Some of the most important issues in understanding consumer behavior involve learning. Answers to questions such as how consumers learn from their experiences with products, how new beliefs are formed or current beliefs changed, and how prior knowledge influences ongoing processing are crucial to comprehending the dynamics of consumer behavior. Despite the importance of the topic, however, consumer learning has not been treated very satisfactorily from a cognitive perspective. While discussions of stimulus-response approaches such as classical and instrumental conditioning abound, cognitive approaches to consumer learning have not been addressed in any detail.

The purpose of this paper is to redress this imbalance by presenting some initial ideas and data regarding the role of assessment of covariation in consumer learning. Assessment of covariation refers to the processes through which individuals judge the relationships between events or concepts. As Crocker (1981) points out, such knowledge of relationships is a crucial component of learning, helping individuals to explain, control, and predict their environments. The organization of the paper is as follows: first, a brief outline of a hypothesis-testing view of learning is presented to provide a context for the following discussion. Then the steps involved in the assessment of covariation are considered, including potential biases and an application to the example of judgments of the relationship between price and quality. Finally, a preliminary empirical investigation of individuals' abilities to perceive covariation in rank-ordered data is presented.


The major premise in a hypothesis-testing view of learning is that consumers have hypotheses about products and the marketplace which they are constantly forming, assessing, and adjusting. For example, consumers may have hypotheses that price and quality are related, that a particular brand tastes good, that advertisements sometimes exaggerate, that automobile salespeople can't be trusted, or that a typical 35 mm camera is bulky. Thus, hypotheses could include ideas about relationships, beliefs, attributions, or prototypical products.

A general model of hypothesis-testing and learning might include the following components: 1) assessment of the adequacy of the current hypothesis (if one even exists); 2) information acquisition; 3) information interpretation; 4) revision of existing hypotheses or formation of new ones; and 5) use of hypotheses to guide behavior. An individual may assess whether his or her current hypothesis (if one exists) is adequate. Such an assessment may be prompted by new information (either obtained by searching for it or incidentally), uncertainty, or contradictory information. If a hypothesis exists and is felt to be adequate, it can be used to guide behavior. For example, if the consumer hypothesizes that price and quality are related, he or she may only search for and examine high-priced items if quality is an important consideration.

If the hypothesis is felt to be inadequate or if no current hypothesis exists, the consumer may acquire information, either through direct experience with the product or through communication (e.g., word of mouth, mass media). The information is then interpreted, and the individual may then revise a current hypothesis or form a new one.

The discussion above is very brief and is not intended to provide a detailed specification of a hypothesis-testing view of learning, but rather a general context for the ensuing discussion (for a more detailed view of related issues, see Fischhoff and Beyth-Marom, 1983). However, one aspect of the model needs further examination. As presented, the model depicts the consumer as a naive scientist. It is doubtful that consumers devote such effort to many tasks. As Deighton (1983) points out in a fascinating article, there may be two modes of hypothesis testing: the "naive scientist" mode outlined above, and a less rigorous mode subject to many biases of information acquisition and interpretation, which he calls "schematic inquiry." While the "naive scientist" mode may be used for very important decisions, the latter mode probably characterizes the majority of non-involving consumer decisions. Hypotheses may be formed based largely on easily available or highly salient information (e.g., hypotheses may be taken directly from advertising) and then may not be subjected to critical evaluation (see Deighton (1983) for a more detailed discussion of this less rigorous mode). Hence, arguing for a hypothesis-testing view of learning does not commit one to untenable assumptions about the degree of effort exerted by consumers.

The processes of formation and revision of hypotheses are central to the view of learning presented above. Further, while not all hypotheses require assessment of covariation, many do. Such hypotheses as "Brand X tastes saltier than Brand Y" may not involve covariation assessment to any great extent. However, hypotheses about the degree of relationship between price and quality, between the size of a car and ride comfort, or hypotheses concerning the relation between using fluoride toothpaste and having fewer cavities all require judgments of covariation. Since covariation assessment is thus an important component of formation or change for many types of hypotheses, the process of covariation assessment is now examined in more detail.


In an outstanding review of the literature on assessment of covariation, Crocker (1981) outlines six steps in this process: 1) Deciding what data are relevant; 2) Sampling cases; 3) Interpreting the cases; 4) Recalling the data that have been collected and estimating the frequencies of confirming and disconfirming cases; 5) Integrating the evidence; and 6) Using the estimates of covariation to make predictions or judgments. The steps proposed obviously overlap considerably with those suggested for the hypothesis testing view, with further detail added regarding information acquisition and interpretation. A brief description of each step, the potential biases involved in that step, and the application of these ideas to judgments of the relationship between price and quality is now presented; for a detailed discussion, see Crocker (1981).

To assess covariation, one must decide what data are relevant to the relationship. The major bias at this step appears to be a belief that positive confirming cases are more relevant than other cases. For example, if a consumer hypothesized that high-priced items were of higher quality, high-priced, high-quality items would be positive confirming cases; low-priced, low-quality items would be negative confirming cases; and the other two combinations would be disconfirming cases. Crocker (1982) shows that positive confirming cases are seen as most relevant, with negative confirming cases seen as least relevant. Other researchers have also reported a similar positive confirmatory bias (e.g., Snyder and Swann 1978; Snyder 1981), although recent results question these findings or their interpretation (Trope and Bassok 1982; Fischhoff and Beyth-Marom 1983).
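The fourfold classification described above can be sketched in code. The following is an illustrative Python sketch (the labels and the sample data are the authors' conceptual categories, not data from the paper) of how observed items would be sorted into positive confirming, negative confirming, and disconfirming cases for the hypothesis that high price implies high quality:

```python
# Illustrative sketch: classifying observed price-quality cases for the
# hypothesis "high-priced items are of higher quality".
# The sample data below are hypothetical.

def classify_case(high_price: bool, high_quality: bool) -> str:
    """Return the evidential category of one observed item."""
    if high_price and high_quality:
        return "positive confirming"   # directly fits the hypothesis
    if not high_price and not high_quality:
        return "negative confirming"   # also consistent with the hypothesis
    return "disconfirming"             # high price/low quality, or the reverse

# Tally a small hypothetical sample of (high_price, high_quality) observations
sample = [(True, True), (True, False), (False, False), (True, True), (False, True)]
counts = {}
for price, quality in sample:
    label = classify_case(price, quality)
    counts[label] = counts.get(label, 0) + 1
print(counts)
```

Crocker's (1982) finding, in these terms, is that people treat the "positive confirming" cell as most relevant and the "negative confirming" cell as least relevant, even though all four cells bear on the hypothesis.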

After deciding which data are relevant, the individual must sample cases. Crocker notes that the major difficulties in this step involve biased sampling or very small samples. The individual who believes price and quality of clothing are related and hence only shops in higher-priced boutiques sees a biased sample of clothing. Also, individuals who have tried very few brands in a category may still feel very confident in their assessments of the price-quality relationship for that category.

After data have been observed, they must be classified as confirming or disconfirming. This is not always straightforward, as the interpretation of outcomes is often ambiguous. This is particularly true where prior expectations are involved. In many cases, consumers may possess strong expectations regarding taste (e.g., beers, colas, margarine vs. butter), smell (e.g., perfumes), or some other aspect of quality which is difficult to judge. In such cases, those expectations may guide the classification of instances when quality is hard to assess. That is, if one expects a high-priced item to be of high-quality, it may be consistently classified as high in quality if the outcomes are ambiguous. If one expects beer X to taste better than beer Y, one may perceive this to be true, even if expert beer tasters or chemical analyses might argue otherwise.

Before an individual can combine the data, he or she must estimate the frequencies of confirming and disconfirming cases. Crocker (1981) notes that information which conforms to one's expectations and information which is distinctive tends to be better remembered, and hence overweighted in assessing covariation. Thus, the occurrence of confirmatory information, being consistent with expectations, may be overestimated. The most striking demonstrations of this phenomenon are the studies of 'illusory correlation' (e.g., Chapman and Chapman 1967; 1969; Hamilton and Rose 1980). These studies show that variables which have strong prior positive associations are perceived to be positively related even in data sets where the actual degree of relationship has been manipulated to be zero or even negative. Hence, if a consumer believed price and quality were positively related, he or she might recall a positive relationship even if the available data (e.g., from prior sampling or Consumer Reports) did not support such a relationship.

Studies of the integration of the data and the accuracy of the resulting covariation estimates have been largely limited to two cases: relationships between continuous variables, and relationships between binary variables. As Crocker (1981) notes, the findings for these two areas give differing views of individuals' abilities as covariation assessors. In the case of continuous variables, judgments appear to be relatively accurate, with actual and judged correlations being highly ordinally related (e.g., Beach and Scopp 1966; Erlick and Mills 1967; Jennings, Amabile, and Ross 1980). On the other hand, judgments of relationships between binary variables have generally not been accurate. Positive confirming cases tend to be overemphasized in judging degree of relationship (Jenkins and Ward 1965; Smedslund 1963). However, if detailed instructions (Alloy and Abramson 1979) or easy-to-process formats (Ward and Jenkins 1965) are provided, subjects can be accurate. These results are for cases where strong prior expectations did not exist. Prior expectations would bias estimates, as noted previously. For example, when assessing price-quality relationships, much of the available data may be ordinal, especially for quality. Since judging price-quality relations may thus involve assessing covariation in rank-order data, the results reviewed above provide little guidance for predicting whether consumers could accurately assess price-quality relationships even if prior expectations could somehow be controlled.

Finally, individuals may use their estimates of covariation to aid them in making decisions. Biases which may arise at this step of the process include an overemphasis on case history information, the tendency for non-regressive predictions, and confusion between covariation and causation (Crocker 1981).

The above discussion provides some description of the process of covariation assessment. It also helps to explain why some consumers may persist in the belief that price and quality are related, even for categories where there may be no such relationships (see Riesz 1979; Sproles 1977 for attempts to estimate price-quality relationships for various product categories). As noted above, there are several potential areas for bias. Consumers may sample only higher priced items. If they have strong expectations that these items will be of high quality, these expectations can bias classification of ambiguous outcomes. Consumers with strong prior beliefs that price and quality are related may also be subject to the illusory correlation phenomenon, even if they are exposed to potentially disconfirming data. Thus, there are many possible explanations for the persistence of beliefs regarding relationships between price and quality. Similar arguments to those above could also be made for the persistence of the belief that price and quality are not related, even for categories where there was indeed a relationship.

The discussion above attempts to demonstrate that an understanding of the covariation assessment process is crucial for developing a hypothesis-testing view of learning. Whether consumers are able to estimate covariation using marketplace data thus becomes an important issue. For price-quality and many other typical relationships important to consumers, the data may be rank-order. Unfortunately, there has been no work done on individuals' abilities to estimate covariation in rank-order data, even for the most general case where no prior expectations about the degree of relationship exist. Accordingly, a preliminary investigation was carried out to examine the ability of individuals to estimate rank-order covariation for pure rank-order data (i.e., data where no potentially biasing prior associations existed).


Overview and Hypotheses

The main goal of this study was to examine individuals' assessments of rank-order covariation in the simplest case, where no prior expectations existed. Hence, subjects were given four sets of rank-ordered data. Each set had ten cases, with two attributes for each case. Two of the sets had Spearman rank correlations near zero, and two had rank correlations near .6. The four sets of data are shown in Table 1. The attributes were simply labelled X and Y, to eliminate prior expectations. There was no mention of price-quality or any other specific relationship. Subjects were then asked to provide an estimate of the degree of covariation for each of the four sets. An obvious first hypothesis is that subjects should give sets with high levels of actual correlation higher ratings than sets with low levels of actual correlation.
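The stimulus construction rests on the Spearman rank correlation. Since the Table 1 rankings are not reproduced here, the following sketch uses illustrative rankings to show how the rank correlation of a ten-case, two-attribute set would be computed (using the standard formula for untied rankings):

```python
# Illustrative sketch: Spearman rank correlation for untied rankings,
#   rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
# where d_i is the difference between a case's ranks on the two attributes.
# The rankings below are made up for illustration, not the Table 1 data.

def spearman_rho(x_ranks, y_ranks):
    """Spearman rank correlation for two untied rankings of the same cases."""
    n = len(x_ranks)
    d2 = sum((x - y) ** 2 for x, y in zip(x_ranks, y_ranks))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(spearman_rho([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # identical rankings: 1.0
print(spearman_rho([1, 2, 3, 4, 5], [5, 4, 3, 2, 1]))  # reversed rankings: -1.0
```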



To provide additional insights, several factors which might impact the subjects' ratings and accuracy levels were manipulated. In particular, the presentation format and order of the cases for each set of data were manipulated. These manipulations were intended to vary the difficulty of the assessment task. For format, there were three types of presentations: simultaneous, with all 10 cases for each set of data and the covariation judgment scale on the same page; sequential with the ability to review the data, where each case was presented separately, and subjects could go back and re-examine the data before making their judgment; and sequential without review, where each case was presented separately, but subjects could not go back to re-examine the data before making their judgment. There were two orders for the cases: ordered, in which the 10 cases for each set were ordered consecutively from 1 to 10 on attribute X; and random, where the 10 cases for each set were in random order. One might hypothesize that accuracy would suffer when the task became more difficult (i.e., in the sequential and random conditions).

Finally, Jennings, Amabile, and Ross (1980) argued that subjects use the extreme values in a set of data as a simple heuristic to assess covariation. For rank-order data, one can have roughly the same level of rank-order correlation, but have different degrees of "discrepancy" at the extreme ranks. For example, consider Sets 1 and 2 in Table 1. If one takes the absolute differences between the ranks on attributes X and Y for those cases ranked 1 and 10 on X and those ranked 2 and 9 on X for each set, it is clear that Set 2 has greater discrepancies at the extreme ranks of 1, 2, 9, and 10 on X than does Set 1. Likewise, Set 4 has greater discrepancies than Set 3. One might predict, based on Jennings et al. (1980), that for distributions with a given level of actual rank correlation, higher levels of discrepancy would lead to reduced estimates of covariation. This might be especially true when the data are ordered, as the extreme cases would be more obvious.
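One plausible operationalization of "discrepancy at the extreme ranks" (the paper's exact computation is not reproduced, and the rankings below are illustrative rather than the Table 1 sets) would sum the absolute rank differences over the cases ranked 1, 2, 9, and 10 on attribute X:

```python
# Sketch of an extreme-rank discrepancy measure; the choice of extreme
# X-ranks (1, 2, 9, 10) follows the description in the text, but the
# exact measure used in the paper is an assumption here.

def extreme_discrepancy(x_ranks, y_ranks, extreme_x=(1, 2, 9, 10)):
    """Sum |rank on X - rank on Y| over cases whose X-rank is extreme."""
    return sum(abs(x - y) for x, y in zip(x_ranks, y_ranks) if x in extreme_x)

# Illustrative rankings of 10 cases (not the Table 1 data)
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y_low = [2, 1, 5, 3, 6, 4, 8, 10, 9, 7]    # small mismatches at the extremes
y_high = [6, 7, 3, 1, 5, 4, 8, 10, 2, 9]   # large mismatches at the extremes
print(extreme_discrepancy(x, y_low))   # 1 + 1 + 0 + 3 = 5
print(extreme_discrepancy(x, y_high))  # 5 + 5 + 7 + 1 = 18
```

Two sets with similar overall rank correlations can thus differ sharply on this measure, which is exactly the contrast built into Sets 1 vs. 2 and Sets 3 vs. 4.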

Thus, the four sets of data can be arrayed according to level of correlation and level of discrepancy. Set 1 is low correlation, low discrepancy; Set 2 is low correlation, high discrepancy; Set 3 is high correlation, low discrepancy; and Set 4 is high on both correlation and discrepancy.

In sum, there are four factors in the design of the experiment. Two, format and order, were manipulated between subjects. Two, level of correlation and degree of discrepancy, were within subjects factors. It was hypothesized that the high level of correlation and low discrepancy conditions would lead to higher estimates by subjects, and that subjects would be less accurate in the more demanding sequential format and random order conditions. A potential interaction between format and discrepancy was also considered.


Subjects were 112 college students who were given questionnaires during class time, with different questionnaire versions used to manipulate format and order. Questionnaire versions were distributed randomly in each class. Completed questionnaires were available for 106 subjects, each providing ratings on four sets of data.

The first page of each questionnaire gave instructions for estimating degree of relationship and explained the scale used in the study, which was a line 100 mm long, with short vertical lines at the left end, mid-point, and right end. These points were labelled "-1.0, Perfectly Negative Relationship"; "0, No Relationship"; and "+1.0, Perfectly Positive Relationship." The subject marked the scale by drawing a line through the scale at the appropriate point. These responses were measured to the nearest mm (from 0 to 100) after the experiment was complete.

Following these instructions on use of the scale, each subject's questionnaire then presented the four sets of stimuli shown in Table 1, in the order Set 1, Set 4, Set 2, Set 3. Each set, as noted above, had 10 cases ranked on each of two attributes. The four stimulus sets had been taken from Consumer Reports price-quality data for four actual products. However, no mention of this fact or of price-quality was made in the current study. As also noted above, the four sets of rank-order data varied in actual level of rank-order correlation (2 low and 2 high), and in degree of discrepancy at the extremes (one low and one high at each level of correlation).

The format and order conditions were manipulated by the form of the questionnaire. In the simultaneous condition, each set of 10 pairs of ranks was presented on one page, with the rating scale at the bottom. In the sequential conditions, each set was presented as a "booklet" of 10 pages stapled together, with one pair of ranks on each page. In the sequential condition where the data could be reviewed, the subject was allowed to re-examine the data as desired before filling out the scale. In the sequential, no-review condition, instructions directed the subject to not turn back to the data before filling out the scale. The order conditions were manipulated by either having the 10 cases presented in order from 1 to 10 on attribute X or by having the 10 cases randomly ordered (different random orders for each set, but the same random orders for all subjects).

There were two main dependent variables. The 100 millimeter measurement of the subject's rating on the scale was linearly transformed to a scale ranging from -1 to +1. This transformed scale will be called the rating of covariation in the results and discussion below. In attempting to measure accuracy, there is a problem in that there is no unambiguous measure of rank correlation. Hence, two measures were used: the absolute difference between the rating of covariation and Spearman's rho (Spearman accuracy), and the absolute difference between the rating of covariation and Kendall's tau (Kendall accuracy). These accuracy ratings are somewhat problematic in any case, as there is no necessary relationship between any metric possibly used by subjects and these definitions of rank-order correlation, even if subjects were able to discriminate varying levels of rank-order covariation. Hence, the accuracy findings must be treated with caution. The data from the experiment were analyzed with a two between (format and order), two within (level and discrepancy) mixed-factor analysis of variance design.
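The dependent-variable construction described above amounts to a linear rescaling plus an absolute deviation. A minimal sketch (the function names are ours, not the paper's):

```python
# Sketch of the dependent-variable construction: a mark measured in mm on
# the 100 mm response line is rescaled to [-1, +1], and accuracy is the
# absolute difference from a normative rank correlation (Spearman's rho
# or Kendall's tau).

def rating_from_mm(mm: float) -> float:
    """Linearly map a 0-100 mm mark to a covariation rating in [-1, +1]."""
    return (mm - 50.0) / 50.0

def accuracy(rating: float, true_corr: float) -> float:
    """Absolute difference between the rating and a normative correlation."""
    return abs(rating - true_corr)

r = rating_from_mm(75)                 # a mark at 75 mm -> rating of +0.5
print(round(accuracy(r, 0.6), 3))      # 0.1
```

Note that this accuracy score inherits the ambiguity discussed in the text: it depends on which normative rank correlation (rho or tau) is taken as the target.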


The data for each cell are presented in Table 2, which shows the rating of covariation, Spearman accuracy, and Kendall accuracy. The analysis of variance for the rating of covariation showed only three significant effects. Subjects did distinguish between the levels of actual correlation, as those sets with high correlation were rated higher than those sets with low correlation (X=.293 vs. X=.037, F(1,294)=33.60, p<.001).

Contrary to our hypothesis, there was no main effect due to discrepancy. There was a discrepancy x order interaction, however (F(1,294)=3.75, p<.054), and the form of this interaction was as predicted: when the data were ordered, high discrepancy produced lower ratings (X=.099) than low discrepancy (X=.201). When the cases in each set were ordered randomly, however, there was no difference between high (X=.183) and low (X=.166) discrepancy sets.

Finally, a level x discrepancy interaction (F(1,294)=8.46, p<.01) simply reflected the differences in means across the four sets of data, as each combination of level and discrepancy signifies a particular set of data. The means for each cell are low correlation, low discrepancy (Set 1), X=.027; low correlation, high discrepancy (Set 2), X=.101; high correlation, low discrepancy (Set 3), X=.391; high correlation, high discrepancy (Set 4), X=.196. Note that the actual correlations, presented in Table 1, do not follow this same ordering. For example, Set 1 had a higher actual rank-order correlation than Set 2, but subjects assessed Set 2 higher than Set 1. Similarly, Set 4 had higher actual correlation than Set 3, but subjects assessed Set 3 higher than Set 4.



There were few effects for either measure of accuracy. For Spearman accuracy, there was a main effect due to level of correlation, with greater accuracy (lower absolute differences) for low correlations than for high (X=.346 vs. X=.420, F(1,294)=9.01, p<.003). There was also a main effect due to discrepancy, with greater accuracy (lower absolute differences) for low (X=.358) than for high (X=.430) discrepancy (F(1,294)=5.22, p<.023). Finally, for Kendall accuracy there was only one significant effect, with low levels of correlation showing more accuracy than the high level (X=.331 vs. X=.416, F(1,294)=7.48, p<.007). These results relating accuracy to level of correlation may simply show that subjects tended to be biased toward using mid-range values on the scale. Surprisingly, there were no effects of format or order on accuracy.

Discussion of Results

The results show that subjects are able to distinguish high from low levels of covariation. However, within each level, subjects did not order the sets in accordance with normative measures of rank correlation. Thus, the subjects may only be sensitive to relatively large differences in rank-order covariation. Further research with more levels of covariation in the design would be needed to more completely characterize subjects' assessment capabilities.

Note also that there were no biasing effects of expectation present in this study. If subjects had been told that the two characteristics represented meaningful dimensions such as price and quality rankings, the results might look quite different. In this study, the characteristics were not labelled, and thus no prior beliefs should have affected subjects' estimates.

Format and order had no effects on accuracy and only one interaction effect on ratings. It is surprising that making the task more difficult had few effects, but these results are similar to those of Jennings et al. (1980), who also found few effects of attempting to increase the difficulty of the task. Perhaps the differences in covariation were large enough between levels that the format and order manipulations did not make discrimination between them very difficult. Once again, research with more levels (hence requiring more discriminating judgments) of actual rank-order correlation is needed to further our understanding. On the whole, the pattern of results appears to be more similar to those for continuous variables than to the results for binary variables.

Finally, the results only partially support the notions of Jennings et al. (1980) regarding the use of discrepancy as a heuristic. When the "extreme" values are easy to see, i.e., when the cases are ordered on Attribute X, discrepancy appears to matter. When it is difficult to see the extremes, i.e., the cases are randomly ordered, discrepancy has no effect. Thus, the proposed heuristic may be used in task environments where it is easy to implement. Studies of additional heuristics for assessing covariation, perhaps using process tracing methodologies, would be very valuable.

Unfortunately, the discrepancy measures have several limitations. First, they assume that the left-hand column, Attribute X, is the natural "anchor" column when individuals look at the data. That is, the assumption is made that individuals perceive what is at the "extremes" of the set of data by focussing on Attribute X, since it is the left-most column and would be examined first in normal reading order. This would be especially true if the data were ordered on X, and this may be the cause of the discrepancy x order interaction. Second, the sets at each level of correlation were not precisely equal in correlation, nor were the discrepancies centered in the same place for each level of correlation. That is, Sets 1 and 2 have different values of rank correlation, even though they both represent near zero correlation. Sets 3 and 4 also differ. It would have been preferable to select two different sets which had equal correlations at each level. Finally, the source of the differences between the high and low discrepancy conditions is not the same for the low correlation (centered on the cases ranked 2 and 9) and high correlation (centered on the cases ranked 1 and 10) conditions. These latter problems stem from the use of rank-order data derived from Consumer Reports, as noted above. Despite these problems, these preliminary data provide a good start toward investigating assessment of covariation in rank-order data.


The hypothesis-testing and assessment of covariation frameworks presented above provide good entry points for examining consumer learning. Several important areas for research have been identified by considering these literatures: the potentially biasing role of prior knowledge and expectations; possible search and sampling biases; and issues regarding consumers' abilities to detect relationships in rank-order data. Future research might concentrate on these issues in consumer settings. Price-quality hypotheses appear to offer a fertile setting for such research.

Characterizing the different "modes" of hypothesis testing suggested by Deighton (1983) also appears to offer great promise. His idea that consumers in low involvement situations "mindlessly" adopt hypotheses from advertising is quite provocative, and needs empirical research. This distinction between "modes" may also have implications for understanding assessment of covariation. The discussion of the persistence of price-quality beliefs presented above may represent an example of a schematic inquiry approach to covariation assessment.

In sum, the argument presented in this paper is that a hypothesis-testing view and an analysis of the processes of covariation assessment can provide a basis for a more systematic cognitive approach to consumer learning. Examining hypotheses and estimates of covariation and the factors which influence them can greatly expand our understanding of how consumers learn.


Alloy, Lauren B. and Abramson, Lyn Y. (1979), "Judgment of Contingency in Depressed and Nondepressed Students: Sadder but Wiser?" Journal of Experimental Psychology: General, 108, 441-485.

Beach, Lee R. and Scopp, T. S. (1966), "Inferences about Correlations," Psychonomic Science, 6, 253-254.

Chapman, Loren J. and Chapman, Jean P. (1967), "Genesis of Popular but Erroneous Psychodiagnostic Observations," Journal of Abnormal Psychology, 72, 193-204.

Chapman, Loren J. and Chapman, Jean P. (1969), "Illusory Correlation as an Obstacle to the Use of Valid Psychodiagnostic Signs," Journal of Abnormal Psychology, 74, 271-280.

Crocker, Jennifer (1981), "Judgment of Covariation by Social Perceivers," Psychological Bulletin, 90, 272-292.

Crocker, Jennifer (1982), "Biased Questions in Judgment of Covariation Studies," Personality and Social Psychology Bulletin, 8, 214-220.

Deighton, John (1983), "How to Solve Problems that Don't Matter: Some Heuristics for Uninvolved Thinking," in Richard P. Bagozzi and Alice M. Tybout (eds.), Advances in Consumer Research, Volume X, Ann Arbor: Association for Consumer Research, 314-319.

Erlick, Dwight E. and Mills, Robert G. (1967), "Perceptual Quantification of Conditional Dependency," Journal of Experimental Psychology, 73, 9-14.

Fischhoff, Baruch and Beyth-Marom, Ruth (1983), "Hypothesis Evaluation from a Bayesian Perspective," Psychological Review, 90, 239-260.

Hamilton, David L. and Rose, Terrence L. (1980), "Illusory Correlation and the Maintenance of Stereotypic Beliefs," Journal of Personality and Social Psychology, 39, 832-845.

Jenkins, Herbert M. and Ward, William C. (1965), "Judgment of Contingency between Responses and Outcomes," Psychological Monographs, 79, Whole No. 594, 1-17.

Jennings, Dennis L., Amabile, Teresa M., and Ross, Lee (1980), "Informal Covariation Assessment: Data-based versus Theory-based Judgments," in Tversky, Amos, Kahneman, Daniel, and Slovic, Paul (eds.), Judgment under Uncertainty: Heuristics and Biases, New York: Cambridge University Press, 211-230.

Riesz, Peter C. (1979), "Price-Quality Correlations for Packaged Food Products," Journal of Consumer Affairs, 13, 236-247.

Smedslund, Jan (1963), "The Concept of Correlation in Adults," Scandinavian Journal of Psychology, 4, 165-173.

Snyder, Mark (1981), "Seek and Ye Shall Find: Testing Hypotheses about Other People," in Higgins, E. T., Herman, C. P., and Zanna, M. P. (eds.), Social Cognition: The Ontario Symposium on Personality and Social Psychology, Hillsdale, NJ: Lawrence Erlbaum.

Snyder, Mark, and Swann, William B. (1978), "Hypothesis-Testing Processes in Social Interaction," Journal of Personality and Social Psychology, 36, 1202-1212.

Sproles, George B. (1977), "New Evidence on Price and Product Quality," Journal of Consumer Affairs, 11, 63-77.

Trope, Yaacov, and Bassok, Miriam (1982), "Confirmatory and Diagnosing Strategies in Social Information Gathering," Journal of Personality and Social Psychology, 43, 22-36.

Ward, William C., and Jenkins, Herbert M. (1965), "The Display of Information and the Judgment of Contingency," Canadian Journal of Psychology, 19, 231-241.