Citation:

Peter L. Wright (1974), "Research Orientations For Analyzing Consumer Judgment Processes," in NA - Advances in Consumer Research Volume 01, eds. Scott Ward and Peter Wright, Ann Arbor, MI: Association for Consumer Research, Pages: 268-279.


RESEARCH ORIENTATIONS FOR ANALYZING CONSUMER JUDGMENT PROCESSES

Peter L. Wright, University of Illinois

"Beware the man of one method or one instrument...He tends to become method oriented rather than problem oriented (Platt, 1964, p. 251)."

If the choice act represents the focal point of consumer research, the cue-usage strategies a consumer employs in making choices are perhaps the most interesting question facing consumer researchers. There are a number of methods which might be used in studying these cue-usage processes; this paper is intended as a critical overview of those methods. The prime criterion will be each method's ability to provide unambiguous evidence. The general theme will be that traditional and popular methods, such as fitting math models or using protocol-simulation, have inherent limitations and that rigorous pursuit of answers to questions on consumer cue-utilization demands multiple methods of data collection and analysis. As a starting point, the review concerns only methods for studying the micro-process by which cues are weighted and combined into an overall judgment. The practical relevance of understanding these processes to problem-solution efforts aimed at modifying consumer decisions has been developed elsewhere (Wright, 1973a). The basic paradigm of concern is one where the researcher can observe the cue-usage process unfolding in its natural sequence; methods for uncovering existing structures of beliefs or already-processed judgments are not of direct interest, except to note that drawing inferences from retrospective studies of consumer belief systems to questions of cue-usage/choice strategies is fraught with ambiguities.

FITTING MATHEMATICAL MODELS

Undoubtedly the most pervasive approach to the study of human judgment has entailed (a) specification of some mathematical representation of a theoretically possible cue-usage process; (b) presentation of cue "profiles" to an individual subject who is asked to make a prediction or judgment based on each cue profile. The profile may take the form of a set of previously quantified cues, e.g., a graduate school candidate's scores on the verbal, quantitative, and advanced portions of the Graduate Record Exam, his grade point average, mean peer ratings of need achievement and of extraversion, etc. (Wiggins and Kohen, 1971). Or the cues may be expressed categorically, e.g., a common stock's Standard and Poor ratings on volume trend (up or down), near-term prospects (good or poor), profit margin trend (up or down), etc. (Slovic, Fleissner, and Bauman, 1972). Essentially, the subject is given a description of a hypothetical product --- his configuration of beliefs about a product's attributes is controlled by the researcher --- and he is asked to make a judgment about each of a set of such products not identified by unique semantic "brand names"; and (c) comparing the closeness of fit between the judgments predicted by the mathematical model and those actually made by the subject. Typically, if the researcher discovers that the degree of association is low, he assumes his mathematical/theoretical idea about how information was being processed was incorrect.
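
For concreteness, the three steps above can be reduced to a few lines of code. The sketch below is purely illustrative and is not drawn from any of the studies cited: the cue profiles, the hypothesized equal-weight linear rule, and the subject's judgments are all invented.

```python
import numpy as np

# (a) A hypothesized cue-usage model: here, an equal-weight linear rule.
def hypothesized_model(profiles):
    return profiles.mean(axis=1)

# (b) Cue profiles shown to the subject (rows = hypothetical products,
# columns = quantified cues), and the judgments the subject returned.
profiles = np.array([[7, 2, 5],
                     [3, 8, 6],
                     [9, 9, 1],
                     [2, 3, 4],
                     [6, 5, 8]], dtype=float)
subject_judgments = np.array([4.0, 6.5, 5.0, 3.0, 7.0])  # invented data

# (c) Compare predicted and actual judgments; a low correlation is usually
# read as evidence against the hypothesized processing model.
predicted = hypothesized_model(profiles)
fit = np.corrcoef(predicted, subject_judgments)[0, 1]
print(f"model-data correlation: {fit:.2f}")
```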

One of the foremost problems inherent in the mathematical model approach to judgmental processes is in interpreting goodness of fit. What does a poor fit mean? An extremely poor fit might reasonably be interpreted as evidence that the researcher's hypothesis about processing was invalid. On the other hand, a poor fit may invalidate only the particular mathematical representation used rather than the conceptual processing model which the math model was supposed to mirror. It is quite possible that a math model labeled by its creator as representative of a particular conceptual model doesn't actually capture the nature of that sort of processing at all. For example, Einhorn (1970) proposed mathematical representations of conjunctive and disjunctive judgment strategies which incorporate log transforms of the cue and criterion variables within an additive regression formula. Several consumer researchers (including, alas, the author) have employed these representations in model-fitting studies intended to explore whether conjunctive, disjunctive, or linear compensatory processing strategies were being used by consumers (Heeler, Kearney, and Mehaffey, 1973; Wright, 1972). Unfortunately, the Einhorn models have been criticized as not really approximating very closely the intended strategies since the math models proposed are still compensatory, thus missing the key aspect of the two non-compensatory (conjunctive, disjunctive) models (Goldberg, 1972; Birnbaum, 1973). We must be very careful therefore to give literal translations from math model to conceptual model so that the math model used is actually appropriate for examining the hypothesis of interest. For example, even though the log transform regression models proposed by Einhorn may not be ideal for hypotheses concerning conjunctive or disjunctive processing, it has been suggested that these models can be more appropriately viewed as "differential weighting" compensatory models (Anderson, 1972). The log transform suggested by Einhorn for the conjunctive process more literally portrays a pronounced "negative bias" in cue weighting, while the model suggested for the disjunctive process portrays a pronounced "positive bias" (Wright, 1973b). Thus, if the hypothesis of interest concerns the existence of such biases in cue usage, those particular math representations may still be a useful vehicle.
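
The criticism is easy to demonstrate numerically. The sketch below contrasts a multiplicative (log-additive) model of the general kind Einhorn proposed with a literal conjunctive cutoff rule; the weights, cutoffs, and profiles are invented for the illustration. The point is that the log-additive form still lets very high scores on some cues offset a failing score on another, which a strict conjunctive rule forbids.

```python
import numpy as np

weights = np.array([0.4, 0.3, 0.3])      # invented weights
cutoffs = np.array([4.0, 4.0, 4.0])      # invented conjunctive cutoffs

def log_additive(profile):
    # Multiplicative form, estimated in practice by regressing log(judgment)
    # on log(cues); still compensatory because terms trade off in the sum.
    return float(np.exp(np.sum(weights * np.log(profile))))

def strict_conjunctive(profile):
    # Literal conjunctive rule: acceptable only if every cue clears its cutoff.
    return float(np.all(profile >= cutoffs))

weak_on_one_cue = np.array([9.0, 9.0, 2.0])   # fails one cutoff badly
uniformly_modest = np.array([5.0, 5.0, 5.0])  # clears every cutoff

for p in (weak_on_one_cue, uniformly_modest):
    print(p, "log-additive:", round(log_additive(p), 2),
          "strict conjunctive:", strict_conjunctive(p))
```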

If a poor fit between math model and data demands careful evaluation before substantive interpretations are given, a good fit is perhaps even more treacherous. To be sure, a close fit is a necessary condition for even bothering to give further consideration to the credibility of the hypothesized process --- but it must never be considered in and of itself as sufficient evidence. Discovering a close fit does not necessarily tell us much about the actual cue-usage processes employed by the subject. Hoffman (1960) reminded us that a math model is a "paramorphic representation" of the covert mental operations. Thus, two different types of information processing strategies may suggest equivalent algebraic models, or two different algebraic models may provide equally good fits, particularly when the data contain errors. In an extreme example of this latter case, Hodges (1973) presented a variable-weighting additive model which will always make predictions identical to those of Anderson's (1972) averaging model. Although consumer researchers have not shown concern for the adding vs. averaging controversy, the possible problems in choosing among competing theoretical models based only on model fitting are well demonstrated.
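
A small numerical illustration of why fit statistics alone cannot arbitrate: when every stimulus carries the same number of cues, a weighted-adding model and a weighted-averaging model yield perfectly correlated predictions, so no correlational fit index can separate them. (Hodges' equivalence result involves variable weighting; what is shown here is only the simpler constant-set-size symptom.) The weights and scale values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = np.array([2.0, 1.0, 3.0])              # invented importance weights
scale_values = rng.uniform(0, 10, size=(20, 3))  # invented stimulus values

adding = scale_values @ weights                      # weighted sum
averaging = scale_values @ weights / weights.sum()   # weighted average

# With a fixed number of cues per stimulus the two models' predictions are
# perfectly correlated, so goodness of fit alone cannot choose between them.
print(np.corrcoef(adding, averaging)[0, 1])   # 1.0
```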

THE UBIQUITOUS LINEAR MODEL

One theoretically plausible information processing strategy which consumers might use in certain situations is a linear compensatory process. This pictures the individual as mentally adding or averaging cues about the product's status on different dimensions, allowing negative cues to compensate for or balance positive cues. Such a process can readily be represented by an additive mathematical model; the regression model and the ANOVA model have been popular. In fitting judgmental data, the success of the linear compensatory model is well demonstrated. But these close fits provide a good illustration of the ambiguity inherent in model fitting, for linear compensatory regression models are, in Robyn Dawes' words, so robust that they are "lousy models of human judgment because they are too easily fit by the data (Dawes, 1972, p. 15)." (Dawes qualifies this complaint by noting the great practical value of linear models in decision-making, also a result of their robustness.)

The awesome robustness of linear mathematical models in describing data generated by non-linear processes is worthy of review, if only to accentuate the problem presented. Green (1968) noted that multivariate studies in a variety of problem areas have for years consistently found that the linear regression of the dependent variable on the independent predictors accounts for almost all of the accountable variance. Attempts to analyze nonlinearity or configurality rarely have added much to the analysis. He presents one example of why linearity tends to emerge so often from a statistical analysis. Data points generated by a perfectly parabolic function (a single-humped curve) would appear totally non-linear if all the data points were entered as observations into the analysis. If, however, only certain patterns of those potential data points are entered (and others consequently left out), the curve-fitting analysis might appear totally nonlinear, partially linear and partially quadratic, or almost entirely linear. Thus any theoretically nonlinear pattern of data can be decomposed into segments which are essentially linear, with the result that a linear model may provide a very close fit if only a portion of the entire range of observations is served up as fodder for the analysis. Nonlinearity will emerge only if the entire continuum is represented in the data or if the range of observations included contains the crucial segment of observations.
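
Green's parabola example can be reproduced directly. The particular quadratic function, the sampling ranges, and the use of a variance-accounted-for index are choices made only for this illustration.

```python
import numpy as np

def linear_r2(x, y):
    # Fit y = a*x + b by least squares and return the proportion of
    # variance accounted for by the straight line.
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

x_full = np.linspace(0, 10, 101)
y_full = -(x_full - 5.0) ** 2          # perfectly parabolic process

x_left = x_full[x_full <= 4.0]         # only the rising segment observed
y_left = -(x_left - 5.0) ** 2

print("R^2, full range  :", round(linear_r2(x_full, y_full), 3))  # near 0
print("R^2, left segment:", round(linear_r2(x_left, y_left), 3))  # near 1
```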

Thurstone (1947) demonstrated that linear factor analysis could recapture a good approximation to a set of data even when the underlying data structure was nonlinear. Yntema and Torgerson (1961) similarly demonstrated that even though the relation of a criterion to a set of predictors was completely interactive, a linear analysis-of-variance of the data showed that 94% of the variance was accounted for by main effects. A key condition here is that each predictor variable be monotonically related to the criterion (i.e., that the predictor variables have a relationship with the criterion which is generally increasing or decreasing).
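
The Yntema and Torgerson result is easy to approximate, though the grid and fitting method below are choices for this sketch rather than their original design: a criterion generated as the pure product of three cues (a completely interactive rule) is fit by an additive main-effects model over the full factorial, and because each cue is monotonically related to the criterion the additive model recovers most of the variance (roughly 85% on this particular grid).

```python
import numpy as np
from itertools import product

levels = [1.0, 2.0, 3.0]
grid = np.array(list(product(levels, repeat=3)))   # full 3x3x3 factorial
criterion = grid.prod(axis=1)                      # purely interactive rule

# Additive (main-effects-only) approximation via least squares.
design = np.column_stack([np.ones(len(grid)), grid])
coef, *_ = np.linalg.lstsq(design, criterion, rcond=None)
additive_fit = design @ coef

explained = 1.0 - ((criterion - additive_fit) ** 2).sum() / \
            ((criterion - criterion.mean()) ** 2).sum()
print(f"variance accounted for by main effects: {explained:.0%}")
```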

Dawes (1972) cited several additional factors contributing to the pervasive success of linear math models. He noted that "error" in measuring the criterion variable has no effect on the matrix of intercorrelations among the independent variables; thus the relative beta weights of the predictors are impervious to such error. All correlations between predictors and criterion are reduced by a constant amount. Error in the predictor variables has the effect of making a conditionally monotonic relationship become more linear. For example, Dawes reported a study where data were generated by a perfect conjunctive multiple-cutting-score process, such that a step-function was the ideal theoretical representation for the data. However, as the independent variables were measured with increasing error, the rectangular contour of the step-function first became curved, then flattened out until it dissipated into something very akin to a straight line. Lord (1967) reported a similar demonstration. Groner (1973) cited a study by Goldberg (undated) in which a noncompensatory lexicographic cue-usage process was used to generate judgments. When the data were made probabilistic by superimposing a sufficiently large error component, the linear multiple regression analysis made better predictions than the "true" model. The linear regression model is apparently very hard to disqualify as a model of information integration, particularly in the frequently encountered context where variables are conditionally monotonic (e.g., no matter how an automobile is rated on other variables, it is viewed as more likely to yield maximal performance the higher its gas mileage, the higher its riding comfort, the lower its cost of maintenance, etc.). Where a variable doesn't have such a relationship with the criterion, it will usually have some sort of single-humped curvilinear relationship to the criterion, e.g., some mid-range level of "overall car size" or "maximum speed" is viewed as optimal; in this case, we can easily transform it into a monotonic relationship by introducing the notion of deviations from an ideal point. The other condition promoting the success of linear models is the existence of a fair degree of measurement error, a condition altogether too familiar in the contexts of consumer perceptions and reported judgments or predictions.
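
Dawes' demonstration can be mimicked in simulation. Below, judgments are generated by a strict conjunctive cutoff rule applied to the true cue values, but the cues are observed only with error; as the error grows, a linear regression fit to the noisy cues correlates with the judgments about as well as, and eventually better than, the true conjunctive rule scored on those same noisy cues. The cutoffs, error levels, and sample size are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
true_cues = rng.normal(size=(n, 3))
# Judgments generated by a strict conjunctive multiple-cutoff rule.
judgment = np.all(true_cues >= 0.0, axis=1).astype(float)

for noise_sd in (0.0, 0.5, 1.0, 2.0):
    observed = true_cues + rng.normal(scale=noise_sd, size=true_cues.shape)

    # "True" conjunctive model scored on the cues the researcher observes.
    conj_pred = np.all(observed >= 0.0, axis=1).astype(float)

    # Rival linear regression fit to the same observed cues.
    design = np.column_stack([np.ones(n), observed])
    coef, *_ = np.linalg.lstsq(design, judgment, rcond=None)
    lin_pred = design @ coef

    conj_r = np.corrcoef(conj_pred, judgment)[0, 1]
    lin_r = np.corrcoef(lin_pred, judgment)[0, 1]
    print(f"noise sd {noise_sd:3.1f}:  conjunctive r = {conj_r:.2f}   linear r = {lin_r:.2f}")
```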

THE ROLE OF MATH MODELS

What might be done to overcome the robustness of the linear model if model fitting is retained as a means for choosing among competing hypotheses about judgment processes? The practice of fitting only one model to a set of data and proclaiming success for that model based on a reasonable correlation maximizes our chances of being misled. A researcher should be prepared to compare one theoretically attractive model with whatever plausible rival models he can specify. If alternative models provide equally close fits to the data, the researcher (and his public) is alerted to the ambiguity of his test. Fitting only one model doesn't present the evidence crucial for high internal validity, i.e., how many plausible rival hypotheses did the study eliminate? The rival models contrasted may, ideally, be mathematical representations of other legitimate conceptual models; in this case, competing theories are in a sense put head to head and the chance to really learn something is great. Or rival models may at least be simple alternative math formulations, not created specifically to represent a competing conceptual model but created merely to serve as "controls", as is common in experimental research. Goldberg (1971) employed such atheoretical control models in testing Einhorn's "conjunctive" and "disjunctive" models, simply to guard against accepting a close fit of one model as necessarily validating a hypothesis about processes.

Given that multiple models are tested, the researcher can take a giant step toward making his comparison meaningful by recognizing that for many configurations of product attributes, different judgmental strategies will lead to the same prediction or choice. Where this is true, it is obviously uninformative to compare the predictions of the models since both will provide equivalent fits and nothing is solved. Even if only a large proportion of the judgment problems facing the subject are of this indeterminate nature, the incorrect model will benefit (in terms of goodness of fit) from these "shared" cases to the point that it may be impossible to discriminate it from the true model. In testing hypotheses about rival models, the researcher should take pains to set up crucial tests by presenting product profiles which do imply different choices from different strategies.
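
Setting up such crucial tests can be done mechanically: enumerate candidate profiles in advance and retain only those on which the rival strategies disagree. The sketch below keeps pairs of profiles for which an equal-weight linear rule and a conjunctive cutoff rule would pick different members; the attribute levels and the cutoff are invented.

```python
from itertools import product, combinations

levels = [1, 2, 3]                       # invented attribute levels
profiles = list(product(levels, repeat=3))
cutoff = 2                               # invented conjunctive cutoff

def linear_choice(a, b):
    # Equal-weight linear rule: pick the profile with the higher sum.
    return a if sum(a) > sum(b) else b if sum(b) > sum(a) else None

def conjunctive_choice(a, b):
    # Conjunctive rule: pick the profile that clears every cutoff.
    pass_a, pass_b = all(x >= cutoff for x in a), all(x >= cutoff for x in b)
    if pass_a == pass_b:
        return None                      # rule does not discriminate the pair
    return a if pass_a else b

# Keep only pairs on which the two strategies imply different choices.
diagnostic_pairs = [
    (a, b) for a, b in combinations(profiles, 2)
    if None not in (linear_choice(a, b), conjunctive_choice(a, b))
    and linear_choice(a, b) != conjunctive_choice(a, b)
]
print(len(diagnostic_pairs), "of", len(list(combinations(profiles, 2))), "pairs are diagnostic")
print(diagnostic_pairs[:3])
```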

As a general comment equally applicable to any research approach to information processing questions, I would like to emphasize the seeming futility of "fishing expeditions". Even though a reasonable number of studies have been reported in which the issue was, "Is man a linear or a configural information processor?", very few of these have evidenced any a priori reasoning about when a person might be configural, when he might not, and what sort of configurality would be likely. If we do not examine a person under conditions where he is highly likely to be employing a configural strategy, why expect to find evidence of configurality? If we don't have any idea of what type of configural strategy might be quite likely, given the situation, we aren't likely to discover it. Finally, as Anderson (1972) has argued, interaction effects in cue-usage studies should only be interpreted as evidence of true interactive processing if the interaction can be given a substantive, meaningful interpretation (preferably, on an a priori rather than post-hoc basis). Using a handy data base precludes much of this type of a priori reasoning.

As another general comment, we can profit from hindsight by not joining in the search for some monolithic information processing strategy thought to operate across people and across situations, even though chasing such a seeming will-o'-the-wisp has been surprisingly popular among earlier researchers. Do different judgment strategies lead to different choices? Yes. Do different strategies differ on such dimensions as ease of execution, likelihood of indicating a specific choice, or likelihood of indicating an ideal choice? Apparently. Can consumers apply different judgment strategies? Yes. Is an individual likely to be adaptive? Yes. Will consumers adjust their judgmental strategies to suit the context of judgment? Probably. The interesting question is: under what conditions will a consumer apply what strategy, and why?

The IP strategy and the task environment must be considered simultaneously. This demands both a taxonomy of strategies and a taxonomy of task factors, both of which have been discussed elsewhere (Wright, 1972). The researcher's role is to demonstrate how a particular strategy is relevant for a particular task environment. Math models developed for and empirically validated in a particular task setting shouldn't be misappropriated for testing in another task setting, at least without recognition of what is being done; further, the researcher has the responsibility for describing the task setting he has in mind in constructing and testing a model.

If task factors influence the consumer's IP strategy, experimental manipulations of such factors are called for. Neither model-fitting researchers nor protocol-simulation researchers have shown much inclination for experimentation. Model-fitting research has typically used "judgment task" or "survey" data collection techniques which, as Runkel and McGrath (1972) point out, deal with behavior not intrinsically connected with any behavior setting. The environment in which subjects make the judgments is not controlled, nor is it described by the researcher via empirical parameters. Since the environment looms large in IP questions, there is no reason why model-fitting data collection cannot be undertaken in conjunction with experimental variations of task conditions. Operational tests of hypotheses might then take the form of comparing the proportions of subjects best-fit by different models in different treatments (e.g., Einhorn, Komorita, and Rosen, 1972; Wright, 1973b).
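
Operationally, one such test might tabulate, for each treatment, how many subjects are best fit by each candidate model and then test the resulting contingency table. The counts below are invented purely to show the mechanics.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: task conditions (e.g., low vs. high information load).
# Columns: number of subjects best fit by each candidate model.
# All counts are hypothetical.
counts = np.array([[22,  8],    # low load:  linear, non-compensatory
                   [ 9, 21]])   # high load: linear, non-compensatory

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```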

PROTOCOL GATHERING-SIMULATION MODELING

There is, of course, no necessary reason why the data elicitation technique of protocol gathering must be used in conjunction with a computer-based simulation model of an individual's cue-usage processes. Thought protocol data can provide the input variables for experimental analysis or for entry into mathematical models. Simulation need not rely on protocol data. However, the frequent use of protocols and simulations together in the study of judgment processes argues that our examination of potential problems treat them concurrently. Bettman's (1972) recent review of the status of this research approach delineated six problem areas: data collection and modeling methodology; memory structure; social influences; model analysis; model generality; and the handling of change. No attempt will be made here to match the comprehensiveness of Bettman's coverage; instead, some other limits of this research approach in supplying unambiguous evidence on consumer cue-usage processes will be explored.

THE PROTOCOL

Protocol elicitation involves asking a person engaged in some sort of cue-usage task to verbalize his ongoing thoughts as they occur. These are recorded verbatim and are used to construct or test a simulation model of the process. The researcher assumes that his subject is sufficiently aware of his processing activities that he can articulate them as they occur, and that conscious thoughts are determinants of subsequent behavioral choices. The math modeler, on the other hand, allows no "internal" data (save the global judgment) to enter his analysis.

Some major questions arise quickly if one takes the view expressed earlier about the adaptability of individuals to the task environment at hand. Protocol analysis has yielded considerable insight into the general upper limits of human information processing (Newell and Simon, 1973). As these authors have observed, however, task factors have emerged as dominant factors shaping a person's processing strategies. But little knowledge about the interaction of task factors and cue-usage strategies has emerged. Why not? One possibility is that individuals do in fact cook up their own strategy to suit each task such that there is no generality lurking out there to be discovered. More optimistically (and, probably, more realistically, given a consumer's limited capacity for cognitive work and his consequent need to order and simplify) there are a limited set of environmental variables and a limited set of "message" variables (properties of the basic input information) which induce systematic adjustments by the individual. In the case of math modeling, we have witnessed a premature and overly ambitious search for the extremely parsimonious general model; in protocol-simulation modeling, we seem to find a proliferation of special, highly idiosyncratic models. In both cases, the problem may be traced in large part to a failure to integrate the desirable qualities of each respective basic research approach with desirable qualities of another approach whose very strength is dealing systematically with generic classes of settings: experimentation.

The structure of task environments becomes relevant. In particular, since cue-usage strategies may differ in their relative attractiveness as simplifying devices (Wright, 1973c), we might be very interested in how the variables associated with the concept "information load" affect the processing strategy a consumer adopts. A general hypothesis would be that increasing information load would make simpler strategies more attractive. Information load might theoretically be considered a function of at least four variables: time available, number of alternatives to consider, number of cues per alternative, and number of extraneous cues competing for a portion of the individual's attention. Can protocol-simulation be used in an experimental program calling for manipulation of such variables?

Exploring effects of certain of those task variables under conditions where the subject is called on to constantly verbalize his thoughts may risk the internal validity of the study. Manipulations of time pressure or of extraneous distraction would probably be jeopardized by such overt, continuous, concurrent reporting. In a sense, a subject overtly talking a protocol is probably always distracting himself partially, since he is forced not only to concurrently do something extra but to introduce an audible cue into the environment which must be re-processed (if he bothers to listen to himself). Time pressure manipulations are likewise not clearcut, since the extra task of reporting itself increases perceived time pressure. Time pressure manipulations may also induce changes in the reporting itself, whereas the question of interest concerns effects on actual cue-usage.

The essence of the problem seems to be that the job of verbalizing is itself a potentially demanding information-processing activity. From the subject's perspective his overall tasks include both cue-usage to make a judgment and verbalization for the researchers. Any efforts he makes to simplify may come via either a change in cue-usage or in verbalization, since he might find either relieves his burden. (In interpreting any protocol evidence, we might constantly remind ourselves that the appearance of simple processes, such as the absence of multi-attribute tradeoffs, may be in part related to the relative difficulty of the basic task to the subject.)

The possibility a subject will react to experimentally induced IP burdens by censoring his verbalization of thoughts in some way does not rule out using protocols in experimental work. Self-censorship and reporting biases have troubled experimental work using much simpler modes of self-report for years.

Solutions have been sought by identifying types of bias (e.g., social desirability, yea-saying, etc.) and thinking up ways to distinguish them from true effects. What sort of biases might we then expect? As an example: picture a subject engaged in a task in which he has available several pieces of evidence about several candidate products and must make an eventual choice among the products. He must also protocolize. He may try one comparison strategy (e.g., a simple attitude-referral strategy in which he makes a unidimensional comparison on his "global affect" dimension), find it does not discriminate for him, immediately try another strategy, etc., until finally a choice is indicated. Will he, under moderately difficult conditions, be able to report the attitude-referral tactic which was instantly discarded and which he could, in trying to simplify, sacrifice because he assumes it is meaningless information for the researcher? In sorting out thoughts to verbalize and thoughts to ignore, we might therefore expect (a) thoughts related to "final solutions" to be overly favored by subjects and thoughts related to "preliminary" and especially "discarded" solution strategies to be underreported; and (b) as has been verified in other research contexts, subjects to form personal hypotheses about what is relevant for the researcher, based on the subject's hypothesis about the researcher's hypothesis.

As another example of how self-censorship might operate: Will we be able to distinguish a multiple-cutoff strategy from a strict lexicographic or compensatory strategy? Will the verbalizing and simplifying subject tend to report only the determinant (from his perspective) cue, such that he might note only the crucial below-cutoff cue when using a conjunctive strategy, the key above-cutoff cue (which one?) when using a disjunctive strategy, or the key contingency cue when using any configural strategy? How reluctant are subjects to report using a compensatory process in which they ultimately made comparisons on a weighted average impression? Our alertness to reporting biases in protocol collection is tied to our opportunity to observe such reporting in experimental situations; since using protocols in experimental situations has been infrequent, we can really only speculate about possible biases now.

THE SIMULATION

Simulation models of judgmental processes themselves suffer from some limitations. Simulation is of course properly viewed as quite similar to theory building in that it complements an empirical research approach. The model's behavior can be compared with actual behavior, to gain insight into both actual behavior and the model. The model can be used to discover the full implications of a system involving complex interdependencies, random processes, or extreme variable ranges. A simulation model deals with a concrete behavioral system rather than general processes; formal mathematical models generally do just the opposite. Simulations should not be expected to remain useful for too long without additional empirical research.

Bettman (1972), Reitman (1965), Uhr (1970), and others have commented on possible problems inherent in interpreting computer-based simulation models. Reitman (1965) cited two practical problems arising from the inescapable fact that the consumer whose limited IP capacity leads him to simplifying tactics is one and the same with the researcher/manager who might want to try interpreting a simulation model. Communicating the "theory" embodied in a simulation model --- giving a realistic picture of its constraints and its concrete behavioral arena --- is much more difficult than with a math model. Further, and most relevant to marketing/consumer analysis, it is usually quite difficult for someone other than the model builder or programmer to understand the theory's implications for changes within the system. Rozeboom (1972) questioned the extent to which computer-based models are dependent on the Zeitgeist in computer-theoretic software, the implication being that the model builder may try to force his theoretical ideas along the lines of his computer-programming capabilities. Bettman (1972) has noted the problems of making simulation models amenable to verification across people and across situations.

If this paper has a general theme, it is the advisability of using multiple data collection and model building methods to complement each other in eliminating the ambiguities of any single method. The role of simulation seems to lie primarily in the hypothesis formulation stage of research; that is, this is the unique contribution simulation makes above and beyond nonsimulation theory building. Simulations may be used to discover possible models, and experimentation subsequently used to study the conditions where certain models are applicable. The use of experimentation to nail down conditional hypotheses seems unavoidable due to the concrete nature of the simulation model. As Simon and March (1973) have noted, ongoing thought processes are reactions to the specific environment at hand, and a simulation model based on thought protocols may in large part be a model of that specific concrete setting. Once again the need for a taxonomy of environmental tasks to guide experimentation stimulated by simulation outputs is apparent.

SUPPLEMENTS AND COMPROMISES

What might be done to augment these two major research orientations while improving our ability to interpret evidence? The math modeler's abstention from direct reporting of cognitive events can be compromised, as can the protocol-simulation man's totally unstructured data collection.

EXTRACTING STRUCTURED "COMPONENT" JUDGMENTS

Model fitting requires that the subject provide only the single overall judgment he makes about each stimulus cue-profile. Weights are derived from the data rather than reported directly by the subject. Alternatively, it is possible to ask the subjects to report the relative weight they were attaching to each of the cues. Consumer researchers active in collecting the data required by models of attitude structure are familiar with asking for consumer self-reports on the "importance" attached to different dimensions, or the "affect" associated with certain attributes. In research on structure, the consumer is also asked what he believes the properties of the product are, but these beliefs are controlled by the researcher in most research on the judgment process.

The major question arising on the use of this approach is: to what extent are consumers sensitive to the situational adjustments in weighting or combination rules they make in adapting to time pressures, distractions, etc.? When asked, in retrospect, to indicate the relative importance of different cues, we may suspect that consumers will often respond in terms of their traditional weighting pattern rather than the pattern they actually applied in the just-finished judgment task. Self-reports of component weights may thus be quite reliable in estimating the stable system of values but somewhat less reliable where situational shifting is the focal point. As an example, studies (Hoepfl and Huber, 1970; Slovic, 1969; Hoffman, 1960; Slovic, Fleissner and Bauman, 1972) have shown that subjective reports of the relative importance of different cue dimensions in a judgment task don't match the objective evidence from statistical analyses. In particular, subjects seem to consistently overestimate the importance they attached to trivial cue dimensions; they feel they were using more dimensions systematically than they apparently were. One explanation (Shepard, 1964) is that, in trying to recall what he did over multiple judgments, a person may often recall that at some time or another he did attend to each of the different factors. If he searches back and calls to mind one instance where he attended to a car's trunk size, he will tend to report that this dimension was somewhat important to him. His reliance on that dimension was not, however, systematic. This over-reporting might be particularly troublesome in research on simplification strategies, since adjustments to reduce the amount of information handled would be of prime interest, but self-reports might obscure these.
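
Comparing the two kinds of weights is straightforward once both are expressed as relative weights: derived weights can be taken from a regression of the judgments on the cues, and self-reports can be normalized to the same scale. The judgment data and the retrospective reports below are invented; the sketch only shows the mechanics of the comparison.

```python
import numpy as np

rng = np.random.default_rng(2)
cues = rng.uniform(0, 10, size=(60, 4))
# Invented "true" usage: the subject effectively relies on the first two cues.
judgments = 0.6 * cues[:, 0] + 0.4 * cues[:, 1] + rng.normal(scale=0.5, size=60)

# Statistically derived relative weights (normalized absolute betas).
design = np.column_stack([np.ones(len(cues)), cues])
betas = np.linalg.lstsq(design, judgments, rcond=None)[0][1:]
derived = np.abs(betas) / np.abs(betas).sum()

# Invented retrospective self-reports: trivial cues receive inflated importance.
reported = np.array([0.35, 0.30, 0.20, 0.15])

for i, (d, r) in enumerate(zip(derived, reported), start=1):
    print(f"cue {i}:  derived weight {d:.2f}   reported weight {r:.2f}")
```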

RETROSPECTIVE REPORTS ABOUT THE ENTIRE PROCESS

If self-report data are to be collected, it isn't necessary to confine measures only to estimates of weights used. The subject can be asked to describe what his entire strategy was in using the evidence to make the judgments. Surprisingly, this approach has not apparently been tried very often (or, if tried, the results were so unclear that the information wasn't reported). Two data elicitation methods can be tried. The request for a description can be unstructured, leaving the subject free to respond as he sees fit to a question such as, "What tactics did you use in using the different cues to make the judgments?" Based on several pilot studies using this type of measure, the author cautions that the descriptions supplied are likely to be vague and quite ambiguous to code. Coding would, of course, be desirable if any analysis is to take place.

A more promising approach may be to create verbal descriptions of the theoretically different judgment strategies (such as those cited earlier) and to present these to the subject who has just completed a series of judgments. He is asked to read them all thoroughly and indicate those which seem to accurately describe what he was doing. Since he can indicate more than one, this approach may provide insight into combinations of tactics used (e.g., first a conjunctive approach, followed by a compensatory approach for those options still surpassing the cutoffs). Combinations of strategies offer a particularly troublesome problem to mathematical model fitting. Once again, though, the sensitivity of people to their own judgmental procedures is questionable; a measure such as this may be subject to strong normative biases.

THE STIMULUS MATERIAL

Should the researcher create nothing but factorial displays of cues, or are non-factorial cue combinations sufficient as stimulus material? This probably depends on what question the researcher is trying to answer. Anderson (1972) argues convincingly that factorial designs should be used in order to be able to infer the subjective value system of the subject. If the researcher is essentially interested in describing or measuring the subject's basic value system, then using a non-factorial design may indeed be misleading. This is because his judgments are dependent on the nature of the cue set available. If that cue set is not exhaustive (i.e., unless each possible combination is presented), any portrait of the value system may be peculiarly dependent on the unique set of combinations used by the researcher. The use of factorial cue sets is found in work by Anderson (1972) and Slovic (1969), who consequently employ the ANOVA model in analyzing judgmental activities. Interestingly, using the ANOVA approach with factorial cue sets quickly becomes unwieldy in terms of the number of judgments a subject must make. For example, a set of four dimensions with four levels each generates 256 combinations. Do we have subjects respond to all 256 of these? The anticipation of that task should itself lead to simplification tactics, plus boredom and exhaustion effects. Slovic suggests an incomplete factorial design in which the error estimates come from high-order interactions, which he is willing to assume away. Anderson uses very simple stimulus sets. Both have drawbacks.
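
The combinatorial burden is easy to verify: four dimensions with four levels each generate 4 x 4 x 4 x 4 = 256 cue profiles. The sketch below builds the full set and draws a smaller incomplete subset; simple random sampling is used here only for illustration, whereas a genuine fractional design would choose the subset to keep main effects estimable.

```python
import random
from itertools import product

levels = ["very low", "low", "high", "very high"]
full_factorial = list(product(levels, repeat=4))
print(len(full_factorial), "profiles in the full 4x4x4x4 factorial")  # 256

# An incomplete design: a random subset of manageable size. (A real
# fractional factorial would pick the subset to preserve estimability
# of main effects, not at random.)
random.seed(0)
subset = random.sample(full_factorial, 32)
print(len(subset), "profiles retained for presentation to each subject")
```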

If the researcher is interested in questions of how external environmental factors, such as time pressure, distraction, threat, etc., might systematically affect the judgment strategies adopted, it is not clear that factorials are necessary or even appropriate. In order to achieve realism, the researcher may not feel comfortable using very limited cue sets. These might suffice for studying how people handle such conditions when processing small amounts of information, but the researcher may want to create a more realistic experience for his subjects. However, using a factorial to construct the cue set balloons the number of judgments the subject must make. The key question is whether the experimental manipulation of time pressure (or whatever) can be consistently maintained over the course of 200, or even 50, judgments. If nonfactorial cue sets are used, subjects in all treatments presumably are reacting to the same stimuli, and a systematic difference in cue usage may reasonably be interpreted as evidence for a treatment effect. The possible artifact here is an interaction between the specific cue patterns and the type of adjustment made by the subjects.

OVERVIEW

The various methods which a researcher can bring to bear in studying consumer information processing offer a potpourri of trade-offs which must be considered before the research paradigm is chosen. The most obvious difference between the methods reviewed is the amount of time and trouble each requires on the part of the researcher. Fitting a mathematical model, if approached correctly, may require forethought to delineate the model, but the actual data collection procedures are relatively painless. In contrast, the elicitation and coding of protocols is very time consuming. The other techniques range between these two on a continuum of difficulty. Experimentation using any of these methods of measurement is more difficult than non-experimental data collection. In general, the more input required of the researcher, the greater the potential for sorting out ambiguities; the choice of approaches hinges, in part, on whether the researcher applies a least-effort principle himself.

One implication of this brief review seems to be that a multi-method attack on the question of judgmental tactics offers the greatest potential. Since each method seems to carry some inherent ambiguity in interpreting results, a multi-method study in which several methods validate each other could be quite useful. Direct reports of component weights or entire strategies may be compared to the relative fits afforded by theoretically rational mathematical models. If they agree, the evidence is convincing; if not, the researcher at least has fodder for more reasoning. Mathematical modeling, in particular, can profit from cross-validation with other approaches because of the "paramorphic" problem cited earlier.

The necessity for imposing a structure on the task environment as a prelude to substantive interpretations about cue-usage strategies seems apparent. The subject's total information processing environment must be realistically appreciated by the researcher. Above all, a researcher can try to empathize with the subject who is executing under the conditions set up. Is this task fatiguing? Is it involving? Confusing? Difficult? What does that imply about the relative simplicity or complexity of the strategies the subject may adopt, and the consequent interpretation of evidence? Perhaps if we made a series of multiple cue judgments ourselves, or vocalized our way through a series of shopping decisions, or made paired comparison similarities judgments for a dozen products, we would better appreciate the type of situation we actually capture in our studies.

Having opened with a quote, I will close with another: "I never publish a finding until I have measured the phenomenon by at least five different methods... I expect a fact determined this way to stand unchanged for about fifty years" (von Bekesy, Nobel Prize winner in physiology and medicine, as quoted in Teitelbaum, 1967, p. 12).

REFERENCES

Anderson, N. H. Looking for configurality in clinical judgment. Psychological Bulletin, 1972, 78, 93-102.

Bettman, J. Decision net models of buyer information processing and choice: findings, problems, and prospects. Paper presented at the ACR/AMA Workshop on Consumer Information Processing, Chicago, 1972.

Birnbaum, M. H. The Devil rides again: correlation as an index of fit. Psychological Bulletin, 1973, 79, 239-242.

Dawes, R. B. Slitting the decision maker's throat with Occam's Razor: the superiority of random linear models to real judges. Paper delivered at Seminar on Multiple Criteria Decision Making, Columbia, S.C., 1972.

Einhorn, H. The use of nonlinear, noncompensatory models in decision making. Psychological Bulletin, 1970, 73, 221-230.

Einhorn, H., Komorita, S. S., and Rosen, B. Multidimensional models for the evaluation of political candidates. Journal of Experimental Social Psychology, 1972, 8, 58-73.

Goldberg, L. Five models of clinical judgment: an empirical comparison between linear and nonlinear presentations of the human inference process. Organizational Behavior and Human Performance, 1971, 6, 458-479.

Green, B. F. Descriptions and explanations: a comment on papers by Hoffman and Edwards. In B. Kleinmuntz (ed.), Formal Representation of Human Judgment. New York: Wiley, 1968.

Groner, R. Comments. In J. R. Royce and W. W. Rozeboom (eds.) The Psychology of Knowing. New York: Gordon and Breach, 1972, 328-335.

Heeler, R. M., Kearney, M. J., and Mehaffey, B. J. Modeling supermarket product selection. Journal of Marketing Research, 1973, 10, 34-37.

Hodges, B. H. Adding and averaging models for information integration. Psychological Review, 1973, 80, 80-84.

Hoepfl, R. T., and Huber, G. P. A study of self explicated utility models. Behavioral Science, 1970, 15, 408-415.

Hoffman, P. The paramorphic representation of clinical judgment. Psychological Bulletin, 1960, 47, 116-131.

Lord, F. M. Cutting scores and errors of measurement. Psychometrika, 1967, 27, 19-30.

Newell, A. and Simon, H. A. Human Problem Solving. Englewood Cliffs, N.J.: Prentice-Hall, 1972.

Platt, J. R. Strong inference. Science, 1964, 146, 347-353.

Reitman, W. Cognition and Thought. New York: Wiley, 1965.

Rozeboom, W. W. Comments, in J. R. Royce and W. W. Rozeboom (eds.) The Psychology of Knowing. New York: Gordon and Breach, 1972, 390-397.

Runkel, P. J. and McGrath, J. M. Research on Human Behavior. New York: Holt, Rinehart, and Winston, 1972.

Shepard, R. N. On subjectively optimum selection among multiattribute alternatives. In M. W. Shelly and G. L. Bryan (eds.), Human Judgments and Optimality. New York: Wiley, 1964.

Slovic, P. Analyzing the expert judge: a descriptive study of a stockbroker's decision processes. Journal of Applied Psychology, 1969, 53, 255-263.

Slovic, P., Fleissner, D., and Bauman, W. S. Quantitative analysis of investment decisions. Journal of Business, 1972, 12, 779-799.

Teitelbaum, P. Physiological Psychology. Englewood Cliffs: Prentice-Hall, 1967.

Thurstone, L. L. Multiple Factor Analysis. Chicago: University of Chicago Press, 1947.

Uhr, L. Computer simulations of thinking are just (working, complete, big, complex, powerful, messy) theoretical models. In F. Voss (ed.), Approaches to Thought. New York: Merrill, 1970, 287-297.

Wiggins, N. and Kohen, E. Man vs. model of man revisited. Journal of Personality and Social Psychology, 1971, 19, 100-106.

Wright, P. L. Consumer judgment strategies: beyond the compensatory assumption. In M. Venkatesan (ed.), Proceedings of the Third Conference, Association for Consumer Research, 1972, 316-324.

Wright, P. L. Use of consumer judgment models in promotional strategy. Journal of Marketing, 1973(a), 37, (in press).

Wright, P. L. The harassed decision maker: time pressure, distraction, and the use of evidence. Unpublished working paper, University of Illinois, 1973b.

Wright, P. L. The simplifying consumer. Paper presented at the American Marketing Association Doctoral Consortium, East Lansing, Michigan, 1973c.

Yntema, D. B. and Torgerson, W. S. Man-machine cooperation in decisions requiring common sense. IRE Transactions on Human Factors, 1961, HFE-2, 20-96.
