Peter Wright and Peter Rip (1978), "The Use of the Active Profile Evaluation Paradigm in Studying Consumer Judgment Processes," in Advances in Consumer Research Volume 05, eds. Kent Hunt, Ann Arbor, MI: Association for Consumer Research, Pages: 578-580.

Advances in Consumer Research Volume 5, 1978      Pages 578-580

THE USE OF THE ACTIVE PROFILE EVALUATION PARADIGM IN STUDYING CONSUMER JUDGMENT PROCESSES

Peter Wright, Stanford University

Peter Rip (student), Stanford University

A number of methods have recently been developed in the study of consumer information processing. In this paper we review how the Active Profile Evaluation paradigm has been used as one method of inquiry. We will also attempt to outline alternative applications of the paradigm in the study of consumers' judgment processes.

The paradigm requires the decision maker (DM) to rate or rank order several real or hypothetical alternatives. The profiles are generally composed of a few (2-6) cues, with product dimensions generally referring to tangible characteristics. The analysis then proceeds to relate variations in a consumer's preferences to variations in the product characteristics. A variety of estimation procedures are available, all yielding an index of goodness- or poorness-of-fit and estimates of the relative value the DM assigned to the various attribute levels. Within this paradigm, researchers have asked a number of questions. The most popular issue has been the comparison of relative fits of various mathematical representations of evaluative strategies. The criteria of fit have generally been multiple correlation coefficients, stress values, and cross-validation coefficients. Since this is an individual-level method of analysis, one generally examines these statistics using cross-sectional designs. One such approach would be to ask whether systematic individual differences or situational and psychological factors (e.g. risk, distraction) lead to better fits for certain non-linear integration processes. Examples of this type of analysis are papers by Einhorn (1971) and Wright (1974; Scott and Wright, 1976).
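The estimation step just described can be sketched as follows. The cue names, levels, and "policy" weights below are invented for illustration, and ordinary least squares stands in for the family of estimation procedures mentioned.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical task: 12 profiles described on three cues (say price, quality,
# and warranty), each cue taking levels 1-3; the DM rates every profile.
profiles = rng.integers(1, 4, size=(12, 3)).astype(float)
true_weights = np.array([0.6, 0.3, 0.1])                  # invented "policy"
ratings = profiles @ true_weights + rng.normal(0, 0.05, size=12)

# Relate preference variation to cue variation with a linear additive model.
X = np.column_stack([np.ones(12), profiles])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
weights = coef[1:]

# Goodness-of-fit index: the squared multiple correlation.
pred = X @ coef
r2 = 1 - np.sum((ratings - pred) ** 2) / np.sum((ratings - ratings.mean()) ** 2)

# Relative importance the model attributes to each cue.
rel_importance = np.abs(weights) / np.abs(weights).sum()
```

The two outputs correspond directly to the two kinds of quantities the paradigm yields: an index of fit (the squared multiple correlation) and estimates of the relative value the DM assigned to each cue.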

The general linear model (OLS, ANOVA) has been the predominant technology associated with this paradigm (Slovic and Lichtenstein, 1971). Recent studies in marketing have also used non-metric estimation methods (e.g. linear programming [Parker and Srinivasan, 1976; Pekelman and Sen, 1974] and conjoint measurement [Green and Wind, 1973]). One consequence of these technologies has been the remarkable robustness of linear additive models of judgment, naive theories notwithstanding. Much of the early research focused upon the detection of nonlinearities in judgment processes (Einhorn, 1971; Slovic, 1969; Slovic, Fleissner, and Bauman, 1972). The apparent robustness of the arithmetically simple, but cognitively complex, linear additive models now seems to be more an artifact of the model than any real congruence with mental processes (Birnbaum, 1973; Einhorn and Hogarth, 1975; Wainer, 1976).

The search for appropriate statistical representations of choice processes naturally invites comparisons of alternative models of the same behavior. A number of formal approximations of alternative choice strategies have been developed (e.g. Einhorn [1971]). Typically, the approach has been to estimate the parameters of a variety of different models and select the one with the lowest mean squared error or stress as the best representation. This criterion of selection has been severely criticized (Birnbaum, 1973) because it artificially favors certain model forms.
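The lowest-error selection rule, and the artifact it invites, can both be sketched with invented data. Here the DM's true rule multiplies two cues; least squares stands in for the estimation machinery. Selecting on mean squared error correctly prefers the multiplicative form, yet the misspecified additive model still correlates highly with the judgments, which is the kind of result behind the criticism noted above.

```python
import numpy as np

rng = np.random.default_rng(1)
cues = rng.uniform(1, 5, size=(20, 2))
# Hypothetical DM whose true rule multiplies the two cues.
ratings = cues[:, 0] * cues[:, 1] + rng.normal(0, 0.5, size=20)

def fit(X, y):
    """Least-squares fit; return (mean squared error, squared multiple R)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    mse = np.mean(resid ** 2)
    r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    return mse, r2

mse_add, r2_add = fit(cues, ratings)                        # additive model
mse_mult, r2_mult = fit(np.column_stack([cues, cues[:, 0] * cues[:, 1]]),
                        ratings)                            # + multiplicative term

# The multiplicative form wins on MSE, but the additive model's fit is high
# enough to look acceptable if inspected in isolation.
```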

Aside from the mere inspection of multiple correlations or cross-validation coefficients of alternative models, two related validation methodologies have developed in the search for the "appropriate" paramorphic representation (Hoffman, 1960). Neither approach has been applied extensively in the consumer behavior literature, for reasons which will be discussed later. One method is based upon the axiomatic analysis of choice rules (Krantz and Tversky, 1971; Luce and Tukey, 1964). This method reduces a choice rule to a set of unique axioms, which can then be tested by inspection of the data. The analysis is diagnostic, rather than statistical, searching for the alternative models which best account for the empirical results. The principal limitation of this method of assessment is the lack of an error theory (Krantz and Tversky, 1971) which allows us to distinguish minor perturbations in the data from systematic deviations. An alternative, but related, method of validation is Anderson's (1974) functional measurement technology. The details of the method are available from a variety of sources (Anderson, 1974; Slovic and Lichtenstein, 1971). The method, essentially, achieves a monotonic rescaling of the dependent variable such that the postulated model can usually be tested in an ANOVA framework. Although Anderson and colleagues have applied the technology across a number of tasks, there have been few applications of functional measurement in the consumer behavior literature. Exceptions are the work of Bettman, Capon, and Lutz (1975) and Troutman and Shanteau (1976).
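The logic of the additivity test in this framework can be illustrated numerically: under an additive rule, a factorial table of mean ratings should show no interaction (the rows are parallel), which is what the ANOVA interaction test assesses. The cell means below are invented for illustration.

```python
import numpy as np

# Hypothetical 3x3 factorial design: two attributes at three levels each,
# with the mean rating per cell (rows = levels of attribute A, columns = B).
cell_means = np.array([[2.0, 3.1, 4.0],
                       [3.0, 4.0, 5.1],
                       [4.1, 5.0, 6.0]])

# Decompose each cell into grand mean, row effect, and column effect; what
# remains is the interaction residual, which an additive rule predicts to be
# zero and which the ANOVA interaction test evaluates against error variance.
grand = cell_means.mean()
row_eff = cell_means.mean(axis=1, keepdims=True) - grand
col_eff = cell_means.mean(axis=0, keepdims=True) - grand
interaction = cell_means - grand - row_eff - col_eff
```

For these (nearly parallel) rows the interaction residuals are small, consistent with an additive rule; large, patterned residuals would point toward a non-additive representation.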

Aside from the question of model comparisons, the weight and utility estimates derived from the procedure can be used as indirect measures of the factors which drive behavior. In more familiar terms, we can either ask consumers for their attitudes directly or supplement their self-reports with indirect or derived estimates of those attitudes. These indirect methods of assessment may often be more reliable than more obtrusive techniques. Application of the active profile evaluation paradigm in these situations has been less common, but is potentially more powerful. Specifically, if one's research pertains to certain aspects of consumers' product evaluation strategies, the weights and utilities are directly relevant. When estimated weights or utilities, rather than global fits, serve as the dependent variables, it becomes less critical that those weights be derived from a model which is a "true" description of the process. The research emphasis shifts from absolute fit to between- or within-subject differences in weights or utilities. Applied in this way, the active profile evaluation paradigm is simply a measurement technique for tapping cognitive events. Our concern for the reliability and validity of the measures obtained should parallel the concern we show over these issues with respect to other techniques.

One of the most interesting issues concerns a person's self-insight with respect to his or her own judgment processes. Evidence from a variety of sources (Cook and Stewart, 1975; Nisbett and Wilson, 1977; Scott and Wright, 1976; Slovic, Fleissner, and Bauman, 1972) suggests that subjects tend to report that their behavior was influenced by more cues than the small subset their judgments actually reflect. In cases of disagreement between derived parameters and subjective reports, one looks for systematic patterns in the deviations. The more thorough one's tests of the reliability and validity of the model-fitting analysis, the more confident one feels in attributing systematic differences to reporting biases. Observed discrepancies between self-reports and derived parameters have inspired further analyses of the source of the reporting bias and the meaning of "relative importance" to naive judges (Cook and Stewart, 1975; Scott and Wright, 1976).
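A sketch of this comparison, with all numbers invented: derived weights from the model fit are set against the same consumer's directly stated importance ratings, and both the ordering agreement and the spread of the weights are examined. The flat reported weights relative to the derived ones mirror the kind of systematic deviation the self-insight studies describe.

```python
import numpy as np

# Hypothetical comparison of model-derived weights with the same consumer's
# direct self-reports of attribute importance (both normalized to sum to 1).
derived = np.array([0.55, 0.25, 0.12, 0.08])    # weights from the model fit
reported = np.array([0.30, 0.27, 0.23, 0.20])   # stated relative importance

def ranks(x):
    """Rank the values (1 = smallest); ties are not handled in this sketch."""
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(1, len(x) + 1)
    return r

# Spearman rank correlation between the two importance orderings.
d = ranks(derived) - ranks(reported)
n = len(derived)
rho = 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))

# Here the ordering agrees perfectly, yet the reported weights are spread
# far more evenly than the derived weights imply: a systematic pattern, not
# random noise, in the deviation between report and model.
spread_derived = derived.max() - derived.min()
spread_reported = reported.max() - reported.min()
```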

Until now we have discussed self-insight with respect to relative weights among dimensions. However, we may also use the estimated utilities within a dimension to assess how well consumers can recall preference orderings along that dimension. Interrogating consumers about relative preferences in this way, rather than asking for introspective reconstructions of cognitive processes, may prove to be a very fruitful area of research. It allows us to examine directly the prevalence of various cutoff-type strategies in consumer choice. For example, we might expect that someone with accurate self-insight about his or her use of absolute cutoffs would demonstrate certain non-linearities within that dimension.
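One way such a cutoff-induced non-linearity would surface, sketched with invented data: dummy-coding the levels of a dimension and estimating a separate utility for each level exposes an absolute cutoff as a discontinuity in the estimated within-dimension utilities.

```python
import numpy as np

rng = np.random.default_rng(2)
levels = np.tile(np.arange(4), 10)          # price level 0 (best) .. 3 (worst)
quality = rng.uniform(0.0, 1.0, size=40)    # a second, compensatory cue

# Hypothetical DM using an absolute cutoff: any profile at the worst price
# level is rejected outright, regardless of its quality.
ratings = np.where(levels == 3, 0.0, 5.0 - levels + 2.0 * quality)

# Dummy-code the price levels and estimate a separate utility for each.
dummies = (levels[:, None] == np.arange(4)[None, :]).astype(float)
X = np.column_stack([dummies, quality])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
level_utils = coef[:4]

# The cutoff shows up as a discontinuity: the drop from level 2 to level 3
# dwarfs the roughly equal steps among the acceptable levels.
steps = -np.diff(level_utils)
```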

Often the research question may concern the effects of specific treatments upon generic aspects of the product evaluation strategy. Examples would be its complexity or degree of unidimensionality, or the presence of certain risk preferences or aversions. The first question may be addressed by comparing the relative weights that consumers gave to their two most important dimensions (Wright and Weitz, 1977), or the variance across the set of weights. The second question may be addressed by comparing gaps between the utilities of specific levels of a dimension. For example, one recent study found that women showed much larger decreases in utilities as the chance of negative outcomes increased modestly. This effect was particularly strong for those consumers for whom the outcome seemed imminent relative to when it seemed more distant.

A related question of interest to consumer researchers is how DMs account for others' preferences when making their own choices. Clearly, self-insight and awareness of others' utility functions are relevant to this point.

The actor-observer distinction (Jones and Nisbett, 1971; Ross, 1977) could serve as a useful model in studying how we assess others' utilities. The study of how we judge and integrate others' preferences is one area of basic consumer research which may be potentially fruitful.

A question related to the general problem of assessing others' utility functions is the degree to which group judgments reflect the preferences of the individual members. One approach to this question is to examine the correspondence between the individual members' utilities and those of the group outcome as derived from a group rating or ranking of hypothetical alternatives. While this method does not offer direct evidence concerning group decision making processes, it does offer a means of examining members' impact in terms of measures of net effect, rather than process-oriented criteria such as the number of influence attempts. Indeed, one of the major advantages of the active evaluation paradigm is its ability to complement process-oriented measures (e.g. protocols, eye movements) with rigorous behavioral measures of outcomes.

A final highly promising application of the active profile evaluation paradigm is in assessing the effects of alternative persuasive messages or other types of information displays upon consumers' evaluation strategies. If alternative messages and information displays impact upon such things as cue weights, relative or absolute attention, cutoff usage, attribute utilities, complexity of choice strategy, and accuracy of interpersonal assessment of utilities, the active evaluation of profiles promises to be one useful way of tracking these effects.

While the active profile evaluation paradigm is quite flexible and a potentially attractive measurement technique, one limitation seems noteworthy. The paradigm requires multiple judgments in order to derive reliable parameter estimates. Often the number of judgments can be reduced by the use of fractional designs; however, ten to thirty judgments may still be necessary for reliable parameterization. There may therefore be a strong task effect, biasing subjects toward simpler rules and candidate-wise processing, so the evidence produced may not generalize well beyond situations in which people make a number of judgments in a limited period of time. A researcher can control the time span over which the succession of judgments is made, thereby replicating a setting in which rapid or highly dispersed judgments are made. Perhaps ingenuity and flexible use of this essential task factor will allow us to minimize the bias it introduces.
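The design economy offered by fractionation can be illustrated concretely (a generic sketch, not tied to any particular study): four attributes at three levels each generate 81 distinct profiles, but an orthogonal one-ninth fraction of 9 profiles still permits estimation of the main-effect weights.

```python
import numpy as np
from itertools import product

# Full factorial for four attributes at three levels each: 3**4 = 81
# profiles, far more judgments than one consumer can reasonably supply.
full = np.array(list(product(range(3), repeat=4)))

# An orthogonal one-ninth fraction (9 profiles) built from linear
# combinations over GF(3): every pair of columns still contains each of the
# nine level pairs exactly once, so main-effect weights remain estimable.
fraction = np.array([[a, b, (a + b) % 3, (a + 2 * b) % 3]
                     for a in range(3) for b in range(3)])
```

The price of the reduction is the usual one: interactions are confounded with main effects, which is acceptable only if one is willing to assume an additive rule.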

In conclusion, the versatility of the active profile evaluation paradigm should not be ignored. We continue to recommend that a hypothesis worth testing is worth testing in several ways, and the outputs from this paradigm should be viewed from this perspective. Furthermore, the paradigm provides important confirmatory behavioral evidence in a task which is fully compatible with other consumer choice technologies.

REFERENCES

Anderson, Norman H., Information Integration Theory: A Brief Survey, in D. H. Krantz, R. C. Atkinson, R. D. Luce, and P. Suppes (eds.), Contemporary Developments in Mathematical Psychology, Vol. II, San Francisco: W. H. Freeman, 1974.

Bettman, James R., Noel Capon, and Richard J. Lutz, Cognitive algebra in multi-attribute models, Journal of Marketing Research, 12, (May, 1975), 151-164.

Bettman, James R., Noel Capon, and Richard J. Lutz, Multi-attribute measurement models and multiattribute attitude theory: a test of construct validity, Journal of Consumer Research, 1, (March, 1975), 1-15.

Birnbaum, Michael H., The Devil Rides Again: correlation as an index of fit, Psychological Bulletin, 79, 1973, 239-242.

Cook, Richard C., and Thomas R. Stewart, A comparison of seven methods for obtaining subjective descriptions of judgment policy, Organizational Behavior and Human Performance, 13, (February, 1975), 31-45.

Einhorn, Hillel J., Use of nonlinear, non compensatory models as a function of task and amount of information, Organizational Behavior and Human Performance, 6, (January, 1971), 1-22.

Einhorn, Hillel J., and Robin M. Hogarth, Unit Weighting Schemes for Decision Making, Organizational Behavior and Human Performance, 13, 1975, 171-192.

Green, Paul and Yoram Wind, Multiattribute Decision Making in Marketing, Englewood Cliffs: Prentice Hall, 1973.

Hoffman, Paul J., The Paramorphic Representation of Clinical Judgment, Psychological Bulletin, 57, 1960, 116-131.

Jones, E. E. and R. Nisbett, The Actor and the Observer: Divergent Perceptions of the Causes of Behavior, in E. E. Jones et al. (eds.), Attribution: Perceiving the Causes of Behavior, Morristown, NJ: General Learning Press, 1971.

Krantz, David and Amos Tversky, Conjoint-Measurement Analysis of Composition Rules in Psychology, Psychological Review, 78, 1971, 151-169.

Luce, R. D. and J. W. Tukey, Simultaneous Conjoint Measurement: a new type of fundamental measurement, Journal of Mathematical Psychology, 1, 1964, 1-27.

Nisbett, R. and T. D. Wilson, Telling More Than We Know: verbal reports on mental processes, Psychological Review, 84, 1977, 231-259.

Parker, Barnett R. and V. Srinivasan, A Consumer Preference Approach to the Planning of Rural Primary Health Care Facilities, Operations Research, 24, 1976, 991-1025.

Pekelman, Don and Subrata K. Sen, Mathematical Programming Models for the Determination of Attribute Weights, Management Science, 20, (April, 1974), 1217-1229.

Ross, Lee, The Intuitive Psychologist and his Shortcomings: Distortions in the Attribution Process, in L. Berkowitz (ed.), Advances in Experimental Social Psychology, Vol. 10, New York: Academic Press, 1977.

Scott, Jerome E. and Peter Wright, Modeling an Organizational Buyer's Product Evaluation Strategy: Validity and Procedural Considerations, Journal of Marketing Research, 13, (August, 1976), 211-224.

Slovic, Paul, Analyzing the Expert Judge: A Descriptive Study of a Stockbroker's Decision Processes, Journal of Applied Psychology, 53, (August, 1969), 255-263.

Slovic, Paul, Don Fleissner, and W. Scott Bauman, Analyzing the use of information in investment decision making: A methodological proposal, Journal of Business, 45, (April, 1972), 283-301.

Slovic, Paul and Sarah Lichtenstein, Comparison of Bayesian and Regression Approaches to the Study of Information Processing in Judgment, Organizational Behavior and Human Performance, 6, (November, 1971), 649-744.

Troutman, C. Michael and James Shanteau, Do Consumers Evaluate Products by Adding or Averaging Attribute Information?, Journal of Consumer Research, 3, 1976, 101-106.

Wainer, Howard, Estimating Coefficients in Linear Models: It Don't Make No Never Mind, Psychological Bulletin, 83, (March, 1976), 213-217.

Wright, Peter, The Harassed Decision Maker: Time Pressures, Distractions, and the Use of Evidence, Journal of Applied Psychology, 59, (November, 1974), 555-561.

Wright, Peter and Barton Weitz, Time Horizons and Product Evaluation Strategies, Journal of Marketing Research, 1977, in press.
