Does Task Complexity Or Cue Intercorrelation Affect Choice of an Information Processing Strategy? An Empirical Investigation

Michael Reilly (student), The Pennsylvania State University
Rebecca H. Holman, The Pennsylvania State University
ABSTRACT - This experiment was a 3 by 2 factorial design intended to explore the effects of task complexity and cue correlation on the selection of a processing strategy by consumers placed in a choice situation. Undergraduate marketing students (n = 176) were asked to select an automobile from a set of automobiles rated on seven attributes. They were presented with either 2, 6 or 10 automobiles (Factor 1) whose attribute designations were based either on actual product ratings or on randomized ratings (Factor 2). The choice of a processing strategy was measured by the structured protocol technique.
Citation: Michael Reilly and Rebecca H. Holman (1977), "Does Task Complexity or Cue Intercorrelation Affect Choice of an Information Processing Strategy? An Empirical Investigation," in Advances in Consumer Research Volume 4, ed. William D. Perreault, Jr., Atlanta, GA: Association for Consumer Research, 185-190.


THEORETICAL BASIS

Information Processing

Consumer behavior researchers have recently become interested in research that focuses on understanding the human decision process. This research in marketing has been subsumed under the rubric of information processing. The impetus behind such interest can largely be traced to the desire on the part of marketers to develop a basic understanding of the decision process of the consumer, in the hope that effective and efficient marketing strategy can be predicated on such an understanding. Study of human information processing originated in the study of human clinical judgment (Dawes, 1971, 1972; Einhorn, 1970, 1971; Goldberg, 1968, 1970, 1971; Hoffman, 1960, 1968) and in the attempts to program computers to simulate human problem solving (Newell, Shaw & Simon, 1958; Newell and Simon, 1972; Simon and Newell, 1970). For recent reviews of this research and its applicability to marketing problems see Bither and Ungson (1975), and Jacoby & Chestnut (1976).

PROCESSING STRATEGIES

One of the major thrusts of this research stream has been the development of a taxonomy of the methods which an individual can use to combine and integrate cues present in the decision situation to arrive at a final choice. These strategies have come to be known as information processing strategies and can be divided into three groups: compensatory, non-compensatory, and sequential.

Compensatory Strategies

In the linear compensatory strategy the individual is seen as summing or averaging the alternative ratings for each of the attributes to arrive at a final rating for each of the alternative stimulus objects. He then chooses the stimulus object having the highest rating overall. Thus, the judgment is based on the combined ratings of the cues. One weak attribute in a stimulus object may be compensated for by strength in another attribute. In an empirical sense this model has been operationalized in a regression equation similar to the multi-attribute model.
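The linear compensatory rule described above can be sketched in a few lines of code (a minimal sketch, not from the paper; the car ratings and importance weights below are hypothetical):

```python
# Linear compensatory rule: score each alternative by a weighted sum of
# its attribute ratings and choose the highest total, so a weak
# attribute can be offset by a strong one. All data here are hypothetical.

def compensatory_choice(alternatives, weights):
    """Return the name of the alternative with the highest weighted sum."""
    def score(ratings):
        return sum(w * r for w, r in zip(weights, ratings))
    return max(alternatives, key=lambda name: score(alternatives[name]))

cars = {
    "A": [7, 5, 8],   # ratings on three attributes, 1 = worst, 10 = best
    "B": [9, 2, 9],
    "C": [6, 6, 6],
}
weights = [0.5, 0.3, 0.2]  # hypothetical importance weights
print(compensatory_choice(cars, weights))  # B: 4.5 + 0.6 + 1.8 = 6.9, the highest total
```

Note that B wins despite its weak second attribute, which its strong first and third attributes compensate for.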

Non-Compensatory Strategies

Non-compensatory strategies, on the other hand, do not allow the desirability or undesirability of one cue to be balanced by another cue. The three primary non-compensatory strategies that have been identified are the conjunctive, the disjunctive, and the lexicographic. In conjunctive processing, the decision maker is seen as establishing a cutoff point for each attribute. If the alternative's attributes exceed all cutoff points, it is seen as satisfactory and is hence chosen (Coombs, 1964; Dawes, 1971, 1972; Einhorn, 1970, 1971). The disjunctive strategy is similar to the conjunctive model in that the decision maker also constructs a set of attribute cutoffs. The difference is that a disjunctive processor chooses an alternative which possesses an attribute that exceeds at least one of these cutoffs. Lexicographic processing is done when the individual rank orders the attributes on an importance dimension and chooses the product with the highest rating on the most important attribute; if discrimination is not possible, the decider breaks the tie based on the second most important attribute, and so on until only a single alternative remains (Tversky, 1972).
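The three rules just described can be sketched as follows (a minimal illustration with hypothetical ratings and cutoffs, not data from the study):

```python
# Three non-compensatory decision rules applied to alternatives rated
# 1-10 on several attributes. All ratings and cutoffs are hypothetical.

def conjunctive(alternatives, cutoffs):
    # keep alternatives that exceed the cutoff on EVERY attribute
    return [n for n, r in alternatives.items()
            if all(x > c for x, c in zip(r, cutoffs))]

def disjunctive(alternatives, cutoffs):
    # keep alternatives that exceed the cutoff on AT LEAST ONE attribute
    return [n for n, r in alternatives.items()
            if any(x > c for x, c in zip(r, cutoffs))]

def lexicographic(alternatives, importance_order):
    # compare on the most important attribute; break ties with the next
    candidates = dict(alternatives)
    for attr in importance_order:
        best = max(r[attr] for r in candidates.values())
        candidates = {n: r for n, r in candidates.items() if r[attr] == best}
        if len(candidates) == 1:
            break
    return list(candidates)

cars = {"A": [7, 5, 8], "B": [9, 2, 9], "C": [6, 6, 6]}
print(conjunctive(cars, [5, 4, 7]))    # ['A']: only A exceeds every cutoff
print(disjunctive(cars, [8, 5, 8]))    # ['B', 'C']: each exceeds at least one
print(lexicographic(cars, [0, 2, 1]))  # ['B']: best on the most important attribute
```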

Sequential Strategies

Sequential processing strategies are commonly characterized by the operation of more than one of the "pure" strategies described earlier. For example, an individual in a choice situation may decide to first eliminate some of the alternatives in a conjunctive manner and then choose from the alternatives remaining by use of a compensatory strategy. The sequential strategies considered in this research are: Conjunctive-Compensatory, Disjunctive-Compensatory, Conjunctive-Disjunctive, Disjunctive-Conjunctive in which the individual first limits the set with the first strategy and then makes a final choice based on the second strategy. Henceforth these strategies will be abbreviated thusly: Comp, Lex, Disj, Conj, etc.
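A Conj-Comp sequence, for instance, might look like this (a sketch with hypothetical data; the cutoffs and weights are illustrative only):

```python
# Conj-Comp sequential rule: first screen out alternatives conjunctively
# (each survivor must exceed every attribute cutoff), then choose among
# the survivors with a weighted compensatory score. Data are hypothetical.

def conj_comp(alternatives, cutoffs, weights):
    survivors = {n: r for n, r in alternatives.items()
                 if all(x > c for x, c in zip(r, cutoffs))}
    return max(survivors,
               key=lambda n: sum(w * x for w, x in zip(weights, survivors[n])))

cars = {"A": [7, 5, 8], "B": [9, 2, 9], "C": [6, 6, 6]}
# B is screened out (attribute 2 fails its cutoff); A beats C on weighted sum.
print(conj_comp(cars, [5, 3, 5], [0.5, 0.3, 0.2]))  # A
```

The first stage limits the set, as described above, and the second stage makes the final choice.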

One of the factors that has inhibited the effectiveness of research measuring choice of a processing strategy has been the lack of an adequate measuring device to access the workings of the human mind in a choice situation. Attempts have been made to measure this with an information-display-board procedure (Jacoby, Speller and Kohn-Berning, 1973, 1974a, 1974b), where the decision maker is presented with a board containing a number of cues about a number of products and asked to select the cues required to make a decision. This research dealt primarily with the effects of information load on decision quality. Another method is the use of an actuarial method to determine post hoc the model which best represents the decision process of the consumer (Einhorn, 1970, 1971; Goldberg, 1968, 1970, 1971).

A third method has been the use of protocols where the decider is asked to recreate the decision process he went through in making the choice (Bettman, 1972, 1974; Wright, 1974a, 1974b, 1975). A problem with this method is that there may be tendencies on the part of the subjects to simplify, censor, or rationalize the descriptions of what they did. A fourth technique which shows promise in alleviating the problems of the others is what has been termed the structured protocol method. In this method the decider is presented with descriptions of the processing strategies under investigation and asked to specify the one which most closely resembles what he used (Park, 1976). Accordingly, the latter method was used in this experiment to assess the dependent variable, choice of a processing strategy.

FACTORS AFFECTING THE CHOICE OF A PROCESSING STRATEGY

The consideration of which factors of the decision environment affect the choice of a processing strategy is not new to marketing. Wright (1974a, 1974b, 1975) has considered the effect of such factors as the desire to simplify, the desire to optimize, information load, time constraints, distraction, and cue interrelationship. Park (1976) considered product familiarity and product complexity. Sheth and Raju (1974) also suggest that product or product class familiarity may determine the relevant processing strategy. The research conducted here was designed to determine the effect of two of these variables, task complexity and cue intercorrelation.

Task Complexity

Task complexity is very similar to the related concept of information load. Information load can be defined in terms of the amount of information that a person is handling per unit of time. Information load is seen as a function of four major variables (Bither and Ungson, 1975): the number of options, the number of cues per option, the number of distracting cues, and the amount of time available for the decision. Since in this situation there is no time constraint, the variable is not information load but will instead be called task complexity, operationalized in terms of the number of brands present in the choice situation. Given a time constraint, higher task complexity would be the same as higher information load.

Research in marketing on task complexity has generally supported the finding that increasing the complexity of the task results in a decrease in the optimality of the decision (Jacoby et al., 1973, 1974, 1975; Wright, 1974a), although there have been criticisms of these studies on methodological and conceptual grounds (Summers, 1974; Wilkie, 1974). Note that none of this research has focused on the actual relationship between task complexity and processing strategy. However, it has explored the relationship indirectly if one makes the assumption that increased task complexity results in the choice of a different and presumably simpler processing strategy, which in turn may affect the optimality of the choice.

Park (1976) measured the effect of product complexity, defined in terms of the number of attributes the decision maker rated as greater than four in importance on a seven-point scale from "most important to me" to "not important at all to me." Park found that product complexity had an effect on the choice of a strategy. However, because subjects self-selected their experimental groups it was not possible to infer causality. The most that can be said is that the greater the perceived complexity of the decision, the better the correlation between the compensatory and conjunctive models and the actual choice. While product complexity and task complexity are not equivalent, it is possible to explore the causal link between complexity and processing strategy in the current design.

Cue Intercorrelation

The inclusion of cue intercorrelation as a factor in the design is based on a controversy that developed in the clinical judgment literature. Einhorn (1970) used the actuarial technique based on logarithmic transformations of the data to assess the use of conjunctive and disjunctive strategies in two tasks, ranking applicants to graduate school and rank-ordering jobs in terms of preferences. He selected cue configurations designed to accentuate the use of these strategies. He found that for many subjects the conjunctive model fit better than the linear model. Goldberg (1971) attempted to replicate these findings using actual clinical judgments and actual patients' MMPI scores and was unable to replicate Einhorn's results, finding overwhelming evidence that the linear model fit most judges better. Goldberg suggests that Einhorn's findings may be attributable to cue selection: in constructing his cue configurations, Einhorn may have reduced the intercorrelations that exist between the cues in reality.

The second factor in the design presented here was meant to test Einhorn's conclusion. The use of actual automobile ratings in one case ensures that any intercorrelations that exist in reality are also present in the choice task. The random assignment of these cues in the second case guarantees that these correlations will not exist. Thus, if cue correlation is a factor in the use of a conjunctive strategy, we would expect a higher percentage of conjunctive processing in the random case.

METHODOLOGY

A standardized survey instrument was administered to 176 students in introductory marketing and consumer behavior courses. Eight of the returned instruments contained unusable responses, leaving a sample of 168 subjects. In completing the survey task a subject was instructed to examine descriptions of automobiles, each of which was rated on seven attributes. These attributes were:

front and rear seat comfort

ride with full load

interior noise level

normal and emergency handling

braking power

repair and maintenance cost

combined mileage rating

Attributes were selected on the basis of their inclusion in ratings made by Consumer Reports, on the assumption that salient attributes were used in such an evaluation. After examining the descriptions (attributes were rated on a 1-10 scale, with 1 indicating the worst evaluation and 10 the best), subjects were asked to rank order the automobiles on a best-worst scale. Then subjects were asked to indicate the overall acceptability of each automobile.

Finally, subjects were asked to examine a list of structured protocols which described 8 decision-making strategies. These descriptions are presented in Table 1. The following instructions were then given to subjects:

Try to remember exactly how you chose the car you did (your first choice) then read the descriptions and answer the following questions.

Does one of the descriptions exactly match the way you chose?

If yes, which one?

If no, which one came closest?

Each subject received one of the six experimental treatments. Three levels of task complexity, and two methods for assigning attributes to each automobile brand were used.

TABLE 1

STRUCTURED PROTOCOLS DESCRIBING INFORMATION-PROCESSING STRATEGIES

Task complexity was operationally defined as the number of automobiles a subject evaluated. Experimental tasks of 10, 6, and 2 stimulus-objects were used here. Automobile brands, identified only by a letter cue to avoid possible a priori bias elicited by use of brand names, were presented to subjects in random order.

Attributes were assigned to stimulus objects in two ways to test the hypothesis that cue intercorrelations affect the decision strategy reported. For half of the survey instruments, "real" automobiles were described, as derived from ratings found in Consumer Reports. Stimulus-objects whose cues contained the intercorrelations found in actual cars were referred to as the "regular" group. For the other half of the survey instruments, attribute ratings found in the first group were randomly reassigned. This group was called the "random" group.
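The random reassignment can be pictured as independently permuting each attribute column across brands, which preserves each attribute's marginal distribution while destroying cross-attribute correlation (a sketch with hypothetical ratings, not the instrument's actual data):

```python
# Destroying cue intercorrelation by randomly reassigning each
# attribute's ratings across brands. Each column (attribute) is shuffled
# independently, so the set of ratings per attribute is unchanged but
# any correlation between attributes is broken. Data are hypothetical.
import random

def randomize_cues(alternatives, seed=0):
    rng = random.Random(seed)
    names = list(alternatives)
    n_attrs = len(next(iter(alternatives.values())))
    out = {n: [] for n in names}
    for j in range(n_attrs):
        column = [alternatives[n][j] for n in names]
        rng.shuffle(column)          # permute this attribute across brands
        for n, v in zip(names, column):
            out[n].append(v)
    return out

regular = {"A": [7, 5, 8], "B": [9, 2, 9], "C": [6, 6, 6]}
print(randomize_cues(regular))
```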

The complete data matrix is presented in Table 2. As can be seen, the reported exactness with which a protocol was perceived to describe the decision-making process was recorded. If the subject stated that the protocol description exactly matched the decision process used, the subject was counted as an "exact" for the strategy chosen. If the subject stated that the protocol description was only the "closest" description, the subject was counted as a "modified" for the strategy. Thus, sixteen different model choices were defined.

TABLE 2

FREQUENCY OF REPORTED MODEL CHOICE FOR EXPERIMENTAL CONDITIONS

DATA ANALYSIS

The data, being categorical in nature, may be analyzed with the use of loglinear models. This technique, explained in depth in Ku and Kullback (1974) and Bishop, Feinberg and Holland (1975), was recently brought to the attention of marketers by LaGarce (1974). The data were therefore conceptualized in the following manner.

Three variables were defined.

A= model choice

B= task complexity

C= cue intercorrelations

Levels of each variable were:

A1 = Exact Conj-Comp      B1 = 10 stimulus objects

A2 = Exact Lex            B2 = 6 stimulus objects

A3 = Modified Lex         B3 = 2 stimulus objects

A4 = Exact Comp           C1 = Regular cue intercorrelation

A5 = Modified Comp        C2 = Random cue intercorrelation

As can be seen, the modified Conj-Comp, exact and modified Disj-Comp, and modified Conj models were dropped from the contingency table analysis because of excessive zero frequencies in their cells.

A general linear approach was used in the contingency table analysis. The data were thus evaluated in a manner analogous to an analysis of variance with fully metric data. The obtained χ² can be seen as a measure of the degree to which the posited model fits the observed data. When terms are systematically dropped from the model, the point at which a significant χ² appears signals the point at which a critical term has been omitted from the model.

Ten models were used in this analysis. The obtained χ² values and concomitant probabilities are presented in Table 3. As can be seen, for the Y=E model (which tests the hypothesis of an equal distribution of frequencies across all cells) there is a significant χ². The conclusion must be that, while neither cue intercorrelation nor task complexity affected the choice of an information processing strategy in this experiment, the choice of a strategy was not a random process.
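The equal-distribution (Y=E) test amounts to comparing the observed cell frequencies against a uniform expectation; a minimal sketch with hypothetical frequencies (not the study's data):

```python
# Chi-square against a uniform expectation: chi2 = sum((O - E)^2 / E),
# where E is the mean cell count. A large value rejects the hypothesis
# that frequencies are equally distributed across cells, i.e. that
# strategies were chosen at random. Frequencies here are hypothetical.

def chi_square_uniform(observed):
    expected = sum(observed) / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

counts = [40, 12, 30, 6, 12]  # hypothetical counts across five strategies
print(round(chi_square_uniform(counts), 2))  # 41.2, well beyond the chi2(4) critical value
```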

DISCUSSION

The results of this study support the conclusion that task complexity and cue intercorrelation did not alter the choice of a processing strategy by subjects placed in this decision situation. This is indicated by the relatively small increase in the χ² values when the AC and AB terms are eliminated from the models.

TABLE 3

OBTAINED χ²'S FOR EACH LINEAR MODEL TESTED

The effect of the elimination of AC (the term representing the effect of cue intercorrelation on the dependent variable, choice of a processing strategy) is seen by comparing χ² values for the model which includes AC and the corresponding model which excludes it. The information contributed by AC is determined by computing the increase in χ² when AC is eliminated: the χ² value for the model which includes AC is subtracted from the χ² value for the model which excludes AC, resulting in an information value of 4.83 for AC. In a similar manner, the information value of each term can be computed; these values are shown in Table 4.

TABLE 4

The effect of task complexity on processing strategy choice is more marked but still not significant. This effect, represented in the model by the AB term, can be assessed by comparing the χ² value of the model including AB with that of model iv. As can be seen in Table 4, the AB term contributes 15.54 to the total information. Table 3 reveals that the probability associated with this term is .228. Thus task complexity accounts for more of the "variance" in the data than any other term, with the exception of A, the dependent variable.

This is also reflected in the larger χ² values for the models which omitted the processing strategy-task complexity interaction (the AB term).

The importance of the processing strategy-task complexity interaction in these data seems to be due to a slight tendency for individuals in the more complex task situations to report a "modified lexicographic" processing strategy, and for individuals in the less complex task situations to report an "exact compensatory" strategy. However, these inferences are only tentative, since the data are by no means conclusive on these points.

The processing strategy most frequently chosen was lexicographic, regardless of the level of task complexity or the cue intercorrelation, with compensatory a distant second (see Table 5).

TABLE 5

NUMBER OF SUBJECTS SPECIFYING EACH STRATEGY

This finding is an interesting contrast to the previously reported finding of Park. The reasons for this may have to do with the fact that Park only measured four strategies, which were: compensatory equal weighting, compensatory unequal weighting, conjunctive and disjunctive. He found for automobiles that compensatory strategies were more predictive of the order of preference. However, it is possible that some of this effect was due to the non-inclusion of the lexicographic strategy in his analysis. Park's study was designed to measure the preference ranking of various products based on the functioning of these four strategies. Since the use of a lexicographic strategy leads only to one choice rather than a complete ranking of alternatives the non-inclusion of a lexicographic strategy should not be seen as a shortcoming of that study. The following may partially explain the difference between Park's results and the results of this study.

In the unequal weighting compensatory model the attribute ratings are weighted differently in terms of the importance of that attribute to the decider. Therefore the attribute which is weighted most heavily is the most important attribute. If a decision maker does process lexicographically, it seems likely that the unequal weighted compensatory model would provide a reasonable simulation of the choice, and would certainly fit the choice better than any of the other alternatives Park tested. It should be possible for Park to test such a hypothesis with the data he has collected by looking only at the first choice of his subjects and creating an expected first choice based on each of the strategies, as well as one based only on the product rating highest on the most important attribute. In this way it should be possible to determine which strategies predict brand choice best. One conclusion that can be drawn is that the lexicographic strategy may be far more important than was previously thought, regardless of task complexity or cue intercorrelation. This may also explain part of the success of multiattribute attitude models in the prediction of choice behavior, because a multiattribute model is similar to the linear compensatory model. The managerial implication of this finding is clear. If a significant portion of the consumers in a given market segment process lexicographically, the promotional strategy for that segment must concentrate on improving the brand's image on the most important attribute.

The finding that task complexity is only weakly related to choice of a processing strategy, at least in this situation, would seem to indicate that perhaps a reassessment of the other variables that have been postulated to affect processing strategy choice is desirable.

It appears that perhaps the importance of product familiarity should be reemphasized and empirically determined. The finding that the level of cue intercorrelation does not appear to affect the choice of a processing strategy by increasing the percentage of deciders who process either conjunctively or disjunctively is an interesting one in light of the debate between Einhorn and Goldberg cited earlier. On the surface it would appear that the results of this study do not support the interpretations that Goldberg gave for Einhorn's results. However, it is also possible that the results obtained here were due to the limitations of this study rather than to a lack of power of cue intercorrelation as an explanatory variable. Because the decision makers in this study were required to make a decision only once, rather than a number of times as in the Einhorn and Goldberg studies, one alternative explanation can be offered for the lack of effects due to the correlation of the cues. If the correlations exist in the minds of the subjects based on their past experience with the product, then it is highly unlikely that one exposure to a set of such products in which the correlations did not exist would be enough to change their standard processing mechanism. Thus, it would require a number of trials in which these correlations were not present to reveal the actual nature of the relationship between the correlation of the cues and the choice of a processing strategy.

It appears that the structured protocol has promise as a method for determining the variables that affect the choice of a processing strategy. Of the 168 subjects, 82 (48.8%) reported using strategies exactly matching those described by the protocols. But because the structured protocols were not explicitly pretested, the dominant choice of the lexicographic model in the results may be due to the attractive description of that model. Subjects may have been influenced by the language used to describe each model rather than by the content of the description. Nevertheless, it appears that the structured protocol technique can be a valuable tool in future research on information processing.

REFERENCES

James Bettman, "Decision Net Models of Buyer Information Processing and Choice: Findings, Problems, and Prospects.'' Paper presented to the ACR/AMA Workshop in Consumer Information Processing, University of Chicago, 1972.

James Bettman, "Toward A Statistics for Consumer Decision Net Models." Journal of Consumer Research, 1 (June, 1974), 71-80.

Yvonne M. M. Bishop, Stephen E. Feinberg, and Paul W. Holland, Discrete Multivariate Analysis: Theory and Practice, (Cambridge, Mass: MIT Press, 1975).

Stewart W. Bither and Gerardo Ungson, "Consumer Information Processing Research, An Evaluative Review," Working Series in Marketing Research, No. 28, (April, 1975), The Pennsylvania State University.

Consumer Reports, 39(December, 1974).

Clyde H. Coombs, Theory of Data, (New York: Wiley, 1964).

Robyn M. Dawes, "A Case Study of Graduate Admissions: Application of Three Principles of Human Decision Making." American Psychologist, 26(February, 1971), 180-188.

Robyn M. Dawes, "Slitting the Decision Maker's Throat With Occam's Razor: The Superiority of Random Linear Models to Real Judges." Paper delivered at Seminar on Multiple Criteria Decision Making, Columbia, S. C., 1972.

Hillel Einhorn, "The Use of Nonlinear, Noncompensatory Models in Decision-Making," Psychological Bulletin, 73 (March, 1970), 221-230.

Hillel Einhorn, "Use of Nonlinear, Noncompensatory Models As A Function of Task and Amount of Information." Organizational Behavior and Human Performance, 6(January, 1971), 1-17.

Lewis R. Goldberg, "Simple Models or Simple Processes? Some Research On Clinical Judgments," American Psychologist, 23(July, 1968), 483-496.

Lewis R. Goldberg, "Man vs. Model of Man: A Rationale, Plus Some Evidence for A Method of Improving On Clinical Inferences." Psychological Bulletin, 73(June, 1970), 422-432.

Lewis R. Goldberg, "Five Models of Clinical Judgment: An Empirical Comparison Between Linear and Non-Linear Representations of the Human Inference Process." Organizational Behavior: Human Performance, 6(June, 1971), 458-479.

Paul Hoffman, "The Paramorphic Representation of Clinical Judgment," Psychological Bulletin, 57(March, 1960), 116-131.

Paul Hoffman, "Cue Consistency and Configurality In Human Judgment." In B. Klernmutz (Ed.). Formal Representation of Human Judgment, (New York: Wiley, 1968).

Jacob Jacoby and Robert W. Chestnut, "Consumer Information Processing: Theory and Empirical Findings," presented at the Symposium on Consumer and Industrial Buying Behavior, University of South Carolina, March 24-26, 1976.

Jacob Jacoby, Donald Speller and Carol Kohn, "Brand Choice Behavior As A Function of Information Load: Study I." Journal of Marketing Research, 11 (February, 1974a), 63-69.

Jacob Jacoby, Donald Speller and Carol Kohn, "Brand Choice Behavior As A Function of Information Load, Replication and Extension," Journal of Consumer Research, 1 (June, 1974b), 33-42.

Jacob Jacoby, Donald Speller and Carol Kohn, "More Happiness: Constructive Criticism and Programmatic Research, Round 2." Purdue Papers in Consumer Psychology, No. 145, 1974c.

Harry H. Ku and Solomon Kullback, "Loglinear Models In Contingency Table Analysis," The American Statistician, 28(November, 1974), 115-122.

Raymond LaGarce, "An Analysis of Second-Order Interaction In Multidimensional Contingency Tables," Journal of Marketing Research, 11 (August, 1974), 343-345.

Allen Newell, J. C. Shaw and Herbert Simon, "Elements of A Theory of Human Problem Solving," Psychological Review, 65(May, 1958), 151-166.

Allen Newell and Herbert Simon, Human Problem Solving, (Englewood Cliffs, Prentice-Hall, 1972).

C. Whan Park, "The Effect of Individual and Situation-Related Factors On Consumer Selection of Judgmental Models," Journal of Marketing Research, 13(May, 1976), 144-151.

Jagdish Sheth and P. S. Raju, "Sequential and Cyclical Nature of Information Processing Models In Repetitive Choice Behavior." In Scott Ward and Peter Wright (Eds.). Advances in Consumer Research, Vol. I, Urbana, Illinois: Association for Consumer Research, 1974, 348-358.

Herbert Simon and Allen Newell, "Human Problem Solving: The State of the Theory In 1970," American Psychologist, 26(February, 1971), 145-159.

John Summers, "Less Information Is Better?" Journal of Marketing Research, 11 (November, 1974), 467-468.

Amos Tversky, "Elimination by Aspects: A Theory of Choice," Psychological Review, 79(July, 1972), 281-299.

William Wilkie, "Analysis of Effects of Information Load." Journal of Marketing Research, 11 (November, 1974), 462-466.

Peter Wright, "The Harassed Decision Maker: Time Pressure, Distraction, and the Use of Evidence," Journal of Applied Psychology, 59(October, 1974a), 555-561.

Peter Wright, "Consumer Choice Strategies: Simplifying vs. Optimizing." Journal of Marketing Research, 12 (February, 1975), 60-67.

Peter Wright, "The Use of Phased Noncompensatory Strategies In Decision Between Multiattribute Products," Research Paper 223, Graduate School of Business, Stanford University, August, 1974b.
