The Role of Attribute Importance in Sequential Consumer Choice

ABSTRACT - Two key issues in sequential choice include deciding which attribute to acquire next if additional information is desired and integrating the newly-acquired information into some running counter of cumulative discrimination. The current work investigates the role of attribute weights in such an environment. The key findings are: 1) the Q-Sort procedure is robust in eliciting attribute ranks; 2) attributes are acquired in decreasing order of importance; 3) the fit between attribute ranks and acquisition order yielded some improvement across multiple sessions; 4) an attribute’s weight and the time spent deciding to acquire (integrate) it were negatively (positively) related.


Gad Saad (1999) ,"The Role of Attribute Importance in Sequential Consumer Choice", in NA - Advances in Consumer Research Volume 26, eds. Eric J. Arnould and Linda M. Scott, Provo, UT : Association for Consumer Research, Pages: 51-57.



Gad Saad, Concordia University

[The data was collected as part of the author's doctoral dissertation at the Johnson Graduate School of Management, Cornell University. The results of the preliminary analyses section were previously reported in the dissertation. The author thanks his doctoral committee, Professors J. Edward Russo (chair), Douglas M. Stayman, Alberto Segre and Ali Hadi, for their valuable support and mentorship. Additionally, Khalil Khoury and Darren Bicknell were dedicated research assistants during the data analysis stage. The author is indebted to four anonymous reviewers and the ACR co-chair (Eric J. Arnould) for their constructive feedback. Finally, the author acknowledges the financial support of Cornell University (doctoral fellowship), Concordia University (Faculty Research Development Program) and the Social Sciences and Humanities Research Council of Canada (New Scholar Grant).]



In a sequential multiattribute choice, an individual acquires information until sufficient cumulative discrimination has been achieved to justify a final choice. Three of the key issues facing an individual in such an environment are: 1) deciding when to terminate the search process; 2) if additional information is needed, deciding which information to acquire next; and 3) integrating the newly-acquired information. This paper solely addresses the latter two issues, namely the attribute selection and information integration stages. For work on stopping strategies (i.e., termination of the search process) in sequential multiattribute choices, see Aschenbrenner, Albert and Schmalhofer (1984), Schmalhofer et al. (1986), Bockenholt et al. (1991), Busemeyer and Townsend (1993), Hutchinson and Meyer (1994), Saad (1994), Diederich (1995), Saad and Russo (1996) and Meyer (1997).

The current work has four objectives: 1) investigate the relationship between the ordinal ranking of attributes and their acquisition order; 2) determine whether the latter relationship is invariant across experimental sessions; 3) determine whether the time spent deciding which attribute to acquire next is related to its importance weight; 4) determine whether the time spent integrating attribute information is related to the attribute’s importance weight.


1.1 Acquisition Order

Most studies that have tracked the order of acquired attributes have used informational display boards (IDBs) (e.g., Payne 1976; Jacoby, Chestnut and Fisher 1978; Dahlstrand and Montgomery 1984; see Ford et al. 1989 for a review). In general, these studies have attempted to infer the decision strategies used by subjects in reaching a decision or to investigate various characteristics of the search process including the depth, sequence, content and latency of search (for an explanation of these measures, see Ford et al.).

Within the IDB tradition, researchers have proposed many factors that might affect the attribute acquisition order. Hagerty and Aaker (1984) proposed four such factors: 1) the cost of acquiring and subsequently processing a piece of attribute information; 2) an attribute’s importance weight; 3) the correlations between the attributes; 4) the range and variance of an attribute’s values. Furthermore, they normatively argued that individuals will acquire the piece of attribute information that maximizes their Expected Value of Sample Information. Hagerty and Aaker (1984, p. 233) admit that one of the key limitations of their model is that it cannot predict a full acquisition sequence. Instead, it is solely capable of predicting the next piece to be acquired.

Behavioral scientists have carried out empirical investigations of various moderators of acquisition order. Meyer (1982) manipulated the range of possible values of two attributes to determine whether it would affect the likelihood of acquiring information on either attribute. Using a 2x2 IDB, Meyer showed subjects the information corresponding to cell (1,1). Subjects could then request only one of the three remaining pieces of information prior to making a choice. Meyer demonstrated that the likelihood of acquiring information on alternative 1, namely uncovering cell (1,2), increased as the range of attribute 2 increased. Simonson, Huber and Payne (1988) investigated the effects of the certainty and favorability of prior brand knowledge on the acquisition order. They found that information, corresponding to more uncertain and unfavorable prior brand beliefs, was acquired earlier. Aschenbrenner, Albert and Schmalhofer (1984) proposed, as one of the assumptions of the Criterion Dependent Choice Model, that attributes would be acquired in decreasing order of importance. While Aschenbrenner et al. did not explicitly test this assumption, they demonstrated that preference reversals over repeated trials were best explained via the inherent stochasticity of the acquisition order. In other words, it was argued that the attribute acquisition order was not deterministically known. Rather, there is a probabilistic likelihood as to which attribute is acquired next, based on the attribute’s importance weight. In follow-up work, Aschenbrenner et al. (1986) found empirical support for the latter postulate.

The first postulate of this paper is that, ceteris paribus, ordinal ranks are good predictors of attribute acquisition order (H1). This replicates the Aschenbrenner et al. (1984; 1986) studies with a key difference being that instead of importance weights, ranks are used here as the predictors of acquisition order. Secondly, it is posited that the fit between the elicited ordinal ranks and the acquisition order will improve as individuals become more familiar with the experimental task (H2). In other words, a learning effect is expected, resulting in improved calibration of the elicited ranks vis-a-vis the acquisition order. Many of the real-world applications of decision analysis (e.g., MultiAttribute Utility Theory) which rely on attribute importance weights and/or ranks as an integral part of the model typically use methodologies that elicit the latter measures from individuals on only one occasion. Hence, showing that the fit between ranks and acquisition order improves across repeated sessions will hopefully serve as a caveat for practitioners using such decision analysis technologies.



1.2 Time Allocation in the Attribute Selection and Information Integration Stages

Recently, Saad and Russo (1996) documented a new stopping strategy, the Core Attributes (CA) heuristic. The latter stipulates that an individual will terminate the information search process and commit to a choice once a personally preferred set of most important attributes has been acquired. Thus, if individuals are a priori aware that they will acquire a fixed number of attributes as part of their core, the acquisition order for the core attributes is somewhat irrelevant, making it sub-optimal to deliberate over it. However, should an individual acquire past his/her core set (as typically occurs when, upon having acquired the full core set, the alternatives are not very differentiated), the acquisition order takes on greater importance. In other words, ceteris paribus, the likelihood of achieving maximal differentiation between the alternatives is greater if, of the remaining non-core attributes, the more important ones are acquired first. Thus, and contrary to intuition, as attributes become less important (i.e., as individuals move from their core attributes to the non-core ones), the decision as to which to acquire next becomes a more crucial one. Accordingly, H3 posits that there exists a negative relationship between an attribute’s weight and the amount of time spent deciding to acquire it.

H4 posits that when integrating information (i.e., once it has been acquired), the more important an attribute, the greater the amount of time that will be spent attending to that information. Once again, from a differentiation perspective, the likelihood of achieving maximal differentiation is greater for more important attributes and hence the individual is likely to be more focused when integrating such information. H3 and H4 are particularly interesting because no research to date has looked at whether the amount of time spent deciding which attribute to acquire next and the time spent integrating newly-acquired information are moderated by an attribute’s importance weight.


2.1 Task

The experimental task consisted of choosing between pairs of apartments to rent for one year. The apartments were defined by 25 attributes; thus, one could request anywhere from 1 to 25 pieces of attribute information prior to making a final choice. Whenever additional information was desired, subjects could choose which attribute to acquire next from an alphabetized listing of the attributes. Requesting an additional piece of attribute information implied that both attribute values corresponding to the two competing alternatives would be shown simultaneously. Subjects participated in four separate experimental sessions and made 15 binary choices in each of the sessions.

2.2 Apparatus

The SMAC computer interface (Saad 1996) was used to implement the sequential multiattribute choice process. See Brucks (1988) for a discussion of the advantages and disadvantages of using computer interfaces in process-tracing research.

2.3 Subjects

Twenty-two (5 male) undergraduate students participated in the experiment. They were recruited on the campus of Cornell University. Subjects were paid $20 for the entire experiment (i.e., 4 sessions). None of the sessions lasted longer than 1 hour.

2.4 Procedure

Subjects were first shown an alphabetical listing of the 25 attributes with their respective ranges (see Beattie and Baron (1991) and Fischer (1995) for recent work on attribute range effects). Table 1 displays the 25 attributes. [Attribute ranges were subjectively chosen so as to minimize the likelihood of range effects. Also, in constructing the stimuli, none of the pairs of alternatives had a dominant option.] Subsequently, a Q-Sort procedure was performed to elicit their attribute ranks and weights. The latter consisted of the following steps: 1) classifying each attribute into one of five categories (very important to unimportant); 2) ranking the attributes within each category; 3) reviewing the full ordering of the 25 attributes, switching any as needed, and assigning weights (from 5 to 100) to reflect the attributes’ relative importance. [For a comparison and discussion of other weight elicitation techniques, see Stillwell, Seaver and Edwards (1981), Jaccard, Brinberg and Ackerman (1986) and Borcherding, Eppel and Von Winterfeldt (1991). Borcherding, Schmeer and Weber (1993) discuss various biases that can arise when eliciting attribute weights.]

Once the Q-Sort procedure was completed, subjects made 15 choices between pairs of competing apartments. For each of the 15 binary choices, subjects acquired one piece of attribute information at a time, updated their cumulative confidence as a result of this new information and decided whether to stop and choose the leading apartment or to acquire additional attribute information. A cumulative confidence of x in favor of an alternative meant that, based on the information acquired thus far, there was a (100-x)% chance that the preference would be reversed in light of all possible information. The lower and upper boundaries of this measure were 50 and 100 respectively, corresponding to a toss-up between a pair of apartments up to a zero chance of a preference reversal. At any point in the process, subjects could also review previously-acquired information and/or view a pictorial record of the cumulative confidence measure up to the current point in the decision.
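As a concrete reading of this scale, the reversal chance implied by a given confidence report can be sketched in a few lines of Python (a hypothetical helper written for exposition, not part of the SMAC interface):

```python
def reversal_chance(confidence):
    """Chance (in percent) that the current preference would be reversed
    in light of all possible information, given a cumulative confidence
    reported on the study's 50-100 scale."""
    if not 50 <= confidence <= 100:
        raise ValueError("cumulative confidence is bounded by 50 and 100")
    return 100 - confidence

# 50 corresponds to a toss-up (a 50% chance of reversal);
# 100 corresponds to a zero chance of a preference reversal
```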

The latter procedure was repeated in each of the four sessions, with the sole exception being that in the fourth and final session, subjects were placed under time pressure. Following the results of a pilot study, it was determined that providing the subjects with 65% of the time that they had taken to make the 15 binary choices in the third session would be appropriate. For a more detailed discussion as to why time pressure was explored, the reader is referred to Saad (1994).


3.1 Preliminary Analyses

In order to account for practice effects, the data corresponding to a subject’s first choice were removed from all analyses reported here.

3.1.1 Pearson Correlation between attribute importance and acquisition frequency.

Recall that Aschenbrenner et al. (1986) demonstrated that an attribute’s weight influenced its acquisition order. A related proposition would posit that the more important an attribute is, the more often it will be acquired across repeated sequential choices. A preliminary analysis was performed on each subject’s data to empirically test the latter proposition. For each of the 25 attributes, its weight as elicited in each of the first three sessions was summed. The data of session 4 were excluded here given the possibility of modifications in acquisition behavior due to time pressure. To create subject-specific frequencies of attribute acquisitions, the number of times that each attribute was acquired across the three sessions was summed. The mean Pearson product moment correlation coefficient between importance weights and frequencies across the 22 subjects [For subjects 3 and 14, only their data from sessions 2 and 3 were used. A portion of subject 3’s session 1 data was lost due to a faulty diskette while subject 14 inadvertently quit session 1 at the end of 7 trials.] was 0.878 (s.d.=0.081). These large correlations not only indirectly validate the use of the Q-Sort procedure as a weight/rank elicitation technique but also demonstrate that subjects worked "intelligently" and took the experimental task seriously.
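The per-subject computation described above can be sketched as follows. The Pearson coefficient is computed from first principles, and the five-attribute data are invented for illustration (the study summed weights and acquisition counts over sessions 1-3 for all 25 attributes):

```python
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical subject: importance weights summed over sessions 1-3, and
# acquisition counts over the same sessions, for 5 of the 25 attributes
summed_weights = [270, 240, 180, 120, 60]
acq_counts     = [42, 40, 30, 18, 9]
r = pearson(summed_weights, acq_counts)   # close to 1 for a "diligent" subject
```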

3.1.2 Spearman rank correlations of the attributes across sessions.

The rank/weight elicitation literature contains no studies that have attempted to validate a rank/weight elicitation technique across as many as four sessions. Stillwell, Seaver and Edwards (1981, p. 71) discuss work by Otway and Edwards (1977), who elicited importance weights on two occasions and obtained a mean test-retest correlation of 0.93. The studies that have investigated and compared various elicitation methods within a single session have typically used stimuli with fewer attributes than the 25 available here. Hence, it would be a worthwhile methodological finding if the Q-Sort procedure used here were to yield high Spearman correlations.

Each subject provided 4 sets of attribute ranks as elicited in each of the 4 sessions. As such, six Spearman rank correlations were calculated for each subject, corresponding to all possible pairwise correlations. The rank correlations for the following six pairings of sessions were obtained: 1-2, 1-3, 1-4, 2-3, 2-4, 3-4. The mean Spearman rank correlation (across the six pairings) was calculated for each subject. The latter ranged from 0.853 to 0.962 across the 22 subjects. These extremely high correlations demonstrate the reliability of the Q-Sort procedure in yielding consistent rankings.
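A minimal sketch of the six pairwise correlations, using the closed-form Spearman formula for untied rankings; the four session rankings below are hypothetical and cover only 8 attributes to keep the illustration compact:

```python
from itertools import combinations

def spearman_rho(ranks_a, ranks_b):
    # Spearman correlation for two untied rankings of the same n items:
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# hypothetical: one subject's rank orderings of 8 attributes over 4 sessions
sessions = [
    [1, 2, 3, 4, 5, 6, 7, 8],   # session 1
    [2, 1, 3, 4, 5, 6, 8, 7],   # session 2
    [1, 2, 4, 3, 5, 6, 7, 8],   # session 3
    [1, 2, 3, 4, 6, 5, 7, 8],   # session 4
]
# the six pairings 1-2, 1-3, 1-4, 2-3, 2-4, 3-4, then the subject's mean
pairwise = {(i + 1, j + 1): spearman_rho(sessions[i], sessions[j])
            for i, j in combinations(range(4), 2)}
mean_rho = sum(pairwise.values()) / len(pairwise)
```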

The means (across subjects) of the Spearman rank correlations for 1-2, 1-3, 1-4, 2-3, 2-4 and 3-4 were 0.886, 0.860, 0.863, 0.940, 0.932 and 0.949 respectively. The latter were calculated to see whether there existed any systematic differences in the Spearman correlation means across sessions. Two apparent trends were found. First, there seemed to be a temporal effect whereby the magnitude of the means depended on the temporal proximity of the sessions. The means corresponding to the rank correlations of 3-4, 2-3 and 1-2 were the first, second and fourth largest respectively. Secondly, of the 3 "temporally consecutive" correlations (i.e., 1-2, 2-3, 3-4), a strict ordering was found, namely the "later" pairs of consecutive sessions yielded larger rank correlations. This suggested that subjects’ attribute ranks stabilized as they became more knowledgeable and familiar with this particular stimulus.

3.2 Acquisition Order (H1 and H2)

Recall that the first hypothesis proposed that the ordinal ranking of attributes would be a good predictor of the acquisition sequence. A perfect validation of the latter postulate would imply that if x attributes were acquired in some given choice, the acquisition order should be R1, R2,...,Rx, where Ri corresponds to a subject’s ith most important attribute (i.e., Ri=i, i=1,...,x). Thus, in the above example, the difference between the expected ranks and the actual ranks would be zero for each of the x acquired attributes. Similarly, suppose that a subject had the following acquisition order: O1, O2,..., Ox, where Oj (j=1,...,x) corresponds to the subject’s ranking of the jth acquired attribute. One can calculate the absolute value of (Ri-Oi), for i=1,...,x, i.e., the deviation between the expected rank and observed rank for each of the x acquired attributes. Note that the value of x changes across the 14 choices given the subject’s control of the stopping point. For example, two separate trials might have resulted in the acquisition of x and y attributes respectively. As such, in the former case, abs(Ri-Oi) would be calculated for i=1,...,x, while in the latter case, the same difference is calculated y times, i.e., for i=1,...,y.

Each subject’s data were analyzed separately. The absolute value of (Ri-Oi) was calculated for each acquired attribute, across all 14 choices. For example, if a subject acquired a total of 150 attributes across the 14 choices, 150 rank differences were correspondingly calculated. Subsequently, the median deviation was obtained for each of the 22 subjects, in each of the 4 sessions. The median was used here instead of the mean for it is less sensitive to potential outliers. As such, a total of 88 (22 subjects x 4 sessions) [Recall that the session 1 data for subjects 3 and 14 were lost. As such, the median MAVD of session 1 (using the remaining 20 subjects) was used as their MAVDs.] median absolute value deviations (MAVDs) were obtained. The median MAVD (across subjects) for each of the 4 sessions is shown in Table 2. Note that all 4 median MAVDs are small, providing preliminary support for H1. A more formal test of H1 was conducted by performing the following simple linear regression, on each subject’s data (in a given session): O=a + bR. In other words, the observed ranks were regressed on the expected ones. Clearly, a perfect validation of H1 would yield a straight line which passes through the origin and makes a 45-degree angle with both the x and y axes (i.e., a=0 and b=1). Thus, 86 simple regressions were performed corresponding to the data of the 22 subjects across the 4 sessions (omitting the data of subjects 3 and 14 in session 1). The median b, a and R-Squared values (across the 22 subjects in a given session) are shown in Table 2.
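The MAVD and regression computations for a single trial can be sketched as follows (the observed acquisition order is invented for illustration):

```python
from statistics import median

def mavd(observed_ranks):
    # the i-th acquired attribute is expected to be the i-th ranked one,
    # so each acquisition contributes a deviation of |i - O_i|
    devs = [abs(i - o) for i, o in enumerate(observed_ranks, start=1)]
    return median(devs)

def ols(xs, ys):
    # simple least-squares fit of ys = a + b * xs; returns (a, b)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# hypothetical trial: the subject acquired his/her 1st, 2nd, 4th, 3rd and
# 6th most important attributes, in that order
observed = [1, 2, 4, 3, 6]
expected = list(range(1, len(observed) + 1))   # [1, 2, 3, 4, 5]
deviation = mavd(observed)        # median of [0, 0, 1, 1, 1]
a, b = ols(expected, observed)    # a perfect fit would give a=0, b=1
```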



The 86 p-values corresponding to the b=0 test were as follows: 74 (p=0.00), 4 (p=0.001), 3 (p<0.02), 1 (p<0.05) and only 4 (p>0.05). Hence, as expected, there exists a linear relationship between the expected and observed ranks. However, H1 specifically proposes that the slope of the regression should be equal to 1. As such, 86 t-tests of b=1 were performed. Fifty-two of the 86 regressions failed to reject the null hypothesis that b=1 (at the 0.05 level). The latter result, coupled with the small MAVDs, the substantial R-squared values and the intercept values (i.e., a) being close to 0, provides strong support for H1.
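Standard regression output reports the t statistic for b=0; testing b=1 merely shifts the numerator of that statistic. A sketch with invented data (six acquisitions from one hypothetical trial):

```python
from math import sqrt

def slope_t_vs_one(xs, ys):
    # Fit ys = a + b*xs by least squares, then return (b_hat, t) where
    # t = (b_hat - 1) / SE(b_hat) -- the t statistic for H0: b = 1
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    se_b = sqrt(sse / (n - 2) / sxx)
    return b, (b - 1) / se_b

expected = [1, 2, 3, 4, 5, 6]        # expected ranks
observed = [1, 2, 3, 5, 4, 6]        # invented observed ranks
b_hat, t = slope_t_vs_one(expected, observed)
# |t| is well below the df=4 critical value (~2.776), so b=1 is not rejected
```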

H2 posited that the fit between the ordinal rankings of the attributes and their acquisition order would improve across sessions, as a result of learning. Using each subject’s MAVD score in each of the 4 sessions, a repeated-measures ANOVA revealed significant differences (F=15.22, df=3/63, p=0.00). The latter result does not identify which specific sets of means are different from each other, as is specifically postulated in H2. To determine the pattern of relationships between the means, 3 matched one-tailed t-tests were performed on the MAVD data, corresponding to the 3 adjacent means. The p-values for each of the 3 pairwise t-tests were 0.001 (session 1 with session 2), 0.053 (session 2 with session 3) and 0.052 (session 3 with session 4) respectively. All three mean differences were in the predicted direction. In other words, the fit between attribute ranks and acquisition order improved in all 3 adjacent sessions. When conducting simultaneous pairwise t-tests, one needs to adjust the α level of each pairwise comparison in order to ensure that the familywise error rate is equal to the overall desired α level (Howell 1987). A procedure proposed by Dunn (1961) achieves this by proposing that the α level of each comparison be set to the overall α level divided by the number of comparisons made. In the current case, the desired α level is 0.05 and the number of comparisons is equal to 3. As such, in order to determine the significance of the above p-values, they were compared to 0.05/3=0.017. Hence, only the first mean difference (i.e., between sessions 1 and 2) is significant. It would appear that the majority of learning occurred between the first and second sessions with subsequent sessions yielding little improved calibration.
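The Dunn (Bonferroni) adjustment amounts to dividing the desired familywise alpha by the number of comparisons; applied to the three adjacent-session p-values of 0.001, 0.053 and 0.052:

```python
def dunn_significant(p_values, alpha=0.05):
    """Dunn (Bonferroni) correction: each comparison is tested at
    alpha / (number of comparisons) so that the familywise error
    rate is held at alpha."""
    per_test_alpha = alpha / len(p_values)   # 0.05 / 3 ~= 0.017 here
    return [p < per_test_alpha for p in p_values]

# the three adjacent-session comparisons (1-2, 2-3, 3-4)
flags = dunn_significant([0.001, 0.053, 0.052])
# only the session 1 vs. session 2 difference survives the correction
```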

3.3 Reaction Times (H3 and H4)

Among its process-tracing capabilities, the SMAC interface collects reaction time data, namely the time spent on each of the relevant screens. The attribute selection screen lists all 25 attributes, from which the subject chooses the next piece to acquire. On the other hand, the information integration screen not only displays the feature values of a chosen attribute but also contains the cumulative confidence bar which is updated by the subject upon viewing the feature values. Thus, the information integration time includes both the viewing of the feature values and the subsequent updating of the cumulative confidence bar.

Each acquired attribute had two reaction times associated with it, namely the number of seconds spent deciding to acquire it and the number of seconds spent integrating it. To test H3 and H4, each subject’s data were once again analyzed separately. The attribute selection and information integration times were separately regressed on the importance weights of the acquired attributes. In other words, two regression lines were calculated for each subject’s data in each of the four sessions, yielding 172 regressions in total (86 regressions for each of H3 and H4). There were 45 (out of 86) and 25 (out of 86 [The 86 regressions correspond to 22 subjects x 4 sessions minus the lost data of subjects 3 and 14 in session 1.]) significant regression slopes respectively for H3 and H4. To gauge the statistical significance of the latter two proportions, they were compared to those which would be expected to occur by chance. To determine the number of regressions that would yield a significant slope by chance, a simulation was performed using 100 randomly drawn data sets. The random simulations for H3 and H4 respectively yielded 10 and 6 significant or marginally significant slopes. Thus, the "chance" proportion was conservatively set at 0.10. Accordingly, tests of proportions revealed that both 45/86 and 25/86 were significantly greater than 0.10 (p<0.05).
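The comparison of each observed proportion against the 0.10 chance benchmark can be sketched as a one-sample test of proportions; a normal-approximation z test is assumed here, since the paper does not specify which variant was used:

```python
from math import sqrt, erf

def prop_z_test(successes, n, p0):
    # One-sample z test of an observed proportion against p0 (normal
    # approximation); returns z and the one-tailed p-value for p > p0.
    p_hat = successes / n
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    p_one_tailed = 0.5 * (1 - erf(z / sqrt(2)))   # standard normal tail
    return z, p_one_tailed

z_h3, p_h3 = prop_z_test(45, 86, 0.10)   # significant slopes for H3
z_h4, p_h4 = prop_z_test(25, 86, 0.10)   # significant slopes for H4
# both observed proportions far exceed the 0.10 chance benchmark
```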

More importantly, 39 of the 45 slopes were in the direction predicted by H3 (i.e., negative slope) whereas 19 of the 25 slopes were in the direction predicted by H4 (i.e., positive slope). [The p-values for the 39 significant slopes of H3 were as follows: 24 (p<0.01), 6 (p<0.05) and 9 (p<0.10). The p-values for the 19 significant slopes of H4 were as follows: 5 (p<0.01), 10 (p<0.05) and 4 (p<0.10).] Tests of proportions revealed that both 39/45 and 19/25 were significantly greater than 0.50 (p<0.05), whereby 0.50 corresponds to the expected proportion if positive and negative slopes were equally likely to occur. Thus, to the extent that a relationship exists between an attribute’s importance weight and its attribute selection and information integration times, they are to a large extent in the direction predicted by H3 and H4 respectively. [The latter regression analyses were also performed using separately the expected and realized ranks as the independent variable. The pattern of results was virtually the same; however, in one instance, regressing the information integration times on the expected ranks yielded weaker results.] Table 3 provides a summary of the key findings.




Strong support was obtained that ranks were a good predictor of acquisition order (H1). Given that there were no other factors that could have affected the latter order, these results are not surprising. If anything, they point to the robustness of the Q-Sort technique as a procedure for eliciting attribute ranks and to the subjects’ diligence and consistency in completing the experimental task. An interesting avenue for future research would be to determine the extent to which several other factors affect the order of acquired attributes. For example, suppose that an individual is using the CA heuristic (Saad and Russo 1996) as a stopping strategy. Assuming that several of the core attributes are prohibitively costly to acquire, will the individual persist in using the CA heuristic or will he/she alter the composition of his/her core set? In such a scenario, one might investigate the extent to which individuals will trade off importance weights and acquisition costs in determining their acquisition order.

As previously mentioned, several researchers have recently proposed models whereby it is assumed that multiattribute choices are the result of a discrimination/differentiation process. In other words, information is sequentially acquired until sufficient discrimination is achieved between the competing alternatives to justify a final choice in favor of the leading alternative. Thus, this amounts to a race between the alternatives in a consumer’s consideration set to see which will be the first to reach the desired threshold of differentiation. Under such a framework, the order of acquired attributes becomes crucial for it can either hinder or promote the likelihood of a particular alternative being chosen. For example, an alternative might be superior on an attribute that is costly to acquire, hence reducing the likelihood that the attribute will be acquired. This in turn would reduce the chance of the latter alternative being first in reaching the threshold. For example, assume that a car dealer is attempting to communicate to potential customers a particular model’s road "handling". To reduce the acquisition costs associated with a test drive, the dealer might institute a plan whereby the car would be driven to customers’ homes, thus ensuring that this attribute information is acquired earlier than might have otherwise been the case.

H2 posited that the fit between attribute ranks and acquisition order would improve across the sessions as a result of a learning effect. Interestingly, only partial support was found here. While an improvement did occur between sessions 1 and 2, no statistically significant improvement was found between subsequent sessions. It appeared that a learning plateau had been reached upon completion of the second session. One might have expected that under time pressure, the fit would worsen if only because subjects would spend less time deliberating over the exact acquisition order. However, given that the time pressure session was the fourth one, subjects had in all likelihood fully learned the experimental task. Thus, while only speculative, it appears that the improved calibration due to learning offset the time pressure effects (if any).

H3 proposed that the decision as to which attribute to acquire next progressively becomes a more difficult one and hence a negative relationship between an attribute’s importance weight and its attribute selection time was expected. Of the regressions that yielded a significant slope, 87% (39 out of 45) were in the direction predicted by H3. Note that for a given binary choice, as more attributes are acquired, the set of yet-to-be acquired attributes progressively becomes smaller. Hence, if only for the latter reason, one might have expected that the relationship between an attribute’s weight and its attribute selection time would be positive (assuming that attributes are acquired in decreasing order of importance, which is a fair assumption in light of the results of H1). Thus, obtaining a negative relationship despite the latter "set size" effect points to the robustness of the obtained results. Finally, of the regressions that yielded a significant slope between an attribute’s importance weight and its information integration time, 76% (19 out of 25) were in the direction predicted by H4.

One avenue for future research might be to investigate factors that can predict whether a subject will or will not exhibit the posited relationships of H3 and H4. As previously mentioned, Saad and Russo (1996) found that some subjects rely heavily on the Core Attributes (CA) heuristic in deciding when to stop acquiring additional information. Ceteris paribus, subjects who are strong CA users might be more likely to conform to the hypothesized relationships of H3 and H4. On a related note, one could take the data of each strong CA subject, separate his/her core attributes from the non-core attributes and perform independent regressions on each set of attributes [I am indebted to an anonymous reviewer for suggesting this possibility.] to see whether the pattern of results is different for each set. Interestingly, product expertise might very well be an important moderator here in that experts are more likely to be CA users than novices.

The use of reaction time data in process-tracing studies has been sparse at best. Recently, Saad (1998) investigated the percentage of time that individuals spent in each of the three stages of the sequential choice process (i.e., attribute selection, information integration and backtracking stages). Specifically, he showed that roughly 61.3% of the time was spent integrating newly-acquired information, 32.2% was spent deciding which attribute to acquire next and 6.5% was spent backtracking (i.e., reviewing previously-acquired information). When facing time pressure, these percentages changed to 65.1%, 31.8% and 3.1%. Thus, he demonstrated that when facing time pressure, individuals adapted their behavior via a redistribution of their time across the three stages. The current work shows a similar type of adaptation (i.e., a shift in the proportion of time that one spends in a given stage) but within a given choice rather than across conditions. Namely, the longer that a choice takes, the less time is spent integrating newly-acquired information while a greater proportion of one’s time is spent deciding which attribute to acquire next.

The attribute selection screen in the current work displayed the attributes in alphabetical order. Clearly, in the real world, information is seldom exhibited in such an orderly and repetitive format. As such, an interesting extension of the current work would be to replicate the study using a randomized listing instead. This would introduce greater mundane realism to the task, thus making the results more generalizable.


REFERENCES
Aschenbrenner, K.M., Albert, D. & Schmalhofer, F. (1984). Stochastic Choice Heuristics. Acta Psychologica, 56, 153-166.

Aschenbrenner, K. M., Bockenholt, U., Albert, D. & Schmalhofer, F. (1986). The Selection of Dimensions When Choosing Between Multiattribute Alternatives. In Current Issues in West German Decision Research, R. W. Scholz (ed.), Frankfurt: Lang, 63-78.

Beattie, J. & Baron, J. (1991). Investigating the Effect of Stimulus Range on Attribute Weight. Journal of Experimental Psychology: Human Perception and Performance, 17, 571-585.

Bockenholt, U., Albert, D., Aschenbrenner, M. & Schmalhofer, F. (1991). The Effects of Attractiveness, Dominance, and Attribute Differences on Information Acquisition in Multiattribute Binary Choice. Organizational Behavior and Human Decision Processes, 49, 258-281.

Borcherding, K., Eppel, T. & Von Winterfeldt, D. (1991). Comparison of Weighting Judgments in Multiattribute Utility Measurements. Management Science, 37, 1603-1619.

Borcherding, K., Schmeer, S. & Weber, M. (1993). Biases in Multiattribute Weight Elicitation. Fourteenth Subjective Probability, Utility and Decision Making Conference (SPUDM-14), Aix-en-Provence, France.

Brucks, M. (1988). SearchMonitor: An Approach for Computer-Controlled Experiments Involving Consumer Information Search. Journal of Consumer Research, 15, 117-121.

Busemeyer, J. R. & Townsend, J. T. (1993). Decision Field Theory: A Dynamic-Cognitive Approach to Decision Making in an Uncertain Environment. Psychological Review, 100, 432-459.

Dahlstrand, U. & Montgomery, H. (1984). Information Search and Evaluative Processes in Decision Making: A Computer Based Process Tracing Study. Acta Psychologica, 56, 113-123.

Diederich, A. (1995). A Dynamic Model for Multi-Attribute Decision Problems. In Contributions to Decision Making-I, J. -P. Caverni, M. Bar-Hillel, F. H. Barron and H. Jungermann (Eds.), Elsevier Science B.V., 175-191.

Dunn, O. J. (1961). Multiple Comparisons Among Means. Journal of the American Statistical Association, 56, 52-64.

Fischer, G. W. (1995). Range Sensitivity of Attribute Weights in Multiattribute Value Models. Organizational Behavior and Human Decision Processes, 62, 252-266.

Ford, J. K., Schmitt, N., Schechtman, S. L., Hults, B. M. & Doherty, M. L. (1989). Process Tracing Methods: Contributions, Problems, and Neglected Research Questions. Organizational Behavior and Human Decision Processes, 43, 75-117.

Hagerty, M. R. & Aaker, D. A. (1984). A Normative Model of Consumer Information Processing. Marketing Science, 3, 227-246.

Howell, D. C. (1987). Statistical Methods for Psychology. Boston: PWS Publishers.

Hutchinson, J. W. & Meyer, R. J. (1994). Dynamic Decision Making: Optimal Policies and Actual Behavior in Sequential Choice Problems. Marketing Letters, 5 (4), 369-382.

Jaccard, J., Brinberg, D. & Ackerman, L. J. (1986). Assessing Attribute Importance: A Comparison of Six Methods. Journal of Consumer Research, 12, 463-468.

Jacoby, J., Chestnut, R. W. & Fisher, W. A. (1978). A Behavioral Process Approach to Information Acquisition in Nondurable Purchasing. Journal of Marketing Research, 15, 532-544.

Meyer, R. J. (1982). A Descriptive Model of Consumer Information Search Behavior. Marketing Science, 1, 93-121.

Meyer, R. J. (1997). The Effect of Set Composition on Stopping Behavior in a Finite Search Among Assortments. Marketing Letters, 8 (1), 131-143.

Otway, H. J. & Edwards, W. (1977). Application of Simple Multiattribute Rating Technique to Evaluation of Nuclear Waste Disposal Sites: A Demonstration. Research report RM-77-31, International Institute for Applied Systems Analysis, Laxenburg, Austria.

Payne, J. (1976). Task Complexity and Contingent Processing in Decision Making: An Information Search and Protocol Analysis. Organizational Behavior and Human Performance, 16, 366-387.

Saad, G. (1994). The Adaptive Use of Stopping Policies in Sequential Consumer Choice. Unpublished doctoral dissertation, Johnson Graduate School of Management, Cornell University.

Saad, G. (1996). SMAC: An Interface for Investigating Sequential Multiattribute Choices. Behavior Research Methods, Instruments, & Computers, 28 (2), 259-264.

Saad, G. (1998). Information Reacquisition in Sequential Consumer Choice. In Advances in Consumer Research, Joel Alba and Wes Hutchinson (Eds.), Provo, UT: Association for Consumer Research, 233-239.

Saad, G. & Russo, J. E. (1996). Stopping Criteria in Sequential Choice. Organizational Behavior and Human Decision Processes, 67, 258-270.

Schmalhofer, F., Dietrich, A., Aschenbrenner, K. M. & Gertzen, H. (1986). Process Traces of Binary Choices: Evidence for Selective and Adaptive Decision Heuristics. The Quarterly Journal of Experimental Psychology, 38A, 59-76.

Simonson, I., Huber, J. & Payne, J. (1988). The Relationship Between Prior Brand Knowledge and Information Acquisition Order. Journal of Consumer Research, 14, 566-578.

Stillwell, W. G., Seaver, D. A. & Edwards, W. (1981). A Comparison of Weight Approximation Techniques in Multiattribute Utility Decision Making. Organizational Behavior and Human Performance, 28, 62-77.
