Assessing Unacceptable Attribute Levels in Conjoint Analysis

Noreen M. Klein, Virginia Polytechnic Institute and State University
ABSTRACT - Some adaptive conjoint analysis methods reduce the attribute space by allowing the respondent to state which attribute levels are completely unacceptable. Utilities are not estimated for these levels, and it is assumed in later choice simulations that respondents would never choose alternatives that possess these levels. This procedure allows a more efficient estimation of conjoint utilities, but its value depends on whether the judgments of acceptability are consistent with respondents' behavior in later choices. In the study reported here, 15 percent of all choices contained an attribute level previously designated unacceptable, indicating some inconsistency between the judgments and choices. However, the overall accuracy of choice predictions was unaffected by the initial elimination of alternatives with unacceptable levels. The practical implications of these findings, and the relationship of judgments of acceptability to decision strategies are discussed.
Noreen M. Klein (1987) ,"Assessing Unacceptable Attribute Levels in Conjoint Analysis", in NA - Advances in Consumer Research Volume 14, eds. Melanie Wallendorf and Paul Anderson, Provo, UT : Association for Consumer Research, Pages: 154-158.



Adaptive estimation techniques have greatly facilitated the use of conjoint analysis. These interactive programs customize data collection for each individual, based on an ongoing analysis of responses throughout the interview. One method that some adaptive techniques (such as Johnson's Adaptive Conjoint Analysis (ACA) program) use to reduce data collection is to ask the respondent to state which of the attribute levels in the study are completely unacceptable. Utilities are not estimated for these unacceptable levels, because it is assumed that a respondent would never choose an alternative that had one.

The advantages of this process are clear. Eliminating unacceptable levels reduces the set of parameters to be estimated for a respondent, allowing more efficient estimation of the utilities of the remaining levels. Shorter interviews should lessen respondent fatigue and produce better data. The real question is: what effect does this procedure have on the predictive accuracy of conjoint analysis? The answer depends on two factors. The first is the decision maker's ability to identify unacceptable attribute levels, as opposed to those that are merely undesirable. The second is the extent to which conjoint utilities for unacceptable levels adequately model the impact of those levels on an evaluation. These two factors are discussed in turn below.

How well can decision makers distinguish between attribute levels that are truly unacceptable and those that are simply undesirable? One difficulty is that acceptability is likely to be context dependent, rather than an inherent characteristic of an attribute level. For instance, a decision maker might refuse to consider any alternative with level xi when he knows that attractive alternatives without that level are available. However, given a choice set where some attractive features are available only on alternatives containing xi, a tradeoff might well be made. If context is critical, the decision maker's expectations about the choice context should influence what levels are judged acceptable.

When respondents are asked to judge the acceptability of attribute levels in adaptive conjoint analysis, no particular choice context is provided. In some cases, the range of the other attributes' levels is not yet known. In this situation, respondents' current perceptions of marketplace alternatives should serve as the context for their judgments. The more consistent subsequent choice simulations are with these perceptions, the more appropriate the judgments of acceptability will be. However, if later simulations contain unanticipated attribute levels, unanticipated distributions of attribute levels, or unanticipated combinations of attribute levels, then eliminating alternatives based on prior judgments of unacceptability may be premature. It seems likely then that respondents' judgments of acceptability may be least consistent with later choices in decisions involving unfamiliar products, or new attributes within a familiar product class, where expectations about context are less certain.

Given that judgments of unacceptability are consistent with later choices, will a noncompensatory model that eliminates alternatives with an unacceptable level necessarily improve predictions of choice? The reason for expecting improvement is that truly unacceptable attribute levels imply a noncompensatory process. Studies of choice processes have demonstrated that noncompensatory choice strategies are common (cf. Olshavsky and Acito 1980; Payne 1976). A choice simulation that eliminates alternatives with unacceptable attribute levels may better represent the actual choice process than the compensatory conjoint model, which allows an unfavorable attribute level to be offset by other, desirable attribute levels.

On the other hand, a closer representation of the actual choice process doesn't guarantee a better prediction of the choice outcome. Research has often shown that the results of one evaluation process may be well predicted by the model of a second, distinct process (Einhorn, Kleinmuntz, and Kleinmuntz 1979; Payne, Braunstein and Carroll 1978; Thorngate 1980), and that the linear, compensatory model is particularly robust (Dawes and Corrigan 1974). The conjoint utilities calculated for unacceptable levels may adequately represent their negative impact on an alternative. The low utilities of unacceptable levels may typically result in no alternative that possesses one being predicted as the respondent's choice. In this case, the predictive accuracy of a choice simulation will not be improved by the immediate deletion of alternatives with unacceptable levels.

The discussion above presents several issues related to the usefulness of eliminating unacceptable levels in conjoint analysis. The study reported here presents some empirical evidence related to the following questions.

(1) Are respondents' choices consistent with their prior identification of unacceptable levels; that is, do they ever choose alternatives with unacceptable levels?

(2) Does the predictive accuracy of a simulation model that eliminates alternatives with unacceptable attribute levels improve on the predictions of the standard conjoint model?



A study was done in which unacceptable attribute levels were identified, but not excluded from the estimation of utilities and subsequent choices. Respondents were interviewed about either ballpoint pens or credit card size calculators in each of two situations: selecting the product for the respondent's own use (the SELF situation) and as a gift for another person (the GIFT situation). The attributes and levels presented to respondents are shown in Table 1.




A paid convenience sample of 120 university students was used. Respondents were selected based upon a screening questionnaire which asked about their interest in owning each of a series of products (including pens and calculators) and whether they could specify a person for whom the product would make a suitable gift.

The Choice Set

The alternatives had been designed to have low inter-attribute correlations, within the constraint that they be realistic. Therefore, the best brands of each product were offered only at the two highest prices. The attribute levels that had been judged a priori to be less favorable also had a slightly lower frequency in the choice set than the more favorable levels.


Each respondent participated individually in a session whose average time was one and a half hours. In the first part of the session, two pencil and paper questionnaires were administered; the first dealt with the SELF situation, and the second with the GIFT situation. In each questionnaire Johnson's (1974) matrix tradeoff technique was used to assess utilities for the product's sixteen attribute levels. After providing rank order preferences in the 12 matrices presented, respondents were asked to check which of the attribute levels were so disliked that "if a product had it, you would immediately reject it as an alternative". It was pointed out that "You may not feel that (this is true) for any attribute level. If so, just go on to the next question." Above the column in which the unacceptable levels would be checked, it was restated that the levels "would usually cause me to reject (the product), NO MATTER WHAT ELSE IT HAD TO OFFER." The emphasis on and repetition of the instructions was intended to prevent respondents from simply checking the least liked levels, as opposed to truly unacceptable ones. The two questionnaires differed only in terms of the situation under which the judgments were made and the order in which the matrices were presented.

After filling out the questionnaires and taking a short break, the respondent made several choices. Only the two relevant to the issue at hand are described here. First, the respondent was asked to choose the product he would buy for his own use from a set of 18 product descriptions typed on index cards. As the choice was made, a concurrent protocol was collected by asking the respondent to articulate what he was thinking about during the decision task. This was tape recorded with the respondent's permission. The respondent was then asked to rate the difficulty of the choice and his or her confidence in its being the best alternative in the set.

The choice procedure was repeated, this time with the respondent selecting the product as a gift for the person designated earlier as a possible recipient. The alternatives were identical in the two decisions, except for the order in which the attributes were listed on the index cards, and the order in which the cards were stacked when presented to the respondent.

The Choice Models

Two models were used to predict choice. The first is the standard additive, compensatory model typically used in conjoint analysis (COMPENSATORY). The second (ELIMINATION) model eliminates any alternative with an unacceptable level, then uses the COMPENSATORY model to predict which of the remaining alternatives will be chosen. For cases in which all alternatives have an unacceptable level, the COMPENSATORY choice is predicted. This assumes that the best of a bad lot is chosen, as opposed to a no-buy decision.
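The two simulation rules can be sketched as follows. This is a minimal illustration, not the study's actual code; the encoding of alternatives as lists of attribute-level labels and the function names are assumptions for exposition.

```python
def compensatory_choice(alternatives, utilities):
    """Standard additive conjoint rule: pick the alternative whose
    attribute-level utilities sum to the highest total.
    `alternatives` is a list of lists of level labels; `utilities`
    maps each level label to its estimated part-worth."""
    totals = [sum(utilities[level] for level in alt) for alt in alternatives]
    return max(range(len(alternatives)), key=lambda i: totals[i])

def elimination_choice(alternatives, utilities, unacceptable):
    """ELIMINATION rule: drop every alternative containing a level the
    respondent judged unacceptable, then apply the compensatory rule to
    the survivors. If no alternative survives, fall back to the
    compensatory choice over the full set (the 'best of a bad lot'
    assumption in the text, rather than a no-buy outcome)."""
    survivors = [i for i, alt in enumerate(alternatives)
                 if not any(level in unacceptable for level in alt)]
    if not survivors:
        return compensatory_choice(alternatives, utilities)
    return max(survivors,
               key=lambda i: sum(utilities[level]
                                 for level in alternatives[i]))
```

As the text notes, the two rules diverge only when the alternative with the highest total utility contains an unacceptable level.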



Utilities were estimated using Johnson's (1975) pairwise monotone regression program. A set of utilities consists of one respondent's preferences for the product attributes in one situation. In this study, there were 120 sets of utilities for each product (60 respondents in each of two situations: SELF and GIFT). A Kendall's tau correlation of the predicted and actual matrix rank orders was used to test the utilities' goodness-of-fit; any utility set with a tau less than .70 was discarded from subsequent analyses. Three of the pen utility sets (2.5 percent) and fifteen of the calculator sets (12.5 percent) failed to meet this criterion, leaving a sample of 117 utility sets for pens and 105 utility sets for calculators.
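The goodness-of-fit screen amounts to the following check. This sketch uses tau-a (no tie correction); the paper does not say how ties were handled, so that choice is an assumption here.

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's tau (tau-a) between two rankings of the same items:
    (concordant pairs - discordant pairs) / total pairs."""
    n = len(rank_a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def passes_fit_screen(predicted_ranks, actual_ranks, threshold=0.70):
    """Retain a utility set only if the predicted matrix rank order
    correlates with the actual ranks at or above the cutoff."""
    return kendall_tau(predicted_ranks, actual_ranks) >= threshold
```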


Each verbal protocol was transcribed and analyzed for evidence of noncompensatory eliminations, which were defined as the rejection of an alternative on the basis of only one of the attribute levels it possesses. There was agreement between two independent coders in 86.3 percent of the cases where a noncompensatory elimination was identified. For details on the coding procedure and results see Klein (1986).

Unacceptable Levels

Respondents indicated that an average of 2.3 of the 16 attribute levels were unacceptable across all decisions; this is broken down by product and situation in Table 2. There was no significant difference between products in terms of the number of unacceptable levels identified. However, for calculators significantly fewer unacceptable levels were stated for the GIFT situation than for the SELF situation (p<.05). This may indicate less certainty about the recipient's preferences for calculator features. Line 2 of Table 2 shows the effective set size for the ELIMINATION model; that is, the average number of alternatives in the choice set with no unacceptable levels. On the average, removing alternatives with unacceptable attribute levels reduced the set by half. In 4 percent of all decisions, there were no acceptable alternatives at all (line 3).

Do respondents actually follow through with their assessments of unacceptable levels when they make choices? Line 4 of Table 2 shows that 15 percent of all decisions were inconsistent; that is, the chosen alternative possessed an unacceptable level. In 4 percent of the decisions, this was unavoidable, since all alternatives had some objectionable level. However, in 11 percent of the decisions respondents chose an alternative with an unacceptable level even though there were acceptable alternatives in the choice set.



If the respondents who chose inconsistently had carefully considered their judgments of acceptability, then their eventual choices must have required a serious reassessment of priorities. One would expect that in this case the decision would be more difficult than for respondents whose choices were consistent with their previous judgments. However, respondents who made inconsistent choices did not rate the decision as more difficult, or have any less confidence that they had made the best choice. It seems likely that respondents whose choices were inconsistent with their earlier judgments had simply overstated the strength of their dislike for the levels they had rated as unacceptable. The ELIMINATION model will fail to predict such choices, which leads to the question of how incorporating judgments of acceptability affects the predictive success of conjoint analysis.

Accuracy of Choice Predictions

How good are the predictions of the ELIMINATION model compared to the COMPENSATORY model usually used in conjoint analysis? The accuracy of the two models' individual choice predictions is shown in Table 3; they predict individual choice equally well. [Only about half of the first choices were predicted by either model; but with 18 alternatives there may have been several close contenders for the best alternative. Another test of predictive validity was carried out by measuring how much utility the actual choice had compared to the alternative with the highest utility. This measure is expressed as the ratio:

Ud = (Ucomp - Uactual) / (Ucomp - Umin)

where

Ucomp = the utility of the COMPENSATORY choice

Uactual = the utility of the actual choice

Umin = the lowest utility of any alternative in the set

The difference ratio will be zero if the actual choice is also the COMPENSATORY prediction of the standard conjoint model. The ratio is 1.0 if the worst alternative is chosen. The ratio Ud averaged .07 across all decisions, ranging from .05 to .10 for the four product-situation combinations. Therefore, while the first choice predictions were accurate only half the time, the actual choices were among the highest in utility in the choice set.] The results of a first choice simulation for the 18 alternatives in each product-situation are shown in Table 4. Neither model has consistently superior predictions, and the two models' errors in predicting aggregate choice tend to be in the same direction. In total, neither model has an edge in predictive accuracy for these choices.
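The ratio defined above is straightforward to compute from the alternatives' overall utilities; a minimal sketch (function and argument names are illustrative):

```python
def utility_difference_ratio(u_actual, set_utilities):
    """Ud = (Ucomp - Uactual) / (Ucomp - Umin).
    Returns 0 when the actual choice is the compensatory (highest
    overall utility) prediction, and 1 when the worst alternative in
    the choice set is chosen."""
    u_comp = max(set_utilities)  # utility of the COMPENSATORY choice
    u_min = min(set_utilities)   # lowest utility in the set
    return (u_comp - u_actual) / (u_comp - u_min)
```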





Both models correctly predict half of the individual choices. Are they making fundamentally the same predictions, or are they each predicting correctly for a different set of individuals? This distinction is important because if the COMPENSATORY and ELIMINATION models predict different individuals' choices well, it would be advantageous to search for ways to identify the more appropriate model a priori. However, if the two models made essentially the same predictions, the choice of a model for conjoint analysis is less critical. As mentioned earlier, the two models will make the same prediction when the conjoint utilities for unacceptable levels are low enough to prevent the alternatives that possess them from having the highest overall utility. The first line of Table 5 shows that the two models predicted the same choice for 82 percent of all decisions. Even though the ELIMINATION model rejects more than half of the alternatives out of hand, in only 18 percent of the decisions does it eliminate the alternative with the highest overall utility. The two models' predictions generally agree, despite the fact that the ELIMINATION model sharply reduces the choice set. Thus the models of two distinct processes produce the same outcomes, as earlier studies have shown. The accuracy of each model's predictions reveals nothing about the underlying decision process.



Despite the small sample size (N=39), it's tempting to investigate the decisions for which the two models made different predictions. Is the accuracy of a model's prediction an indicator of the decision processes that were actually used in these choices? The predictive accuracy of both models deteriorates when their predictions diverge, as shown in Table 5. The COMPENSATORY and ELIMINATION models, respectively, predict only 28 percent (N=10) and 36 percent (N=11) of the choices correctly when their predictions differ, compared to 51 percent correct when their predictions converge. What perceptions and processes are associated with the decisions each model predicts correctly? If the model fit reflects the choice strategy used, then when the COMPENSATORY model's prediction is accurate fewer noncompensatory eliminations will be found in the protocols than when the ELIMINATION model's prediction is accurate. Since compensatory processing involves more extensive evaluation of information, it's also expected that the respondent will rate the decision as more difficult when the COMPENSATORY model predicts well. The extensive evaluation is also predicted to result in greater confidence that the best choice was made.

An analysis of the protocols shows an average of 3.1 different attribute levels eliminated noncompensatorily in decisions the COMPENSATORY model predicts accurately, as opposed to 3.4 levels when the ELIMINATION model predicts correctly. The difference between these means is not significant. Although the power of the test is low, it's still clear that the underlying process for both models has a strong noncompensatory component. Even when the models made different predictions, there is little evidence that the accuracy of their predictions indicates much about the processes used in making the choice. It's interesting that the decisions predicted by the COMPENSATORY model were perceived as significantly more difficult, with a mean of 4.6 on an 8-point scale, as opposed to 2.3 for the decisions predicted by the ELIMINATION model (p=.02). Respondents whose choices were predicted only by the COMPENSATORY model were also significantly less confident about having chosen the best alternative (5.5 as opposed to 7.0 on an 8-point scale, p=.14). More detailed analysis of the protocols may provide some hypotheses about the relationship of these variables to the models' fit.


Conclusions and Recommendations

The elimination of unacceptable attribute levels from conjoint analysis is an attractive procedure for the many reasons cited earlier. The fact that in this study eliminating alternatives with an unacceptable level did not affect the accuracy of first choice predictions is encouraging. The utilities calculated for unacceptable levels effectively remove the same alternatives from contention as the ELIMINATION model does. The benefits of simplified data collection should ordinarily be sufficient justification for reducing the attribute space in this way. Two additions to the procedure are recommended. First, respondents should be given information about all attribute levels in the study before they make any evaluations of acceptability. For example, the rank ordering of attribute levels to provide a starting solution for the analysis might precede the judgments of acceptability. Second, the effects of eliminating unacceptable levels should be analyzed, including how often each attribute level is judged unacceptable, and the number of alternatives eliminated from contention in subsequent choice simulations. Both of these analyses will help to convey the impact of the procedure on the choice simulations.

There are some cases in which the risk of inaccurate identification of unacceptable levels may be greater, and a more conservative approach is desirable. For instance, a new product or an important competitor may possess an attribute level likely to be rated unacceptable, or respondents may be unfamiliar with the combinations of attribute levels available in the marketplace. Before eliminating alternatives from the analysis, more probing should be done to verify that respondents' judgments of acceptability are consistent with their willingness to make tradeoffs in later choices. For instance, the respondent could be asked to make a pairwise choice for each attribute level they had judged unacceptable. The first alternative of the pair would be composed of the unacceptable level and the respondent's most preferred level of every other attribute. The second alternative would consist of the least preferred acceptable level of each attribute. (It is assumed that both alternatives are plausible combinations of attributes.) The choice of the first alternative would indicate that the respondent will actually allow other attribute levels to compensate for the unacceptable level, and the level can be retained in the conjoint analysis. The choice of the second alternative indicates that the unacceptable level is unlikely to ever be chosen, and it can be deleted from subsequent analysis. This verification procedure should increase confidence in the appropriateness of deleting attribute levels from the conjoint analysis. It would be interesting to see if forcing such choices showed that many fewer attribute levels are truly unacceptable than are stated as such. With better screened input, the ELIMINATION model might improve on the COMPENSATORY model's predictive accuracy.
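Constructing the verification pair described above is mechanical once each attribute's acceptable levels are ordered by preference. A sketch, with illustrative data structures (the paper specifies only the logic, not an implementation):

```python
def verification_pair(attributes, unacceptable_level, attr_of_level):
    """Build the two alternatives for the proposed verification choice.
    `attributes` maps each attribute name to its *acceptable* levels,
    ordered from most to least preferred; `unacceptable_level` is the
    level under test, belonging to attribute `attr_of_level`.
    Alternative 1: the unacceptable level plus the most preferred level
    of every other attribute. Alternative 2: the least preferred
    acceptable level of each attribute."""
    alt1 = {attr: (unacceptable_level if attr == attr_of_level
                   else levels[0])
            for attr, levels in attributes.items()}
    alt2 = {attr: levels[-1] for attr, levels in attributes.items()}
    return alt1, alt2
```

Choosing alternative 1 shows the respondent will trade off the supposedly unacceptable level; choosing alternative 2 supports deleting it from the analysis.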

Study Limitations

There are several limitations to these research findings. First, both products are relatively low involvement purchases, although choices of gifts in these product classes are arguably less so. It may be that when the consequences of the purchase are slight, as with a low involvement product, fewer levels are truly unacceptable, and tradeoffs are made more readily. Also, when involvement is higher preferences may be better thought out and the identification of acceptability perhaps more consistent with subsequent choices.

It has been postulated that context has a strong effect on both judgments of acceptability and the consistency of those judgments with later choices. Context was not investigated here, and these findings may not generalize to other choice contexts (e.g. with a greater number of levels per attribute, stronger correlations among attributes, or a less even distribution of an attribute's levels in the choice set). A systematic examination of the effects of both particular choice contexts and the accuracy of the decision maker's expectations about context is desirable. The procedure followed in this study allowed respondents substantial experience with the attribute levels before they judged the acceptability of each one. The quality of the acceptability judgments in this study may be better than for studies where respondents judge without knowing the levels of all other attributes.

The use of verbal protocols during the decision could be a problem if they were intrusive, altering the nature of the decision processes and outcomes. However, Ericsson and Simon's (1980) review of the available evidence on how concurrent protocols change the process being reported concluded that there seemed to be no effect on the process when two conditions are satisfied. The first is that the information reported is already encoded verbally; the second is that there is no instruction for the respondent to monitor the processes for specific events. Both of these conditions held in the current study.

Finally, only first choice predictions were made in this study. In marketing research, utilities are sometimes rescaled to reflect strength of preference. A respondent's "share of preference" for each alternative is then calculated, based on the amount of overall utility for each alternative. Predictions of share of preference should be more sensitive than first choice predictions to the elimination of unacceptable levels, since first choice predictions are affected only if the alternative with the highest utility has an unacceptable level.
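One common share-of-preference rule allocates share in proportion to each alternative's overall utility. The paper does not commit to a specific rescaling, so the proportional (BTL-style) rule below is one assumed illustration; it requires the rescaled utilities to be positive.

```python
def shares_of_preference(set_utilities):
    """Allocate each alternative a share proportional to its overall
    utility (Bradley-Terry-Luce style rule). Assumes utilities have
    been rescaled to be positive."""
    total = sum(set_utilities)
    return [u / total for u in set_utilities]
```

Under such a rule, deleting an alternative with an unacceptable level shifts its entire share to the survivors, so share predictions react to the elimination even when the first choice prediction does not.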

Further Research

The elimination of unacceptable attribute levels from conjoint analysis was evaluated by the consistency of judgments of acceptability with later choices, and by the procedure's impact on predictive accuracy. There's no evidence here that these eliminations worsened prediction. However, up to 21 percent of the choices made in an specific decision (calculator - SELF) did contain an unacceptable level. It's also possible that respondents whose choices were consistent with their judgments might not be so given sufficient incentive: an attractive alternative with an unacceptable level. There should be further testing of this procedure with various products and choice sets.

The verbal protocols may also yield more information about the reliability of the judgments. For instance, the protocols might show whether an unacceptable level was deliberately traded off, regardless of the eventual decision. It would also be possible to assess whether the treatment of an attribute level was consistent throughout the decision, or seemed contingent on the other levels it was paired with in an alternative.

From a practical perspective, it's sufficient to ask whether the predictive accuracy of conjoint analysis is changed by the process of eliminating unacceptable attribute levels. However, there are many intriguing questions about how a decision maker's anticipated choice strategy relates to the actions taken in a choice task. Are the levels identified as unacceptable those actually eliminated in a noncompensatory choice process? An initial examination of the protocols shows that 32 percent of the respondents eliminated alternatives due to attribute levels they had previously judged acceptable, while at the same time retaining alternatives with supposedly unacceptable levels. How strongly do noncompensatory decision strategies reflect judgments made prior to the choice, as opposed to dynamic responses to a particular choice context that is encountered? A better understanding of such decision processes may eventually result in a model combining compensatory and noncompensatory choice processes that improves our ability to predict choices.


Dawes, Robyn M. and Bernard Corrigan (1974), "Linear Models in Decision Making," Psychological Bulletin, 81, 95-106.

Ericsson, K. Anders and Herbert A. Simon (1980), "Verbal Reports As Data," Psychological Review, 87, 215-251.

Johnson, Richard M. (1974), "Trade-off Analysis of Consumer Values," Journal of Marketing Research, 11, 121-127.

Johnson, Richard M. (1975), "A Simple Method for Pairwise Monotone Regression," Psychometrika, 40, 163-168.

Klein, Noreen M. (1986), "An Investigation of Utility Directed Cutoff Selection," unpublished working paper, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061.

Olshavsky, Richard W. and Franklin Acito (1980), "An Information Processing Probe into Conjoint Analysis," Decision Sciences, 11 (July), 451-470.

Payne, John W. (1976), "Task Complexity and Contingent Processing in Decision Making: An Information Search and Protocol Analysis," Organizational Behavior and Human Performance, 16, 366-387.

Payne, John W., Myron L. Braunstein, and John S. Carroll (1978), "Exploring Predecisional Behavior: An Alternative Approach to Decision Research," Organizational Behavior and Human Performance, 22, 17-44.

Thorngate, Warren (1980), "Efficient Decision Heuristics," Behavioral Science, 25, 219-225.