# A Composite MCI Model for Integrating Attribute and Importance Information

[To cite]:

Lee G. Cooper and Carl T. Finkbeiner (1984), "A Composite MCI Model for Integrating Attribute and Importance Information," in NA - Advances in Consumer Research Volume 11, ed. Thomas C. Kinnear, Provo, UT: Association for Consumer Research, Pages: 109-113.

[Direct URL]:

http://acrwebsite.org/volumes/6225/volumes/v11/NA-11

For a single aggregate choice situation, in which there are not enough degrees of freedom to estimate market share model parameters in a typical fashion, a composite MCI model is used to integrate consumers' ratings of the importance of each attribute with their ratings of the possession of each attribute by each brand. The composite MCI is compared with three alternative models, including the traditional linear additive composite. All composite models have two parameters and (m-2) degrees of freedom for m brands. The composite MCI produces the only model with statistically significant calibration fit. The composite MCI produces the forecast of actual test market share with the smallest root mean squared error and the closest forecast of the test brand share. A three-parameter composite MCI model is presented which provides the most accurate forecast of test market shares.

INTRODUCTION

Multiplicative competitive interaction (MCI) models attain the degrees of freedom needed for estimating parameters primarily by the existence of many choice situations (e.g. homogeneous groups or geographic regions) in which choice probabilities are influenced by the same set of explanatory variables. A standard MCI model postulates that the choice probabilities have constant sensitivity to the explanatory variables over choice situations, so that a single set of parameters is all that is required. If many choice situations exist, extended versions of MCI models can be specified in which the explanatory variables have differential effect on choice alternatives or parameters can be made specific to particular choice situations.

The concern of this analysis is with the other extreme. What can be done when there is only a single aggregate choice situation and there are approximately as many relevant explanatory variables as there are choice alternatives? In such situations there are few, if any, degrees of freedom available for estimating parameters. Even if parameters can be estimated, the model is likely to be so under-determined that researchers should have little confidence that results will cross-validate or be useful in practical forecasts. The basic alternative available in a consumer research situation is to use subjective judgments of the importance of brand attributes instead of statistical weights.

The notion of using subjective judgments in the place of statistical weights immediately raises a series of issues. Nisbett and Wilson (1977) question whether individuals have reliable access to information on their personal affective states. The important and persuasive demonstration by Wright and Rip (1981) that individuals have the access which is required allows us to address other aspects of the issues. Are models which ignore importance ratings altogether better than those which incorporate subjective judgments of importance (cf. Bass and Wilkie, 1973 and Wilkie and Pessemier, 1973)? This sometimes gets confused with the issue of whether unit weighting schemes are just as useful in forecasting as statistical weighting schemes (Wainer, 1976). Schmitt and Levine (1977) and Summers, Taliaferro and Fletcher (1970) discuss issues surrounding statistical and subjective weights. Schoemaker and Waid (1982) provide an excellent summary of many of the issues as well as valuable comparisons of different methods for determining weights for additive utility models. Their discussion serves also to underscore that the issue of subjective versus statistical weights has been debated primarily with respect to the use of linear additive models, with dependent measures which were attitudinal, evaluative or preferential. Dependent measures such as choice probabilities or market shares are rarely, if ever, employed. Multiplicative models have not been used for comparison.

We believe the task of the researcher in this area is to propose a model for the integration of judgments of attribute possession with judgments of attribute importance. The implicit assumption of past research that the integration is accomplished by linear additive models, combined with reliance on dependent measures less demanding than choice probabilities, has led to an undervaluing of subjective reports of what consumers feel is important.

A Composite MCI Model

A multiplicative competitive interaction model for a single choice situation is:

$$p_j = \frac{\prod_h f(X_{hj})^{b_h}}{\sum_k \prod_h f(X_{hk})^{b_h}} \qquad (1)$$

where p_j is the probability of choosing alternative j, f(X_{hj}) is a positive ratio scale function of explanatory measure h on choice alternative j, and b_h is a parameter for the sensitivity of p_j to this explanatory variable. This is a deterministic version in that no specification errors or sampling errors have yet been incorporated. For the interval scale measures we typically collect in consumer research there is the additional consideration of how to transform such measures into positive ratio scales. Cooper and Nakanishi (1983) advocate the use of zeta-squared:

$$\zeta^2_{hj} = \begin{cases} 1 + z_{hj}^2, & z_{hj} \ge 0 \\ (1 + z_{hj}^2)^{-1}, & z_{hj} < 0 \end{cases} \qquad (2)$$

where z_{hj} is the traditional standard score for X_{hj}. In the current analysis we are taking the next step and substituting a function of the subjective report of the importance of an explanatory variable (e.g. an attribute) for the parameter b_h one would estimate in many other contexts.

For the initial composite model we are advocating here, we propose that the relative importance, expressed in standard score terms, be the function of subjective importance we use in the place of b_h. This strong assumption, which we relax later, has not been considered in the research on multiattribute attitude or choice models. If we let b_h be a measure of the importance of an attribute, then the relative importance used in the composite can be represented as

$$w_h = \frac{b_h - \bar{b}}{s_b} \qquad (3)$$

where \bar{b} is the mean and s_b is the standard deviation of the subjective reports of attribute importance over the attributes considered.
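Both transformations are easy to compute. A minimal sketch in Python (the function names are ours; population standard deviations, dividing by n, are assumed since the paper does not specify):

```python
import math


def zeta_squared(x):
    """Zeta-squared transform (Cooper & Nakanishi 1983): map interval-scale
    values onto a positive ratio scale via their standard scores.
    Assumes the population (divide-by-n) standard deviation."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    z = [(v - mean) / sd for v in x]
    return [1 + zi * zi if zi >= 0 else 1 / (1 + zi * zi) for zi in z]


def standardized_importance(b):
    """Relative importance in standard-score terms: (b_h - mean) / sd."""
    n = len(b)
    mean = sum(b) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in b) / n)
    return [(v - mean) / sd for v in b]
```

Note that zeta-squared is asymmetric by design: above-average attribute values stretch away from 1 while below-average values compress toward 0, keeping the scale strictly positive.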

There is one consideration when forming a composite MCI which does not arise when estimating b_h. It deals with whether zeta-squared or its square root, zeta, is the best transformation of the interval scale measures. As mentioned in Cooper and Nakanishi (1983), zeta refers more directly to the scale of measurement of the original variables, just as standard deviations are in the units of the original measures and variances are in squared units. When parameters are estimated there is no distinction to be drawn, since the parameter values estimated for a model using zeta will just be half the parameter values for a model using zeta-squared. In a composite model, however, we can introduce a rescaling term a_1 and determine from tests on its value whether zeta or zeta-squared is more appropriate.

As with all MCI models we have a choice whether to compute geometric means or estimate an intercept. Noting that the observed choice probability, p_j, differs from the population value, and using the transformations discussed in Nakanishi and Cooper (1982), the composite form

$$p_j = \frac{\prod_h (\zeta^2_{hj})^{a_1 w_h}}{\sum_k \prod_h (\zeta^2_{hk})^{a_1 w_h}} \qquad (4)$$

can be estimated from the regression form

$$\log p_j = a_0 + a_1 \sum_h w_h \log \zeta^2_{hj} + e_j \qquad (5)$$

The parameter a_0 replaces the need to log-center all measures -- a_0 is minus the log of the denominator in (4). The stochastic disturbance term e_j represents specification error, measurement error and multinomial sampling error. For any set of m brands in a single choice situation, equation (5) represents a model with (m-2) degrees of freedom. This model is referred to as Model 1.
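Because the composite collapses all attributes into a single predictor, Model 1 can be calibrated with a simple bivariate regression of log probabilities on the importance-weighted sum of log zeta-squared values. A hypothetical sketch (data layout and names are ours):

```python
import math


def calibrate_model1(p, zeta_sq, w):
    """Fit log p_j = a0 + a1 * sum_h w_h * log(zeta2_hj) by ordinary
    least squares.  p: m choice probabilities; zeta_sq[h][j]: transformed
    attribute h for brand j; w: standardized importance weights."""
    m = len(p)
    H = len(w)
    # Composite predictor for each brand j.
    x = [sum(w[h] * math.log(zeta_sq[h][j]) for h in range(H)) for j in range(m)]
    y = [math.log(pj) for pj in p]
    xbar, ybar = sum(x) / m, sum(y) / m
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    a1 = sxy / sxx
    a0 = ybar - a1 * xbar
    return a0, a1
```

With an intercept and one slope estimated from m brands, (m-2) degrees of freedom remain, as the text notes.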

One might reasonably ask if the subjective importance ratings add anything to the explanation. This suggests a composite MCI model with uniform weights instead of b_h. In estimation form this would be

$$\log p_j = a_0 + a_1 \sum_h \log \zeta^2_{hj} + e_j \qquad (6)$$

This is called Model 2.

We can construct two linear additive composites for comparison. The first incorporates the standardized importance ratings, as in Model 1, within the structural form of the linear additive model:

$$p_j = a_0 + a_1 \sum_h w_h X_{hj} + e_j \qquad (7)$$

We can call this Model 3. It is included as a benchmark to help assess the utility of the multiplicative formulation of Model 1 and the utility of the zeta-squared transformation, without the confounding influence created by changes in the form of the importance ratings. The other linear additive formulation is Model 4:

$$p_j = a_0 + a_1 \sum_h b_h X_{hj} + e_j \qquad (8)$$

Model 4 is the more traditional formulation. Bass and Wilkie (1973) and Wilkie and Pessemier (1973) discuss research on normalized importance ratings (equivalent to a rescaling as in Model 4), but do not mention any research using the standardization of importance ratings suggested in Model 3. It should be remembered, however, that most of the research they review relates to models at the individual level, rather than the aggregate analysis reported here.

Calibration Using the AC Model

To obtain estimates of the parameters in the above models (i.e., to calibrate these models), we need values for the dependent variables, p_{j}. Proportions of individuals reporting a recent choice of each alternative can be used when all the choice alternatives are available to the individuals. However, there is a very common and very important context in which MCI models should be used when such is not true; namely, when the researcher is attempting to predict choice probabilities for a new test product based on data from use-testing. In this context, to calibrate the models we can estimate choice probabilities based on overall acceptability ratings and use these estimates as values for p_{j}. A key feature of this estimation is that the choice probabilities which result apply to the situation in which individuals can choose the test product as well as established brands.

For purposes of simplicity, this model is referred to as the Acceptability Choice (AC) model. The basic model is formally similar to the MCI model of equation (1). The difference is that p_j is obtained by aggregating choice probabilities across individuals (subscript i) and that acceptability replaces the numerator in equation (1). The basic AC model is:

$$p_j = \frac{1}{n} \sum_i p_{ij} \qquad (9)$$

$$p_{ij} = \frac{A_{ij}}{\sum_k A_{ik}} \qquad (10)$$

where A_{ij} is a non-negative, ratio scale value of the true amount of acceptability of product j for individual i. (A_{ij} is taken to be 0 when individual i is not familiar with product j.) Category ratings of acceptability are known to be approximately linearly related to true acceptability (cf. Jones, 1959) and so

$$R_{ij} = a + b A_{ij} + e_{ij} \qquad (11)$$

where R_{ij} is an observed category rating of acceptability and e_{ij} is the error term. The coefficients a and b in (11) would be estimated by simple least squares regression if we had A_{ij}.
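The basic AC aggregation can be sketched directly; each individual's acceptability shares are computed over the products that individual is familiar with (unfamiliar products carry zero acceptability) and then averaged. Function and variable names here are illustrative:

```python
def ac_choice_probabilities(A):
    """Basic AC model aggregation: average each individual's
    acceptability-share vector across n individuals.
    A[i][j] = acceptability of product j for individual i (0 if unfamiliar)."""
    n = len(A)
    m = len(A[0])
    p = [0.0] * m
    for row in A:
        total = sum(row)
        if total > 0:  # skip individuals familiar with no product
            for j in range(m):
                p[j] += row[j] / total
    return [pj / n for pj in p]
```

Because each individual's shares sum to one, the aggregate probabilities are logically consistent by construction.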

We require only the assumption that EQUATION is a constant (call it c) across all individuals. (A_{i} is the mean true acceptability for individual i across the products with which that panelist is familiar.) We then can rewrite (9) as the following simple linear expression:

EQUATIONS (12), (13), (14), (15)

Note that g_{j} and h_{j} are observed data and so our task is to estimate a single parameter, c, in (12).

For available brands (i.e., not the test product), c can be estimated by a least squares fit of p_j in (12) to q_j (the proportion of individuals reporting a recent choice of available brand j):

The data which go into this estimation (q_{j}, o_{ij} and R_{ij}) must be obtained prior to exposure of the individuals to the test product. To obtain choice probability estimates of p_j in the desired context (namely, after exposure to the test product), we obtain re-ratings of acceptability of the available brands and ratings of the test product after exposure to the test product. The terms g_j and h_j are then re-calculated using this post-exposure data. We assume that the same c value calculated on the pre-exposure data applies in the post-exposure context and obtain the desired choice probabilities from (12). These are used for p_j in calibrating Models 1-4.

It is worth discussing why we use the AC model just described in calibrating Models 1-4 in the new test product context. If we had been seeking to predict acceptability (R) from some function of attribute measures (X), it would be natural to regress R on that function of X. In the present case, it is best to think of the AC model as applying a complex transformation of R (implicit in equation (12)) to rescale it to yield values in the metric of probabilities. This transformed R is then regressed on the function of X as indicated in equations (5)-(8).

ANALYSIS AND RESULTS

The data were collected as part of a home use test of a new, frequently purchased good just prior to its market introduction. Focus group sessions and prior research provided 21 attributes for the product class. 130 respondents were provided samples of the new product for home use. They reported the market brand last used, gave overall acceptability and attribute ratings of the market brands they felt familiar enough with to rate, and provided a rating of how important each attribute was in their decision to purchase a particular brand. Overall acceptability ratings were obtained both before and after home use of the test product. All brands were from the same product category. Overall acceptability and attribute ratings for the test product were also obtained.

Factor analysis of the attribute ratings, using each panelist's rating of a product as a separate observation, was used to select five common factors and two specific factors for further study. Average scores for the "markers" of the factors and the scores on the attributes for the specific factors were used to represent the attributes. The same composites were formed for the subjective ratings of importance. The subsequent analyses were performed on the means (over 130 respondents) of these seven composite attribute measures and composite importance measures.

The AC model was used to provide calibration probabilities. The estimation equations (5), (6), (7), and (8) represent the forms in which each model is linear. The results of the linear regressions are presented in Table 1. Only the composite MCI with the cognitive algebra represented in (4) is statistically significant in its linear form. If one ignores the differences in mean importance ratings (i.e. uses the equal weights of Model 2), but maintains both the interactive cognitive algebra and the zeta-squared transformation, only 40 percent of the variation in the linear form is explained. The root mean squared error is an appropriate measure of badness of fit. Since it assesses the discrepancies between estimated probabilities and calibration probabilities, RMSE is comparable over models of widely differing form. In this case the RMSE for Model 1 is .050 and for Model 2 is .073. So the inclusion of standardized importance ratings substantially increases the accuracy of the MCI in matching the calibration probabilities.
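RMSE as used here, the root of the mean squared discrepancy between estimated and calibration probabilities, can be computed directly:

```python
import math


def rmse(estimated, observed):
    """Root mean squared error between estimated probabilities and
    calibration probabilities -- comparable across models of any form."""
    n = len(estimated)
    return math.sqrt(sum((e - o) ** 2 for e, o in zip(estimated, observed)) / n)
```

Because it is computed on the probabilities themselves rather than on the (log or linear) estimation form, it puts multiplicative and additive models on the same footing.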

It is not only the inclusion of standardized importance ratings, but also the interactive form and the zeta-squared transformation which contribute. Model 3 provides the evidence for this. It has an r² of .50, is not statistically significant in its linear form, and its RMSE is .061. While this is better than Model 2, the superiority of the composite MCI in Model 1 is still maintained. The traditional model (i.e. Model 4) accounts for 34 percent of the variation in its linear form, which is not statistically significant. The RMSE is .070.

To summarize the results of the calibration phase: the use of a) relative, interval scale measures of the importance of each attribute, b) ratio scale, relative measures of the possession of each attribute, and c) the fully interactive cognitive algebra and "share of the pie" competitive form all contribute to the superiority of the composite MCI over the traditional linear compensatory Model 4.

Once all the models have been calibrated they may be used to predict test market share. The test group has been familiarized with the new brand to an extent far beyond what can be expected in a test market exposure. The company sponsoring the test product made an estimate of the proportion of the test market which would eventually become familiar with the new brand. This estimate is called the penetrated portion of the test market. The penetrated portion was represented by the probabilities developed from each model. For the unpenetrated portion of the market, the average of last use and usual use reported by the respondents was used as the probability of purchase. The weighted average of these probabilities was used as the forecast of test market share. For Models 1 and 2 the estimates were rescaled to sum to the 89.1 percent of the market captured by the seven brands under investigation. Models 3 and 4 are insensitive to considerations of logical consistency in market share estimates and therefore did not require adjustment.
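The forecast described above is a penetration-weighted blend of model-based probabilities and a pre-exposure baseline. A sketch, with hypothetical argument names (`penetration` is the sponsor's estimated proportion of the market that becomes familiar with the new brand):

```python
def forecast_share(model_share, baseline_share, penetration):
    """Blend model-based shares (penetrated portion of the market) with a
    reported-usage baseline (unpenetrated portion), per-brand.
    penetration: proportion of the market familiar with the new brand."""
    return [penetration * ms + (1.0 - penetration) * bs
            for ms, bs in zip(model_share, baseline_share)]
```

If both input vectors sum to one, the blended forecast does as well; any rescaling (e.g. to the 89.1 percent captured by the seven brands) is a separate step.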

Table 2 presents the discrepancies between the forecasts and the test market shares (displayed as probabilities of purchase or percent of test market), and the root mean squared error (RMSE) as a summary. All models had difficulty forecasting a very small share for brand F, but Model 1 was the closest. All models had modest difficulty forecasting brands A and D. Model 1 was by far the most accurate in forecasting brand E, a leading brand in the market. Model 1 was also closest in its forecast of the test brand G. Overall Model 1 provided the best forecast with a RMSE of .042.

TABLE 2: DISCREPANCY BETWEEN FORECAST AND PERCENT OF TEST MARKET, BY MODEL

Note that for Model 1 the slope parameter is .57 with a standard error of .22. One would reject the hypothesis that the true slope is 1.0. This means that the exponent on the composite has to be scaled down from its squared value to fit best. If one runs the same test on a composite using zeta instead of zeta-squared, the slope parameter is twice as large and so is the standard error. One cannot reject the hypothesis that the true slope is 1.0 for the composite using zeta. There is, as a result, support for the contention that, in composites, it is zeta rather than its square which most accurately reflects the comparative process.

We can consider models analogous to Models 3 and 4, but using standard scores for the attribute measures rather than the raw scores. The model parallel to Model 3 has an r² of .25 (p<.25) and a forecast RMSE of .050. The model parallel to Model 4 has an r² of .49 (p<.08) and a forecast RMSE of .046. If one drops the importance ratings totally out of the linear model and forms a composite with the standard scores, one obtains an r² of .18 (p<.35) and a forecast RMSE of .049.

We can also consider the use of zeta-squared in a linear additive composite analogous to Model 3. Such a model produces the best calibration fit (i.e. r² = .71, p<.02), but without the logical consistency of the MCI composite the advantage in calibration does not carry over to the forecast. The forecast RMSE is .043 and the forecast of test market percent for the test brand is off by .034. In linear additive composites the difference between the use of zeta and zeta-squared is not merely a matter of exponential rescaling. The corresponding model using zeta in a linear additive composite does not calibrate quite as well (r² = .60, p<.04). But the forecast RMSE is slightly better at .042, and the forecast of the test brand is off by .030. It seems that zeta or zeta-squared could have applications in linear additive models. It might be particularly interesting to use zeta in forming interaction terms in linear models. But it seems that the logical consistency properties of MCI models will lead to superior forecasts.

One very interesting model which has not previously been considered liberalizes the specification in equation (3). Instead of insisting on standardized importance ratings, one could allow a general linear transformation of the interval scale importance ratings. This results in a three parameter model which is estimated from

$$\log p_j = a_0 + a_1 \sum_h \log \zeta^2_{hj} + a_2 \sum_h b_h \log \zeta^2_{hj} + e_j \qquad (17)$$

This model has a calibration fit of r² = .59. Due to the additional parameter, the model is not statistically significant (p<.17). The forecast from such a model has the smallest RMSE of .041. The test brand forecast is right on target. While the three parameter model is not practical with a great scarcity of degrees of freedom, it represents a much more generally reasonable assumption on the proper function of importance to use in composite MCI models. It has a great deal of promise for future applications.
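The three-parameter idea can be sketched as a two-predictor regression. Assuming (as the text implies) that the effective weight on attribute h is a linear transform a1 + a2*b_h of the raw importance rating, the predictors are the unweighted and importance-weighted sums of log zeta-squared values; names and layout are ours:

```python
import math


def calibrate_three_parameter(p, zeta_sq, b):
    """Fit log p_j = a0 + a1*x1_j + a2*x2_j, where
    x1_j = sum_h log(zeta2_hj) and x2_j = sum_h b_h * log(zeta2_hj),
    so attribute h carries the effective weight a1 + a2*b_h."""
    m, H = len(p), len(b)
    y = [math.log(pj) for pj in p]
    x1 = [sum(math.log(zeta_sq[h][j]) for h in range(H)) for j in range(m)]
    x2 = [sum(b[h] * math.log(zeta_sq[h][j]) for h in range(H)) for j in range(m)]

    def center(v):
        mu = sum(v) / len(v)
        return [vi - mu for vi in v], mu

    yc, ybar = center(y)
    x1c, x1bar = center(x1)
    x2c, x2bar = center(x2)
    # Solve the 2x2 normal equations of the centered regression.
    s11 = sum(a * a for a in x1c)
    s22 = sum(a * a for a in x2c)
    s12 = sum(a * c for a, c in zip(x1c, x2c))
    s1y = sum(a * c for a, c in zip(x1c, yc))
    s2y = sum(a * c for a, c in zip(x2c, yc))
    det = s11 * s22 - s12 * s12
    a1 = (s22 * s1y - s12 * s2y) / det
    a2 = (s11 * s2y - s12 * s1y) / det
    a0 = ybar - a1 * x1bar - a2 * x2bar
    return a0, a1, a2
```

With three parameters estimated from m brands, only (m-3) degrees of freedom remain, which is why the model is impractical for very small brand sets.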

DISCUSSION

Despite the superiority of the composite MCI model to the comparison models in this example, we do not expect marketing managers or researchers to feel very confident in any decisions they base on analyses with only five degrees of freedom. The value of this analysis is four-fold. First, it provides a demonstration of what can be done with very little data. With mean ratings on brand attributes and attribute importances we can develop accurate forecasts of market share for test brands. Hybrid utility models (Green, Goldberg and Montemayor 1981; Green, Carroll and Goldberg 1981) require more degrees of freedom than are available in our situation. Holbrook's (1977) optimal scaling model is a four parameter model, leaving only three degrees of freedom for testing in the current situation.

Second, our approach indicates that the proper diagnostic use of self-report importance ratings is as coefficients in a multiplicative model. The cognitive algebra represented in Model 1 is of consumers forming profiles in which all brands compete on each attribute, and the overall impression of the attractiveness of each brand is formed from the interactions of these competitive evaluations. The greatest weakness of Model 1 is that the standardization of importance ratings could be construed to mean that brands which do poorly on the least important attributes are better off. It is probably more accurate to interpret this as meaning that the brands which do well on the most important attributes dominate regardless of their standing on the attributes of lesser importance. Such an interpretation receives some support from Slovic's (1975) finding that the most important attributes dominate choice from sets of equally valued alternatives. The three parameter model in (17) could avoid the awkward interpretation of negative importance weights. But, as is also the case for future work on Holbrook's (1977) four parameter, optimal scaling model, the parameter estimates must be inspected to see if they imply negative importance weights.

The third aspect of this analysis is its implications for benefit segmentation. Benefit segments value different things. Thus it would be improper to use benefit segments as the multiple choice situations in a standard MCI model. The assumption of constant parameters over the choice situations would be counter to what is known about these segments. However, a composite of the form of Model 1 could be developed in each segment. One would only need to estimate a parameter for the geometric mean in each segment and a single parameter for exponential rescaling to calibrate the model. An overall forecast could be composed by aggregating over segments. Further, the three parameter model which allows for a general interval scale transformation of the importance ratings could be extended to apply to this kind of analysis. One could choose between a model which allows for a single linear transformation of importance ratings or one which allows the linear transformation to be specific to each benefit segment. In either case the existence of multiple benefit segments will create sufficient degrees of freedom to estimate these models with confidence.

The final point has to do with the use of a composite MCI model in market simulation. Once we build confidence in the composite MCI model's ability to integrate attribute perceptions and attribute importance ratings, we can begin to ask "what if" questions. What are the consequences for market share of changes in attribute perceptions and/or attribute importances? It is not prudent to simulate large changes outside the scope of the model. But simulation of the kinds of changes in consumers' perceptions and values which can be brought about by marketing instruments can lead to new understanding and new directions for research. The direct connection between the internal states of consumers and the behavior of economic markets is an important step for the understanding of consumer behavior.

REFERENCES

Bass, Frank M. and Wilkie, William L. (1973), A comparative analysis of attitudinal predictions of brand preferences. Journal of Marketing Research, 10, 262-269.

Cooper, Lee G. and Nakanishi, Masao (1983), Standardizing variables in multiplicative choice models. Journal of Consumer Research, 10 (June), 96-108.

Green, Paul E., Goldberg, Stephen and Montemayor, Mila (1981), A hybrid utility estimation model for conjoint analysis. Journal of Marketing, 45 (Winter), 33-41.

Green, Paul E., Carroll, J. Douglas and Goldberg, Stephen (1981), A general approach to product design optimization via conjoint analysis. Journal of Marketing, 45 (Summer), 17-37.

Holbrook, Morris B. (1977), Comparing multiattribute attitude models by optimal scaling. Journal of Consumer Research, 4, 3 (December), 165-171.

Jones, Lyle V. (1959), Some invariant findings under the method of successive intervals. American Journal of Psychology, 72, 210-220.

Nakanishi, Masao and Cooper, Lee G. (1982), Simplified estimation procedures for MCI models. Marketing Science, 1, 3, 314-322.

Nisbett, Richard E. and Wilson, Timothy DeCamp (1977), Telling more than we know: Verbal reports on mental processes. Psychological Review, 84, 231-259.

Schmitt, Neal and Levine, Ralph L. (1977), Statistical and subjective weights: Some problems and proposals. Organizational Behavior and Human Performance, 20, 15-30.

Schoemaker, Paul J. H. and Waid, C. Carter (1982), An experimental comparison of different approaches to determining weights in additive utility models. Management Science, 28, 182-196.

Slovic, Paul (1975), Choice between equally valued alternatives. Journal of Experimental Psychology: Human Perception and Performance, 1, 3, 280-287.

Summers, David A., Taliaferro, J. Dale and Fletcher, Donna J. (1970), Subjective vs. objective description of judgment policy. Psychonomic Science, 18, 249-250.

Wainer, Howard (1976), Estimating coefficients in linear models: It don't make no nevermind. Psychological Bulletin, 83, 213-217.

Wilkie, William L. and Pessemier, Edgar A. (1973), Issues in marketing's use of multi-attribute attitude models. Journal of Marketing Research, 10, 428-441.

Wright, Peter and Rip, Peter D. (1981), Retrospective reports on the causes of decisions. Journal of Personality and Social Psychology, 40, 601-614.
