
Advances in Consumer Research, Volume 20, 1993, Pages 149-154

A HYBRID CONJOINT MODEL WITH INDIVIDUAL-LEVEL INTERACTION ESTIMATION

Paul E. Green, University of Pennsylvania

Abba M. Krieger, University of Pennsylvania

Catherine M. Schaffer, University of Denver

With the advent of larger-scale industry applications, there has been a corresponding need to develop conjoint modeling methods that can cope with large numbers of attributes and levels. The authors describe a hybrid model that estimates individual-level interactions and smooths parameter estimates by empirical Bayes methods.

INTRODUCTION

In their recent review, Green and Srinivasan (1990) report that one of the most active research areas in conjoint analysis involves the development of part-worth estimation methods designed to increase reliability and predictive validity. The need for such methods has become acute as conjoint applications include ever larger numbers of attributes and levels.

Hagerty (1985) has outlined several classes of part-worth estimation methods; the taxonomy of Figure 1 is partially based on his earlier remarks. The left-most branch denotes traditional full-profile-only analysis; the principal parameter estimation methods are MONANOVA (Kruskal 1965), LINMAP (Shocker and Srinivasan 1977), and, increasingly, OLS dummy variable regression.

More recently, however, researchers (Pekelman and Sen 1979; Krishnamurthi and Wittink 1989) have augmented traditional part-worth modeling with mixtures of linear, quadratic, and part-worth parameters. Gains in reliability/validity may also be obtained by constraining part-worths to respect within-attribute monotonicity (Srinivasan, Jain, and Malhotra 1983), or by various aggregation methods, such as those proposed by Hagerty (1985), Kamakura (1988), and Green, Krieger, and Zelnio (1989).

If the researcher also collects self-explicated data on individual attribute-level desirabilities and attribute importances, further improvements are possible, as illustrated by the Bayesian-like method of Cattin, Gelfand, and Danes (1983) and the parameter constrained approach of van der Lans and Heiser (1990). In both cases, considerably more data collection is entailed since each of these methods assumes that a large enough set of full profiles is obtained to estimate part-worths from either profile or self-explicated data.

In contrast, the hybrid models (Green, Goldberg, and Montemayor 1981; Green 1984) and the ACA model (Johnson 1987) collect a limited number of full or partial profiles which serve largely as either a "polishing" operation to refine self-explicated part-worths (ACA), or as a way to estimate additional group-level parameters (hybrid models). Given their reduced data demands, these latter approaches have received extensive commercial application.

Finally, in the right-most branch, we note that in CASEMAP (Srinivasan 1988; Srinivasan and Wyner 1989) there are no profile data at all. The entire exercise consists of self-explicated data collection.

To date, extensive empirical comparisons across these classes of models have been few. In a comparison of Hagerty's and Kamakura's models with traditional conjoint, Green and Helsen (1989) found no improvement in internal validity for the newer approaches. Traditional conjoint also appears to outperform hybrid models and ACA, at least in cases involving sufficient degrees of freedom for error estimation. Hybrid models, in turn, tend to outperform self-explicated models (Green 1984); that is, even a limited number of full profiles adds something in terms of predictive ability.

Features of the Proposed Model

The model proposed here is part of the hybrid model family. In addition, it employs features that are analogous to the Hagerty approach. In contrast to previously published methods, the proposed hybrid model:

1. Employs a convex combination technique that optimally weights self-explicated attribute importances with group-level, conjoint-derived importances, so as to maximize the correlation of the resulting composite with the individual's (holdout) sample of profile evaluations.

2. Uses empirical Bayes procedures to "smooth" individual-based parameters in accord with information obtained from the full sample.

3. Fits selected two-way interaction terms on a disaggregate basis. This is accomplished by the use of Tukey's one-degree-of-freedom procedure (Tukey 1949) in which two-way interactions are linear functions of previously computed individual main effects.

4. Contains a built-in cross-validation procedure that helps the user select an appropriate number of two-way interaction effects to fit on a stagewise basis.

THE MODEL

The proposed model collects three types of information from each respondent:

1. Self-explicated attribute level desirabilities (typically expressed on a 0-10, equal interval rating scale).

2. Self-explicated attribute importances (typically expressed in terms of a constant sum, 100 point allocation scale).

3. Likelihood-of-purchase ratings (0-100 scale) of a limited set of full profiles, drawn from a much larger master design of orthogonally constructed profiles.

These steps are similar to the procedures followed in most hybrid models (Green 1984).
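For concreteness, the sketch below (in Python) shows one way the three data blocks might be stored for each respondent; the class and field names are hypothetical illustrations, not part of the original procedure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class RespondentData:
    """Hypothetical container for the three self-report blocks listed above."""
    desirabilities: list        # desirabilities[k][l]: 0-10 rating of level l of attribute k
    importances: np.ndarray     # importances[k]: constant-sum (100-point) allocation to attribute k
    profiles: np.ndarray        # profiles[j, k]: level index of attribute k in calibration profile j
    ratings: np.ndarray         # ratings[j]: 0-100 likelihood-of-purchase rating of profile j

    def self_explicated_utility(self, profile) -> float:
        """Importance-weighted sum of the desirabilities of a profile's levels."""
        return float(sum(w * self.desirabilities[k][lvl]
                         for k, (w, lvl) in enumerate(zip(self.importances, profile))))
```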

Main Effects Estimation

The first phase of the analysis entails estimating main effects parameters at the individual-respondent level. First, we assume that the best estimate of the "true" attribute-level desirabilities is found in the self-explicated desirabilities. There are reasonable grounds for this assumption. Our own research (Green, Krieger, and Agarwal 1992) has found very high test/retest reliabilities for attribute-level desirabilities (on average, 0.90 in a sample of 51 subjects). In contrast, the test/retest reliability of self-explicated importances was only 0.48 for the same group of subjects.

Subjects' conjoint profile evaluations are then separately used to obtain group-level attribute importances. These group-level importances are optimally combined with each individual respondent's self-explicated importances to obtain a set of weighted importances that (along with the respondent's self-explicated desirabilities) maximally correlate with the subject's actual conjoint profile evaluations. [Details of the weighting procedure can be obtained from the authors.]
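Because the weighting details are left to the authors, the following is only a plausible sketch of this step: grid-search a convex weight w that blends self-explicated and group-level importances, and keep the w whose implied utilities correlate best with the respondent's calibration ratings. The function and argument names are hypothetical, and the grid search is a stand-in, not the authors' actual estimation method.

```python
import numpy as np

def blend_importances(self_imp, group_imp, desir, profiles, ratings, grid=101):
    """Pick w in [0, 1] so that utilities built from the composite importances
    w * self_imp + (1 - w) * group_imp correlate maximally with this
    respondent's calibration profile ratings (a grid-search stand-in for the
    authors' unpublished optimal-weighting procedure)."""
    best_w, best_r = 0.0, -np.inf
    for w in np.linspace(0.0, 1.0, grid):
        weights = w * np.asarray(self_imp) + (1.0 - w) * np.asarray(group_imp)
        # utility of profile j = sum over attributes of weight * level desirability
        utils = np.array([sum(weights[k] * desir[k][lvl]
                              for k, lvl in enumerate(profile))
                          for profile in profiles])
        r = np.corrcoef(utils, ratings)[0, 1]
        if np.isfinite(r) and r > best_r:
            best_w, best_r = w, r
    return best_w, best_r
```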

FIGURE 1

A TAXONOMY OF PART-WORTH ESTIMATION METHODS

At this point the proposed procedure, which we call EMBAY, has estimated a main effects, part-worth model for each respondent. [The preceding method also estimates an idiosyncratic intercept term for each respondent, using that subject's own full profile evaluations.] A set of residuals is then obtained by subtracting the respondent's predicted profile evaluations from his/her actual profile evaluations. These sets of respondent residuals become the dependent variables for the next phase of model fitting.

Interaction Estimation

EMBAY fits selected two-way interactions to each subject's residuals, using Tukey's one-degree-of-freedom method (Tukey 1949). The two-way interactions are selected in a stepwise manner, according to the highest accounted-for variance in the residuals across all respondents. [Whichever two-way interaction is selected by the stepwise procedure is assumed to be relevant for all respondents.] The arguments of each interaction term continue to be the individual's main effects parameters.

The Tukey procedure estimates a single slope parameter at each stage of the two-way interaction fitting. Each time an interaction is fit, it is internally cross-validated, subject by subject. The average cross-validation results are used diagnostically to stop the fitting process. Several descriptive statistics (including cross-validated R2) are computed to see whether it is worthwhile to continue the "extraction" of two-way interactions.

We next provide a more formal elaboration on the topics of Tukey's one-degree-of-freedom method and the empirical Bayes procedure.

Tukey's One-Degree-of-Freedom Interaction

Tukey's one-degree-of-freedom interaction model can be written as follows:

"ij = m + xi + yj+ lxiyj + eij (1)

where the usual assumptions:

eij = NID(0,s2); Sxi = Syj = 0,

are assumed to hold. We note that the single interaction term is expressed by the slope parameter l, where the arguments xi and yj are previously estimated main effects, expressed as deviations around the grand mean.

Our model computes $R^2$ values for all two-way interactions and selects the pair of attributes with the highest $R^2$. A cross-validated $R^2$ is also computed. New residuals are then calculated, and the program continues the process at the user's discretion.
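As a minimal sketch of this step (assuming NumPy arrays and hypothetical argument names), the slope $\lambda$ for a given attribute pair can be obtained by regressing a respondent's residuals on the product of his or her mean-centered main-effect part-worths:

```python
import numpy as np

def tukey_interaction(residuals, pw_a, pw_b, levels_a, levels_b):
    """Tukey one-degree-of-freedom fit for one attribute pair.

    residuals  : this respondent's residuals after the main-effects fit
    pw_a, pw_b : mean-centered main-effect part-worths of attributes a and b
    levels_a/b : level of attribute a (resp. b) shown in each calibration profile

    Returns the single slope (lambda in equation 1) and the R-squared
    it achieves on the residuals.
    """
    x = pw_a[levels_a] * pw_b[levels_b]      # x_i * y_j term, one value per profile
    x = x - x.mean()
    y = residuals - residuals.mean()
    lam = float(x @ y) / float(x @ x)        # OLS slope through the centered data
    ss_res = float(((y - lam * x) ** 2).sum())
    ss_tot = float((y ** 2).sum())
    return lam, 1.0 - ss_res / ss_tot
```

Stagewise selection would then evaluate this $R^2$ for every attribute pair, average across respondents, and retain the pair with the highest (cross-validated) average.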

Empirical Bayes

The OLS estimation problem, generally framed, assumes that

$$Y_{ij} = a_i + b_i X_{ij} + e_{ij} \qquad (2)$$

where i varies over individuals and j over profiles. We assume that the eij are independent and identically distributed normal random variables. This model applies at the self-explicated stage, where Xij represents the predicted utility for the jth profile for individual i and Yij denotes the corresponding actual score given to this profile. The model also applies at the stage of fitting interaction terms; see equation (1). In this latter case, however, Yij denotes the residuals after the self-explicated fitting stage; Xij is the product of the part-worths (after mean centering) for the two-way interaction of interest.

As noted earlier, each intercept $a_i$ is fitted at the individual level; once we estimate $b_i$, then $a_i = \bar{y}_i - b_i \bar{x}_i$, where $\bar{x}_i$ and $\bar{y}_i$ are the respective means of $X$ and $Y$ for individual $i$, averaged over the profiles. We estimate $b_i$ separately for each individual; this is tantamount to running OLS for each individual. We denote this OLS estimate by $B_i$.

We can also estimate $b_i$ by assuming that the slopes are equal across individuals. This implies that the common slope is:

$$\hat{b} = \frac{\sum_i \sum_j (X_{ij} - \bar{x}_i)(Y_{ij} - \bar{y}_i)}{\sum_i \sum_j (X_{ij} - \bar{x}_i)^2} \qquad (3)$$

An intermediate approach is to use a Bayesian framework. We assume the $b_i$ are generated independently from a common normal distribution with mean $b_0$ and variance $\alpha^2$. It follows from standard Bayesian analysis that the posterior distributions of the $b_i$ are independent normals with means:

$$E(b_i \mid B_i) = \frac{\alpha^2 B_i + \sigma_i^2 b_0}{\alpha^2 + \sigma_i^2} \qquad (4)$$

and variances:

$$\mathrm{Var}(b_i \mid B_i) = \frac{\alpha^2 \sigma_i^2}{\alpha^2 + \sigma_i^2} \qquad (5)$$

where $\sigma_i^2$ is the variance of the OLS regression coefficient $B_i$.

We follow the approach employed by Rubin (1980); that is, we use empirical Bayes to estimate $b_0$ and $\alpha^2$ in equations (4) and (5). We define $s_i^2$ to be the estimate of $\mathrm{Var}(B_i)$.

We find the $b_0$ and $\alpha^2$ that maximize the likelihood of the observed $B_i$ (given the $s_i^2$), after integrating over the random parameters $b_i$. This likelihood cannot be maximized directly, and so we use an iterative approach (also followed by Rubin) that is based on the EM algorithm of Dempster, Laird, and Rubin (1977).
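The sketch below illustrates the kind of iteration involved under the normal-normal assumptions above. It is a simplified EM-style routine with hypothetical names, not the authors' code: it estimates $b_0$ and $\alpha^2$ from the OLS slopes $B_i$ and their variance estimates $s_i^2$, then forms the posterior (shrunken) slopes of equations (4) and (5).

```python
import numpy as np

def empirical_bayes_slopes(B, s2, iters=500, tol=1e-10):
    """Shrink per-respondent OLS slopes B_i (with sampling variances s2_i)
    toward a common prior mean b0, estimating b0 and the prior variance
    alpha2 by an EM-style iteration in the spirit of Rubin (1980).
    Returns the posterior means/variances of the b_i plus (b0, alpha2)."""
    B = np.asarray(B, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    b0, alpha2 = float(B.mean()), max(float(B.var()), 1e-12)
    for _ in range(iters):
        # E-step: posterior moments of each b_i given the current prior (eqs. 4-5)
        post_var = 1.0 / (1.0 / s2 + 1.0 / alpha2)
        post_mean = post_var * (B / s2 + b0 / alpha2)
        # M-step: re-estimate the prior mean and variance from those moments
        new_b0 = float(post_mean.mean())
        new_alpha2 = float(((post_mean - new_b0) ** 2 + post_var).mean())
        if abs(new_b0 - b0) < tol and abs(new_alpha2 - alpha2) < tol:
            b0, alpha2 = new_b0, new_alpha2
            break
        b0, alpha2 = new_b0, new_alpha2
    return post_mean, post_var, b0, alpha2
```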

PILOT STUDY

Our pilot test of the EMBAY model uses data obtained from a hybrid conjoint study involving student evaluations of apartment descriptions; details of the experiment can be found in Green and Schaffer (1991).

Table 1 shows the list of attributes and levels. The sample size is 177. Self-explicated desirabilities were rated on a 0-10 equal-interval scale. Attribute importances were obtained from a 100-point allocation (constant sum) procedure. In the calibration stage, each respondent received 18 full profiles, designed according to an orthogonal array. The respondent rated each profile on a 0-100 likelihood-of-renting scale. After some demographic data were collected, each respondent was shown 16 holdout apartment descriptions, utilizing levels 1 and 3 of the attributes shown in Table 1. These stimuli were also designed by an orthogonal array. The same 0-100 likelihood-of-renting scale was used for the holdout sample as well.

TABLE 1

ATTRIBUTES AND LEVELS USED IN PILOT STUDY

Testing the Model

Previous experience with the data set suggested that self-explicated models would probably fit the full profile calibration data well. Hence, we would not be surprised if the residuals from the self-explicated main effects had relatively little signal left for two-way interaction estimation.

Four different models were fit to the data:

1. A main effects, part-worth model that used only self-explicated importances (i.e., OLS-derived importances were not employed).

2. The same model as above, with the addition of two interaction terms.

3. A main effects, part-worth model that employed an optimally weighted composite of self-explicated and the group-level, conjoint-derived importances.

4. The same model as above, with the addition of two interaction terms.

Each of the models was evaluated in terms of its cross-validation, subject by subject, with the 16 holdout profiles. Table 2 summarizes the results.
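A minimal sketch of this per-subject evaluation (with a hypothetical function name, assuming arrays of predicted and actual holdout ratings) is:

```python
import numpy as np

def holdout_summary(predicted, actual):
    """Per-subject holdout checks of the kind reported in Table 2.
    predicted, actual: arrays of shape (n_subjects, n_holdout_profiles).
    Returns the mean Pearson correlation across subjects and the
    first-choice hit rate (predicted top profile equals actual top profile)."""
    corrs = [np.corrcoef(p, a)[0, 1] for p, a in zip(predicted, actual)]
    hits = [int(np.argmax(p) == np.argmax(a)) for p, a in zip(predicted, actual)]
    return float(np.nanmean(corrs)), float(np.mean(hits))
```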

Descriptive Results

As we anticipated, fits of the two main effects models to the 18 calibration profiles were very good. Table 2 shows that the convex combination model (of self-explicated and derived importances) fits the subjects' calibration profile responses somewhat better than the self-explicated importances alone. This also holds true for the calibration model that incorporates two additional interaction terms.

As it should, the addition of two interaction terms increases the calibration model fits: correlations of 0.811 (versus 0.776) and 0.790 (versus 0.752). However, the increases are not dramatically large. Figure 2 suggests why this is so. This chart plots the two average two-way interactions. As noted, no cross-over interaction effects are found. While the line segments are not parallel, the interaction effects do not appear to be extreme.

Cross Validation

Table 2 also summarizes the results of correlating predictions of the four calibration models with actual responses to each subject's 16 holdout profiles. We note that the main effects model cross validates better than the model that also includes interaction terms. For the convex combination model the correlation is 0.731 for main effects versus 0.696 for main effects plus interactions. Counterpart results for the self-explicated importances model are 0.712 and 0.663.

First-choice validations also show the same pattern in which the main-effects-only model out-predicts main effects plus interactions. This finding is consistent with those of Green (1984) in which hybrid models, with and without interaction terms, were also compared. Significance tests for main effects only versus main effects plus interaction indicated that one could not reject the null hypothesis of no difference (alpha level of 0.05) for the correlation results, but one could reject the null hypothesis for the first-choice predictions. Insofar as the convex combination versus the self-explicated model alone is concerned, differences in correlations and first-choice predictions are not significant at the 0.05 alpha level.

CONCLUSIONS

The pilot study provides some support for the value of the convex combination model over the use of self-explicated importances alone in fitting the main effects model. The differences are not dramatic, however, in terms of either cross validation correlation or first-choice hit incidence.

TABLE 2

SUMMARY OF CORRELATION RESULTS FOR PILOT STUDY

Somewhat more surprising is the finding that the simpler main effects model appears to cross validate at least as well as the more general main effects plus interactions model; however, see Hagerty (1985) and Green (1984). Other data sets may exhibit a greater incidence of stable two-way interaction effects. In any case, the proposed model provides a way to measure these interactions (if they exist) and to examine how well they hold up in cross validation.

When will the empirical Bayes aspect of the model prove useful? We surmise that the empirical Bayes procedure will be most useful when the data are heterogeneous in the sense that a subset of the subjects shows highly reliable fits while another subset does not. In the first (reliable) subset, the empirical Bayes weighting parameter would give virtually its entire weight to the subject's own data (particularly if the subject differs from the rest of the sample). In the second (unreliable) subset, the group's results would receive high weight relative to the individual's and hence would "smooth" out that subject's parameter values.

The question of interaction measurement by this (or other) conjoint models is still wide open. Clearly, one would expect interactions in the case of sensory or esthetic product classes. However, little is currently known about the reliability with which interactions can be measured and, particularly, their degree of homogeneity across respondents.

FIGURE 2

TWO-WAY INTERACTION EFFECTS IN PILOT STUDY

REFERENCES

Cattin, Philippe, Alan E. Gelfand, and Jeffrey Danes (1983), "A Simple Bayesian Procedure for Estimation in a Conjoint Model," Journal of Marketing Research, 20 (February), 29-35.

Dempster, A. P., N. M. Laird, and Donald B. Rubin (1977), "Maximum Likelihood from Incomplete Data Via the EM Algorithm," Journal of the Royal Statistical Society, Series B, 39, 1-38.

Green, Paul E. (1984), "Hybrid Models for Conjoint Analysis: An Expository Review," Journal of Marketing Research, 21 (May), 155-169.

Green, Paul E., Stephen M. Goldberg, and Mila Montemayor (1981), "A Hybrid Utility Estimation Model for Conjoint Analysis," Journal of Marketing, 45 (Winter), 33-41.

Green, Paul E. and Kristiaan Helsen (1989), "Cross-Validation Assessment of Alternatives to Individual-Level Conjoint Analysis: A Case Study," Journal of Marketing Research, 26 (August), 346-350.

Green, Paul E., Abba M. Krieger, and Manoj K. Agarwal (1992), "Man Versus Model of Man: When Do Conjoint Models Out-Predict the Decision Maker?," Working Paper, University of Pennsylvania, June.

Green, Paul E., Abba M. Krieger, and Robert N. Zelnio (1989), "A Componential Segmentation Model with Optimal Design Features," Decision Sciences, 20 (Spring), 221-238.

Green, Paul E. and Catherine M. Schaffer (1991), "Importance Weight Effects on Self-Explicated Preference Models," in R. H. Holman and M. R. Solomon (eds.), Advances in Consumer Research, Vol. 18, Provo, UT: Association for Consumer Research, 476-482.

Green, Paul E. and V. Srinivasan (1990), "Conjoint Analysis in Marketing: New Developments with Implications for Research and Practice," Journal of Marketing, 54 (October), 3-19.

Hagerty, Michael R. (1985), "Improving the Predictive Power of Conjoint Analysis: The Use of Factor Analysis and Cluster Analysis," Journal of Marketing Research, 22 (May), 168-184.

Johnson, Richard M. (1987), "Adaptive Conjoint Analysis," Sawtooth Software Conference on Perceptual Mapping, Conjoint Analysis, and Computer Interviewing, Ketchum, ID: Sawtooth Software, 253-265.

Kamakura, Wagner A. (1988), "A Least Squares Procedure for Benefit Segmentation with Conjoint Experiments," Journal of Marketing Research, 25 (May), 157-167.

Krishnamurthi, Lakshman and Dick R. Wittink (1989), "The Part-Worth Model and Its Applicability in Conjoint Analysis," Working Paper, College of Business Administration, University of Illinois (September).

Kruskal, Joseph B. (1965), "Analysis of Factorial Experiments by Estimating Monotone Transformations of the Data," Journal of the Royal Statistical Society, Series B, 27, 251-263.

Pekelman, Dov and Subrata K. Sen (1979), "Improving Prediction in Conjoint Analysis," Journal of Marketing Research, 16 (May), 211-220.

Rubin, Donald B. (1980), "Using Empirical Bayes Techniques in the Law School Validity Studies," Journal of the American Statistical Association, 75 (December), 801-816.

Shocker, Allan D. and V. Srinivasan (1977), "LINMAP (Version II): A FORTRAN IV Computer Program for Analyzing Ordinal Preference (Dominance) Judgments via Linear Programming Techniques for Conjoint Measurement," Journal of Marketing Research, 14, 101-103.

Srinivasan, V. (1988), "A Conjunctive-Compensatory Approach to the Self-Explication of Multiattributed Preferences," Decision Sciences, 19 (Spring), 295-305.

Srinivasan, V., Arun K. Jain, and Naresh K. Malhotra (1983), "Improving Predictive Power of Conjoint Analysis by Constrained Parameter Estimation," Journal of Marketing Research, 20 (November), 433-438.

Srinivasan, V. and Gordon A. Wyner (1989), "CASEMAP: Computer-Assisted Self-Explication of Multi-Attributed Preferences," in W. Henry, M. Menasco, and H. Takada (eds.), New Product Development and Testing, Lexington, MA: Lexington Books, 91-111.

Tukey, John W. (1949), "One Degree of Freedom for Non-Additivity," Biometrics, 5 (September), 232-242.

van der Lans, Ivo A. and Willem J. Heiser (1990), "Constrained Part-Worth Estimation in Conjoint Analysis Using the Self-Explicated Utility Model," Working Paper, University of Leiden, The Netherlands.
