# Re-Assessing the Generalizability From Artificial to Real: Another Look At the Predictive Validity of Conjoint

ABSTRACT - Although several studies have indicated that conjoint models predict holdout samples or first choices of profiles rather well, one subset of studies has found conjoint to be weak in its ability to predict preferences for "real" objects. In this study, involving fast food restaurants, a conjoint model was estimated from preference ratings for profiles of hypothetical, artificial restaurants. A similarly constructed multiattribute model was estimated from ratings for existing, real restaurants. High convergent validity (generalizability) was found between preferences imputed from the conjoint model and observed preferences for real restaurants (R=.78). We demonstrate that discrepancies between affect imputed from the conjoint model and affect predicted from the multiattribute model can provide useful marketing information about the real restaurants, rather than cause for discounting the generalizability of the results.

##### Citation:

*W. Steven Perkins and Daniel R. Toy (1997), "Re-Assessing the Generalizability From Artificial to Real: Another Look At the Predictive Validity of Conjoint," in NA - Advances in Consumer Research Volume 24, eds. Merrie Brucks and Deborah J. MacInnis, Provo, UT: Association for Consumer Research, Pages: 259-266.*


In their recent review of conjoint analysis, Green and Srinivasan (1990) examine a number of studies involving assessments of model validity. They summarize these studies by saying that "the empirical evidence points to the validity of conjoint analysis as a predictive technique" (1990, p. 13). There are other studies, however, such as Holbrook and Havlena (1988), that have not found a high correspondence between consumer preferences predicted from conjoint methods and preferences for real objects. In what might loosely be described as "generalizability" research, models estimated in the domain of artificially manipulated stimuli (i.e., conjoint analysis) have often provided weak predictions of preferences for real products.

The purpose of this project was to re-assess the finding that models of affect for artificial stimuli do not generalize well to affect for real stimuli. (We adopt the terms used by Holbrook and Havlena, with the term "artificial" meaning affect for hypothetical objects designed by the researcher as in conjoint, and the term "real" meaning existing objects experienced by consumers.) First, we demonstrate that a reasonable level of generalization can be achieved. Preference models were estimated from consumers’ ratings of conjoint profiles and ratings of existing fast food restaurants, artificial and real stimuli, respectively. Using a hybrid conjoint model and a relatively tangible, familiar product category, a higher level of generalizability from artificial to real was obtained than in many previous studies. Second, we take the perspective that discrepancies between the models estimated from artificial and real stimuli may be seen as useful marketing signals, rather than strictly as a cause for rejecting the generalizability of the models. Contrary to previous research which has expressed concern over the lack of convergence between conjoint and multiattribute models, we argue that this divergence can be exploited. Specifically, preferences calculated from conjoint utilities can serve as a standard of comparison for the preferences for real restaurants. Any difference between the two sets of estimated preferences may reveal the effects of the brand itself.

The paper is organized as follows. First, previous research investigating preference generalizability will be briefly reviewed. Next the models to be estimated will be outlined. Then a study involving preferences for real, existing restaurants and artificial, hypothetical restaurants is described. After examining the results, the implications are discussed.

PREVIOUS RESEARCH ON GENERALIZING FROM ARTIFICIAL TO REAL

A Framework for Assessing Generalizability

Studies addressing generalizability across domains have modeled affect in the domain of artificial objects, denoted Y_{a}, and affect in the domain of real objects, denoted Y_{r}. Often conjoint models are estimated in the artificial domain and multiattribute models in the real domain. Each model could then be used to calculate a predicted value, Ŷ, in its respective calibration domain. Typically, though, the interest is in the ability of the model to capture affect outside of the calibration domain, using the model to calculate an imputed value, Ỹ. In essence, imputing is accomplished by applying the weights estimated in the calibration domain to the corresponding independent variables in the opposite domain.

Table 1 outlines the four values which could be calculated from the models. Correlating observed and predicted values within each domain results in a measure of internal predictive validity: the observed affect for the artificial stimuli, Y_{a}, with the conjoint predictions, Ŷ_{a}, and the observed affect for the real stimuli, Y_{r}, with the multiattribute predictions, Ŷ_{r}. The multiple R for a regression equation represents internal predictive validity. Correlating observed and imputed values across domains provides a measure of external predictive validity: the observed affect for the artificial stimuli, Y_{a}, with the multiattribute imputations, Ỹ_{a}, and the observed affect for the real stimuli, Y_{r}, with the conjoint imputations, Ỹ_{r}. Imputing preferences from the conjoint model is in effect the same process as estimating preferences for new products in the simulation stage of conjoint studies. Research on generalizability typically focuses on external predictive validity, and considers internal predictive validity simply as a standard of comparison.
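As a concrete illustration of Table 1's framework, the sketch below fits two linear models on invented data, one per "domain," and computes the two internal (within-domain) and two external (cross-domain) correlations. All names, sizes, and weights are illustrative assumptions, not the study's data.

```python
import numpy as np

# Two domains with made-up attribute codes and a shared true preference structure.
rng = np.random.default_rng(1)
Xa = rng.normal(size=(50, 4))   # attribute codes, artificial (profile) domain
Xr = rng.normal(size=(40, 4))   # attribute codes, real domain
true_w = np.array([1.0, 0.5, -0.5, 0.2])
ya = Xa @ true_w + rng.normal(0, 0.5, 50)   # observed affect, artificial
yr = Xr @ true_w + rng.normal(0, 0.5, 40)   # observed affect, real

wa, *_ = np.linalg.lstsq(Xa, ya, rcond=None)   # "conjoint" weights, artificial domain
wr, *_ = np.linalg.lstsq(Xr, yr, rcond=None)   # "multiattribute" weights, real domain

r = lambda u, v: np.corrcoef(u, v)[0, 1]
internal = (r(ya, Xa @ wa), r(yr, Xr @ wr))    # predicted within the calibration domain
external = (r(ya, Xa @ wr), r(yr, Xr @ wa))    # imputed across domains
```

Because both domains share one true preference structure here, all four correlations come out high; the generalizability question is whether real consumers' weights also coincide across domains.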

Relevance and Previous Research Regarding Generalizability

In discussing the relevance of the generalizability of models, Holbrook and Havlena (1988) state that the issue is of critical importance to marketing researchers. For example, the validity of research evaluating new product design using conjoint procedures prior to market launch would be impacted by such findings. If the models are generalizable then marketing managers should have confidence in predictions generated from conjoint analysis of preferences for artificial stimuli. In recent review articles, both Green and Srinivasan (1990) and Wittink and Cattin (1989) state that there has been little published evidence of the predictive validity of conjoint except for cross validation to holdout sets of profiles.

Holbrook, Moore, Dodgen, and Havlena (1985) review the few existing pieces of research directly addressing the issue of generalizing from artificial to real. In addition, Holbrook and Havlena (1988) review the complementary issue, generalizing from real to artificial. Thus, previous research will be only briefly considered, followed by a discussion of unresolved issues. Note that precise comparisons across studies are difficult because they differ greatly in terms of models employed and product categories investigated.

[Table 1: Domains and Models Investigated]

Green, Rao, and DeSarbo (1978) devised the procedures which have been followed in most of these studies. To compare affect across domains, in their 1978 study, 54 students rated seven actual vacation sites (e.g., Disney World) on six attributes, with each attribute at three levels; they then rated 18 profiles of hypothetical vacation sites constructed from those same six attributes and same three levels. Subjects also rank ordered the vacation sites. Individual level conjoint equations were then estimated for the profile ratings. Preferences were imputed to the real vacation sites by multiplying the group level attribute ratings for the sites by the estimated utilities.

Correlating the imputed values with the observed rankings, r {Y_{r}, Ỹ_{r}}, resulted in a median Kendall tau of .73. The model predicted significantly better than chance for 74 percent of the subjects. Most subsequent studies have not achieved even this moderate level of correlation.

Following a similar procedure, Moore and Holbrook (1982) found correlations of only r=.55 on average between preferences imputed from conjoint models and observed preferences for real stimuli. In this study, 67 students rated preferences and attributes for real dogs (e.g., beagle) and rated hypothetical dogs presented as conjoint profiles constructed from the same attributes. Interestingly, the conjoint model imputed preferences to a holdout sample of real dogs as accurately as a model which had been calibrated in the real domain.

In a detailed follow up study, Holbrook, Moore, Dodgen, and Havlena (1985) attained an even lower average correlation of r=.52 between preferences imputed with conjoint weights and observed preferences for musical recordings (e.g., Barry Manilow). That is, only about 25 percent of the variance in the observed preferences could be explained with the imputed preferences. The authors concluded that poor generalizability to real products occurred because the 20 subjects weighted the relevant dimensions in the artificial world differently than the same dimensions in the real world. This low level of convergence was particularly disconcerting because the individual level conjoint equations predicted the artificial stimuli well, producing an average multiple correlation of R=.81. And the multiattribute equations calibrated on real recordings predicted the real stimuli well, averaging R=.82.

In addition, Holbrook and Havlena (1988) re-analyzed the same data used in Holbrook, et al. (1985) to look at the generalizability from real to artificial, specifically r {Y_{a}, Ỹ_{a}}. Predicting affect for artificial objects from models developed on real objects was equally unsuccessful, resulting in a group level correlation of r=.56 between observed and predicted affect for profiles.

In contrast, strong external validity between predictions derived from surveys and actual choices has been demonstrated in several studies reviewed by Levin, Louviere, Schepanski and Norman (1983). Choices related to transportation mode, store patronage, and residential location were predicted quite well, with correlations above .90 in some cases. Several of these studies differ from those noted earlier in that the models were calibrated and tested on different respondent samples. Again, the stimuli were relatively tangible, everyday items which may improve the generalizability from artificial to real. They conclude that external validity can be improved by taking greater care in the design of the respondents’ task and in the estimation of the model parameters.

In sum, the ability of models estimated from preferences for artificial stimuli to predict preferences for existing objects is crucial to the usefulness of conjoint analysis. Yet there has been little published research on this issue. And the extant research in marketing has produced mixed results about the ability of conjoint models to generalize to the "real" world.

Our study re-examined this apparent lack of generalizability from artificial to real. There were two goals: first, to see if a reasonable level of convergence across domains could be achieved, and second, to study what the gap between artificial and real may have to tell us. The published research which has found lower external predictive validity has dealt with categories not often considered in applied marketing research settings (e.g., dogs and music) as compared to the research finding higher external predictive validity (e.g., vacations and transportation mode). Our study concerns fast food restaurants, a relatively familiar, common category which may improve generalizability. In addition, in contrast to previous research which has regarded the lack of generalizability as a problem, it could in fact offer an opportunity to understand consumers’ preferences better. The lack of generalizability may itself be seen as valuable information. This research addresses the possibility of exploiting the information revealed by the gap between artificial and real.

MODELS

Hybrid Conjoint Model of Affect for Artificial Stimuli

One modeling approach which could help bridge the gap between artificial and real stimuli is hybrid conjoint (Green 1984). In hybrid, respondents provide self-explicated desirabilities for the levels of each attribute and self-explicated importance weights for each attribute. Combining these utilities and weights produces a compositional utility for every attribute-level combination. Respondents then rate the desirability of a limited number of conjoint profiles drawn from a larger master design. As in traditional conjoint, consumer preferences are decomposed from their ratings by partitioning out the variance due to the levels of the attributes in the profiles, but in hybrid the self-explicated utility for the profile is also included in the model.

The hybrid conjoint model can be formulated several ways, as shown in Green (1984). The notation for the model has been adapted from Moore and Semenik (1988). First, the expected affect for hypothetical, "artificial" products is calculated from the self-explicated data for an individual as follows:

U_{a} = Σ_{j} w_{j} Σ_{k} u_{jk} x_{ajk}   [1]

where

U_{a} = expected utility for artificial stimulus a, calculated from self-explicated data

w_{j} = self-explicated importance of attribute j

u_{jk} = self-explicated desirability of level k of attribute j

x_{ajk} = dummy variable indicating whether stimulus a possesses the k th level of the j th attribute

The self-explicated utility value is then included as one term in the model representing affect for the conjoint profiles:

Y_{ia} = G + γ U_{ia} + Σ_{j} Σ_{k} b_{jk} x_{ajk} + e_{ia}   [2]

where

Y_{ia} = i th respondent’s observed affect for artificial stimulus a

U_{ia} = expected utility for stimulus a, calculated from self-explicated data for the i th respondent

G = estimated intercept term

γ = estimated coefficient for the self-explicated utility

b_{jk} = estimated coefficient for level k of attribute j

e_{ia} = error in predicting respondent i’s affect for stimulus a

The weights are estimated by one OLS regression run across all respondents.
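The pooled estimation of equation [2] can be sketched in a few lines: stack an intercept, the self-explicated utilities, and the dummy-coded profile levels into one design matrix and run a single OLS fit across all respondents. The data below are simulated under assumed sizes (48 respondents times 6 profiles, 12 non-base dummies), not the study's.

```python
import numpy as np

# Simulated inputs for a pooled hybrid-conjoint OLS run (equation [2]).
rng = np.random.default_rng(0)
n_obs, n_dummies = 288, 12                  # 48 respondents x 6 profiles; 6 attributes x 2 non-base levels
X = rng.integers(0, 2, size=(n_obs, n_dummies)).astype(float)  # dummy-coded profile levels
U = rng.uniform(0, 1, n_obs)                # self-explicated utilities from equation [1]
b_true = rng.normal(0, 0.5, n_dummies)
y = 1.0 + 2.0 * U + X @ b_true + rng.normal(0, 0.3, n_obs)     # profile ratings

# Equation [2]: intercept G, coefficient gamma for U, one weight per dummy.
D = np.column_stack([np.ones(n_obs), U, X])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
G, gamma, b = coef[0], coef[1], coef[2:]

multiple_R = np.corrcoef(y, D @ coef)[0, 1]  # internal predictive validity
```

Imputing affect for a real stimulus then amounts to applying `coef` to a design row built from the real-domain attribute ratings instead of the profile dummies.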

Multiattribute Model of Affect for Real Stimuli

Affect for "real" products can be represented with an identically formulated multiattribute model:

where all terms are as defined before and

Y_{ir} = i th respondent’s affect for real stimulus r

Y_{ir} = expected utility for real stimulus r, calculated from self-explicated data for the ith respondent

The x_{rjk} represent the respondents’ judgment that stimulus r has level k on attribute j. These ratings of the attributes can be expressed as dummy variables, making the model parallel to hybrid conjoint (see Green, Rao, and DeSarbo 1978 for a similar method).
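A small sketch of that dummy coding, with hypothetical attribute names: each attribute's circled level becomes a block of 0/1 indicators, the x_{rjk}. (In estimation one level per attribute is typically dropped as the base to avoid collinearity with the intercept.)

```python
# Expand one respondent's circled level per attribute into dummy variables.
def to_dummies(perception, n_levels=3):
    """perception maps attribute name -> chosen level index (0-based);
    returns a flat 0/1 row with one block of n_levels entries per attribute."""
    row = []
    for attr in sorted(perception):          # fixed attribute order
        chosen = perception[attr]
        row.extend(1 if level == chosen else 0 for level in range(n_levels))
    return row

x = to_dummies({"cleanliness": 0, "price": 2})
# one 1 per attribute block: [1, 0, 0] for cleanliness, [0, 0, 1] for price
```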

Use of the Models to Generalize Affect Across Domains

All of the values in Table 1 could then be calculated after estimating equation [2] and equation [3]. Assessing the generalizability from artificial to real has typically focused on the correlation between observed affect for real products and affect imputed from the conjoint model, r {Y_{r}, Ỹ_{r}}. The relationship between these two sets of preferences can be defined more clearly by examining the mathematical models used to represent them. Specifically, equation [3] expresses observed preferences; the imputed preferences come from estimating equation [2] and then applying the weights to the independent variables in [3]. Observed and imputed affect for the real objects differ according to:

Y_{ir} − Ỹ_{ir} = (G^{r} − G^{a}) + (γ^{r} − γ^{a}) U_{ir} + Σ_{j} Σ_{k} (b^{r}_{jk} − b^{a}_{jk}) x_{rjk} + e_{ir}   [4]

where superscripts a and r denote parameters estimated in the artificial and real domains, respectively. Differences between observed and imputed affect can thus be stated in terms of the weights estimated from the conjoint model, the weights which would be estimated for the multiattribute model, and the error in the multiattribute model.

An additional point to consider in assessing the ability of a model calibrated in the artificial domain to generalize to the real domain is the relationship between affect imputed from the conjoint model and affect predicted from the multiattribute model. The multiattribute model must predict preferences for real stimuli at least as well as the conjoint model imputes those preferences. Therefore, the conjoint model could be assessed relative to the multiattribute model. In other words, how does r {Y_{r}, Ỹ_{r}} compare to r {Y_{r}, Ŷ_{r}}? The difference between predicted and imputed results can be seen by substituting Y_{ir} − Ŷ_{ir} for e_{ir} in equation [4], resulting in:

Ŷ_{ir} − Ỹ_{ir} = (G^{r} − G^{a}) + (γ^{r} − γ^{a}) U_{ir} + Σ_{j} Σ_{k} (b^{r}_{jk} − b^{a}_{jk}) x_{rjk}   [5]

This model says that any difference between affect imputed from the conjoint model, Ỹ_{r}, and affect predicted from the multiattribute model, Ŷ_{r}, is due to discrepancies in the two sets of regression weights. Regressing Ŷ_{r} on Ỹ_{r} brings out more clearly the discrepancies between the predicted and imputed results. The regression equation itself will simply remove the scale effects between predicted and imputed, but its residuals capture the total effect of the differences in the estimated weights. As the differences in the weights estimated in the two domains increase, the amount of unexplained variance (1 − R^{2}) increases. This unexplained variance might be seen as a useful indication of the differences between consumers’ utilities for artificial and real stimuli.
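The diagnostic described above can be sketched on invented numbers: build imputed and predicted values from two slightly different weight vectors, regress predicted on imputed, and read the unexplained variance off the residuals. The nine stimuli, four attributes, and weight values are all assumptions for illustration.

```python
import numpy as np

# Regress multiattribute predictions on conjoint imputations; 1 - R^2
# grows with the divergence of the two sets of weights.
rng = np.random.default_rng(2)
Xr = rng.normal(size=(9, 4))                      # attribute codes for nine "restaurants"
b_art = np.array([1.0, 0.6, -0.4, 0.3])           # weights estimated in the artificial domain
b_real = b_art + np.array([0.2, -0.1, 0.0, 0.1])  # slightly different real-domain weights

imputed = Xr @ b_art       # conjoint imputations
predicted = Xr @ b_real    # multiattribute predictions

A = np.column_stack([np.ones(len(imputed)), imputed])   # intercept removes scale effects
coef, *_ = np.linalg.lstsq(A, predicted, rcond=None)
resid = predicted - A @ coef        # stimulus-specific gaps between the two structures
R2 = 1.0 - resid.var() / predicted.var()
```

If `b_real` equaled `b_art`, the residuals would vanish and R² would be 1; the residual for any one stimulus is the paper's candidate marketing signal.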

RESEARCH DESIGN AND DATA COLLECTION

This study assessed the ability of models of affect calibrated on artificial stimuli to impute affect to real stimuli. For the study, the product category of fast food restaurants was selected because a) it entailed the evaluation of relatively tangible, objective, simple attributes and b) it was familiar and meaningful to the (student) subjects. This product category may enhance the likelihood of generalizing between artificial and real domains.

[Table 2: Fast Food Attributes and Levels]

Stimuli

Restaurant attributes and their levels, as well as the "real" stimuli, were derived from focus groups conducted with students, a report on fast food restaurants in Consumer Reports and the corporate management of one fast food chain. The final list of six attributes, each at three levels, appears in Table 2. In addition, the nine fast food restaurants which were selected also appear in Table 2.

Subjects

For this study, 150 undergraduate marketing students completed the tasks during class. (Because the conjoint design requires an equal number of respondents per subset of profiles, six randomly chosen subjects were dropped to produce three balanced blocks of 48 subjects each.) Most of the 144 remaining students had eaten at all nine restaurants; the median number was eight. Every respondent ate at one of these restaurants at least once a month, and on average at least once a week.

Design of the Respondents’ Tasks

First, subjects completed the self-explicated utility tasks following the steps outlined in HYCON (Green and Toy 1985). For the three levels on each attribute, they rated whether the level was best, acceptable, or unacceptable; ratings were later coded as 1.0, .5, and 0, respectively. These serve as the u_{jk} in equation [1]. Subjects then rank ordered the six attributes by importance; the reflected rankings were normalized by dividing each one by the sum of the ranks. These serve as the w_{j} in equation [1].
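The two coding steps above can be sketched as follows; the attribute names are hypothetical, not the study's list.

```python
# Map best/acceptable/unacceptable level ratings to the u_jk values.
def code_desirabilities(ratings):
    scale = {"best": 1.0, "acceptable": 0.5, "unacceptable": 0.0}
    return {attr: [scale[r] for r in levels] for attr, levels in ratings.items()}

# Reflect importance ranks (1 = most important) and normalize by their sum
# to obtain the w_j weights.
def normalize_ranks(ranks):
    n = len(ranks)
    reflected = {attr: n + 1 - rank for attr, rank in ranks.items()}
    total = sum(reflected.values())
    return {attr: v / total for attr, v in reflected.items()}

u = code_desirabilities({"price": ["best", "acceptable", "unacceptable"]})
w = normalize_ranks({"price": 1, "quality": 2, "speed": 3})
```

With three attributes ranked 1, 2, 3, the reflected ranks are 3, 2, 1, so the most important attribute receives weight 3/6 = .5 and the weights sum to 1.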

For the second task, a fractional factorial design of 18 full profile conjoint combinations was selected from an orthogonal main effects only design (Hahn and Shapiro 1966, Plan 6). Restaurant brand name was not included in the profile. The x_{ajk} in equations [1] and [2] represent the dummy coded variables from these profiles. Three subsets of profiles were developed such that every subject responded to only six profiles, with one-third of the subjects receiving each subset. All subjects also responded to the same three holdout profiles. Respondents rated their preference for each profile on a 7 point equal-interval scale from very undesirable up to very desirable.

In the third task, each real restaurant was presented to the respondent with scales for the same six attributes at the same three levels used in the conjoint profiles. Respondents circled the one level which matched their perception of that restaurant on each attribute. Respondents rated, for example, their perception of McDonald’s on cleanliness. These ratings become the x_{rjk} in equation [3]. Then they rated the desirability of each restaurant on a seven point scale, exactly as in the conjoint task.

Model Fitting and Analysis

Determining which real restaurants to include in the model estimation and which to include in the holdout sample occurred after gathering the data. The three holdout restaurants were chosen to match the preference ordering of the three holdout profiles compared to the other profiles. That is, one holdout profile ranked near the bottom of all the profiles, one above the middle, and one near the top. In the same relative positions were Arby’s, McDonald’s, and Ponderosa which became the holdout sample for testing the predictive validity of the model estimated on the other six restaurants.

Two models were run: equation [2] estimated the preference structure in the artificial domain and equation [3] estimated the preference structure in the real domain. In both cases, one OLS regression was run across the 144 respondents and their six ratings. In addition, the self-explicated utility was calculated for each profile and restaurant using equation [1].

After estimating the two models, predicted and imputed affect values were calculated, then correlated with the observed ratings at a total group level, to match the entries in Table 1. Then, following equation [5], the predicted affect for the real restaurants was regressed on the imputed affect to investigate the consequences of the differences in the estimated weights across domains.

RESULTS

Model Estimation

Table 3 presents the parameters for the two estimated models. Both capture the majority of the variance, but the multiattribute model produces a higher multiple R value of .83 compared to .73 for the conjoint model. In the conjoint model, the self-explicated utility was significant, while in the multiattribute case it was not. Rank ordering the attributes by the magnitude of the coefficients, the food quality attribute had the largest impact on preference and the variety of food the smallest, in both domains. Compared to the coefficients based on the real restaurants, subjects appear to be more price sensitive, but less influenced by atmosphere, in rating the profiles.

Group Level Correlations

To examine the generalizability of the models across domains, the predicted and imputed ratings were correlated with the observed ratings as presented in Table 4. Reading across the first row in the table, the group level conjoint model attained a .73 correlation between the predicted and observed ratings for the artificial stimuli. Imputed and observed ratings for the real stimuli correlated at the .78 level. Surprisingly, the conjoint model produced a somewhat higher external validity score, r {Y_{r}, Ỹ_{r}}, than internal validity score, r {Y_{a}, Ŷ_{a}}. The same pattern occurs with the holdout profiles and holdout restaurants, though as might be expected the correlations are lower.

In the second row, the group level multiattribute model yields a .67 correlation between the observed preferences for the profiles and those imputed by the model. This compares to a correlation of .83 between the observed preferences for the real restaurants and the model predictions. The multiattribute model was very successful at the internal prediction of the restaurants, r {Y_{r}, Ŷ_{r}}, but it dropped off considerably when predicting the profiles, r {Y_{a}, Ỹ_{a}}. The same pattern is seen in predicting the holdouts, again at a lower level of correlation.

For comparison, the self-explicated results appear in the third row. The conjoint and multiattribute models outperform the self-explicated model in every case, except when the multiattribute model is used to predict the holdout conjoint profiles. In both models, there was a significant increase in the variance explained by adding the dummy variables (i.e., equations [2] and [3]) compared to simply including the self-explicated utility (equation [1]). For artificial stimuli, the amount of variance explained improved from .64 to .73 (F(12,850)=18.05, p<.01), but it improved even more for real stimuli, from .69 to .83 (F(12,850)=48.19, p<.01).

Generalizing from Artificial to Real

These results indicate that generalizing from the hybrid conjoint model to real affect is relatively reliable: imputed and observed ratings correlated .78. As expressed in equation [5], differences between the preference structure (captured by regression weights) estimated in the domain of real objects and that estimated in the domain of artificial objects can be modeled through Ŷ_{r} and Ỹ_{r}. Regressing affect predicted from the multiattribute model on the affect imputed from the conjoint model results in the following equation (only the six restaurants in the original estimation of the multiattribute model are included):

Ŷ_{r} = .644 + .970 Ỹ_{r}

With an R^{2} of .87, the equation leaves 13 percent of the variance unexplained. Applying the above equation to each of the predicted and imputed values for the nine restaurants, the residuals were computed, and tested against the expected residual of 0 for each restaurant. These results appear in Table 5. An alternative approach would be to regress the observed ratings for the real restaurants on the imputed ratings. While this would also allow us to detect differences by restaurants, it would not focus on the differences due to the estimated weights as shown in equation [5].
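The per-restaurant test described above can be sketched as a one-sample t statistic against an expected residual of 0; the residual values below are fabricated for illustration.

```python
import numpy as np

# One-sample t statistic for the mean of a restaurant's residuals versus 0.
def t_against_zero(resid):
    resid = np.asarray(resid, dtype=float)
    n = resid.size
    return resid.mean() / (resid.std(ddof=1) / np.sqrt(n))

t = t_against_zero([1.0, 2.0, 3.0])   # mean 2, sample s.d. 1, n = 3
# t = 2 / (1 / sqrt(3)) = 2 * sqrt(3)
```

A significantly negative t flags a restaurant rated below what a matching conjoint profile would command, and a significantly positive t the reverse.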

[Table 4: Predictive Validity of Group Level Models]

Arby’s, Burger King, Hardee’s, and McDonald’s received significantly lower predicted values from the multiattribute model than might be expected. In other words, if we ignore the intercept of .644, then for these restaurants the preference value predicted from the multiattribute model was less than the preference imputed from conjoint, Ŷ_{r} < Ỹ_{r}. The negative residual for Hardee’s, for example, means that a conjoint profile with the characteristics of a Hardee’s would have received (on average) a higher rating than Hardee’s did in the "real" world. Conversely, Hoss’s received higher than expected multiattribute predictions, Ŷ_{r} > Ỹ_{r}. The positive residual for Hoss’s implies that a conjoint profile that "looked like" a Hoss’s would have received a lower rating. For the remaining four restaurants, the residuals did not differ significantly from 0.

One explanation for these results could be that the residuals of the multiattribute model itself differ by restaurant, possibly due to positive or negative halo. Following the same steps, the residuals of equation [3] were calculated and tested against the expected residual of 0 by restaurant. None of the six restaurants in the calibration set differed from 0, even at a p<.10 level. Thus, the multiattribute model captured the preferences equally well for each real restaurant.

DISCUSSION AND CONCLUSIONS

There are two important findings from our research on the generalizability from artificial to real. First, preferences imputed from the hybrid conjoint model, Ỹ_{r}, converged well with the observed preferences, Y_{r}, for the real, existing restaurants. About 61 percent of the variability (R=.78) in the restaurant preferences was captured by the conjoint model, which had been calibrated in the artificial, hypothetical domain. This level of generalizability from artificial to real is higher than that found in most previous research in this area. Our results suggest that models based upon artificially manipulated stimuli can provide reasonable predictions of behavior in the world of real products. Interestingly, the hybrid model performed as well in the domain of real restaurants as it did in the domain of artificial stimuli. Moore and Holbrook (1982) also found that a conjoint model predicted well on holdouts in both domains.

This study begins to address the lack of research on the ability of conjoint models to predict actual preferences, choices, sales, or market shares. In recent review articles, both Green and Srinivasan (1990) and Wittink and Cattin (1989) state that there has been little published evidence of the predictive validity of conjoint except for cross-validation to holdout sets of profiles. In our study, cross-validation resulted in a total group level correlation of .63 between predictions from the hybrid model and holdout profiles. But the more interesting result was the .78 correlation between predictions from the hybrid model and preferences for the real restaurants. These results provide some confidence in the predictive validity of conjoint models relative to real stimuli as well as relative to holdout profiles.

The second finding of this research is that the differences between preference structures estimated in the two domains may provide useful marketing information. The effects of the differences in the estimated weights of the conjoint and multiattribute models were brought out more clearly by regressing predicted preferences, Ŷ_{r}, for real restaurants on imputed preferences, Ỹ_{r}. Examining the regression residuals provides a measure of how well each restaurant would fare according to the conjoint equation compared to the multiattribute equation. Applying the conjoint weights to the real attribute ratings finds that consumers would be expected to prefer McDonald’s, for instance, to a greater extent than was predicted from the multiattribute model. The imputed preferences represent a benchmark, "objective" level of preference which the product "should" be able to command, ceteris paribus. As measured in the conjoint task, consumers say they would like a restaurant with the characteristics of a McDonald’s. But when rating McDonald’s in the multiattribute task, consumers could draw upon their experience of actually going there (all 144 subjects had eaten at McDonald’s). Thus the discrepancies between the preference structures estimated in the two domains point out the value of the image of the real restaurant itself. In this instance, the effect of being "McDonald’s" appears to lower the preference compared to an identical but unnamed restaurant profile.

[Table 5: Testing Residuals of Predicted Affect Regressed on Imputed Affect]

These residuals could also be thought of as a measure of brand equity. As defined in Farquhar (1989), brand equity is the "added value with which a given brand endows a product" (p. 24). One approach to measuring brand equity has been to ask consumers their preferences for branded and unbranded versions of the same product, such as colas or cereals. Any difference in the two sets of preferences is the effect of the brand name, the brand’s equity (Chay 1991). This "residual analysis" approach is quite similar to our analysis of the differences between imputed and predicted preferences for real restaurants. We do not use a direct measure of preference such as that obtained from a taste test, but infer preferences through the estimated models. As a result there is the possibility that the residuals we examine are correlated with omitted variables. The apparent negative equity for McDonald’s could be due to a poor location, for instance. (In this study, the residual amounts were not related to demographic or usage variables.)

Future research could map out the boundary conditions for generalizing from artificial to real. Holbrook et al. (1985) may have identified a lower bound, where the estimated models predict well within their respective domains but poorly across domains. In fact, the series of studies from Holbrook (1981) to Holbrook and Havlena (1988) serves as a warning that consumers' preferences cannot always be validly predicted from conjoint alone. Is there also an upper limit to how well conjoint models can predict preferences for real products? We have speculated that one factor affecting generalizability is the product category itself. When preferences for real products are determined by relatively tangible, simple attributes, a conjoint model based upon those attributes might be expected to predict well across domains. Although it may seem ironic, a real brand that is predicted well with conjoint in effect has no brand equity, following the logic of residual analysis as applied in this paper. Such a brand is nothing more than the sum of its attribute parts. On the other hand, in product categories where brand name has a larger influence on preferences, we would expect generalizability from artificial to real to decrease.

REFERENCES

Chay, Richard F. (1991), "How Marketing Researchers Can Harness the Power of Brand Equity," Marketing Research, 3 (June), 30-37.

Farquhar, Peter H. (1989), "Managing Brand Equity," Marketing Research, 1 (September), 24-33.

Green, Paul E. (1984), "Hybrid Models for Conjoint Analysis: An Expository Review," Journal of Marketing Research, 21 (May), 155-169.

Green, Paul E., Vithala R. Rao, and Wayne S. DeSarbo (1978), "Incorporating Group-Level Similarity Judgments in Conjoint Analysis," Journal of Consumer Research, 5 (December), 187-193.

Green, Paul E. and V. Srinivasan (1990), "Conjoint Analysis in Marketing: New Developments with Implications for Research and Practice," Journal of Marketing, 54 (October), 3-19.

Green, Paul E. and Daniel R. Toy (1985), HYCON: Conjoint Analysis and Buyer Choice Simulation. Palo Alto, CA: The Scientific Press.

Hahn, G.J. and S.S. Shapiro (1966), "A Catalog and Computer Program for the Design and Analysis of Symmetric and Asymmetric Fractional Factorial Experiments," Technical Report No. 66-C-165, General Electric Research and Development Center, Schenectady, NY.

Holbrook, Morris B. (1981), "Integrating Compositional and Decompositional Analyses to Represent the Intervening Role of Perceptions in Evaluative Judgments," Journal of Marketing Research, 18 (February), 13-28.

Holbrook, Morris B. and William J. Havlena (1988), "Assessing the Real-to-Artificial Generalizability of Multiattribute Attitude Models in Tests of New Product Designs," Journal of Marketing Research, 25 (February), 25-35.

Holbrook, Morris B., William L. Moore, Gary N. Dodgen, and William J. Havlena (1985), "Nonisomorphism, Shadow Features and Imputed Preferences," Marketing Science, 4 (3), 215-233.

Levin, Irwin P., Jordan J. Louviere, Albert A. Schepanski, and Kent L. Norman (1983), "External Validity Tests of Laboratory Studies of Information Integration," Organizational Behavior and Human Performance, 31, 173-193.

Moore, William L. and Morris B. Holbrook (1982), "On the Predictive Validity of Joint-Space Models in Consumer Evaluations of New Concepts," Journal of Consumer Research, 9 (September), 206-210.

Moore, William L. and Richard J. Semenik (1988), "Measuring Preferences with Hybrid Conjoint Analysis: The Impact of a Different Number of Attributes in the Master Design," Journal of Business Research, 16, 261-274.

Wittink, Dick R. and Philippe Cattin (1989), "Commercial Use of Conjoint Analysis: An Update," Journal of Marketing, 53 (July), 91-96.

----------------------------------------

##### Authors

W. Steven Perkins, M/A/R/C

Daniel R. Toy, California State University at Chico

##### Volume

NA - Advances in Consumer Research Volume 24 | 1997
