A Comparison of Linear and Nonlinear Transformations of the Dependent Variable in Conjoint Analysis


Marcus Schmidt (1995), "A Comparison of Linear and Nonlinear Transformations of the Dependent Variable in Conjoint Analysis", in E - European Advances in Consumer Research Volume 2, ed. Flemming Hansen, Provo, UT: Association for Consumer Research, Pages: 310-319.



Marcus Schmidt, Southern Denmark Business School

The paper provides an example of a simple conjoint analysis using a mail panel approach. The panel is a nationwide representative panel operated by a polling firm and used for measuring opinions on political issues. First, part-worth utilities were estimated using a transformational regression procedure. Next, utilities were computed using OLS. Finally, the results were compared. Although the parameter estimates were roughly similar, the fit was far from perfect. The study emphasizes the importance of not using OLS uncritically when analyzing conjoint data. [The author would like to thank GfK Denmark for kindly providing the data used for analysis in this paper.]

Since the early 1970s, conjoint analysis has received considerable academic and industry attention as a major set of techniques for measuring buyers' tradeoffs among multiattributed products and services (Green and Srinivasan 1990; Vriens and Wittink 1994). Much research effort has focused on investigating and solving technical questions regarding the conjoint model. Table 1 gives an overview of alternate methods across the different steps involved in a typical conjoint study. This overview is not complete, since it omits mixed methods like mixed preference models and mixed media for collecting the data (e.g., the telephone-mail-telephone (TMT) approach). Likewise, table 1 is not exhaustive with respect to stimuli construction sets, estimation methods, etc.

Nevertheless, it is easy to see that one might carry out conjoint analysis in many ways that differ in how the experimental design is precisely defined. If we assume:

3 preference models x 2 collection methods x 3 types of collecting media x 5 design methods x 4 forms of stimulus presentation x 5 ways of scaling the dependent variable x 3 estimation methods, there will be 5400 different ways of operationally defining a design for a conjoint study.
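The figure of 5400 is simply the product of the counts of alternatives listed above. A minimal Python sketch (the factor labels merely restate the enumeration in the text):

```python
# Number of distinct ways to operationalize a conjoint design,
# multiplying the alternatives enumerated in the text.
factors = {
    "preference models": 3,
    "collection methods": 2,
    "collection media": 3,
    "design methods": 5,
    "stimulus presentations": 4,
    "dependent-variable scalings": 5,
    "estimation methods": 3,
}

total = 1
for n in factors.values():
    total *= n

print(total)  # 5400
```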

Instead of being occupied with (a) simple evaluation studies or (b) complex comparison studies based on the issues addressed in table 1, research frontiers seem to proliferate into new areas like those mentioned in table 2. Thus at least some relatively simple, though important, research questions are left not completely solved. The size of this problem becomes obvious when one considers the mass of methodological conjoint studies carried out during the past decade: the degree of mathematical content regarding theoretical algorithms and the use of artificial data (e.g., Monte Carlo simulation) for evaluating the model has been growing. During the seventies and the early eighties many conjoint papers were published in marketing journals like JMR, Journal of Marketing, and Journal of Consumer Research. In recent years most papers on multiattribute models like conjoint analysis are published in journals like Psychometrika, Decision Sciences, European Journal of Operational Research, and Management Science. Of course there are plenty of exceptions to this rule, but the trend is unmistakable.

It is the aim of this paper to address two important issues which still deserve attention in conjoint analysis: (A) Is it possible to use mail interviews as a data collection medium in conjoint analysis? (B) Is it justified to use only an OLS model for computing parameter estimates?


The face-to-face approach (computerized or paper-and-pencil) prevails in both academic and commercial studies. A paper by Wittink, Vriens, and Burhenne (1992) summarizes approximately one thousand commercial conjoint studies carried out in the US (1981-85) and Europe (1986-91) respectively. Only a few percent of the studies reported by the research agencies used other interviewing instruments like the telephone (8% US sample, 7% European sample) and mail (9% US sample, 3% European sample). While telephone approaches like the TMT method and the locked box are discussed in the academic literature (Schwartz 1978; Levy, Webster, and Kerin 1983; Green and Srinivasan 1990), academic interest in the mail technique has been limited indeed. According to Vriens and Wittink (forthcoming, 39):

"A mail interview does not appear to be attractive for several reasons. One, it is impossible to tell who provided responses. If the instrument is sent to a household and the head of the household is asked to respond, we can only hope that the instructions were followed. Two, there is no opportunity for the respondent to obtain assistance (unless a phone number is provided). Three, respondents may differ greatly in when they choose to answer the questions and how much time they spend on the task."

Although these arguments seem plausible they need not be valid, especially not regarding the first two of the above objections. While the objections may hold when using a simple random sample that is contacted only once, they need not hold when one uses a well-established panel of respondents. When using a mail panel (with a phone number provided) the researcher can check whether the questionnaire has been filled in by the appropriate respondent (assuming the respondent is not lying). He or she can also check whether the conjoint questionnaire is filled in as the researcher intended (the researcher is, however, not in control of the amount of time spent filling in the questionnaire). Thus, it is argued that a mail interview can be used for data collection in the field, if one uses the panel approach for administering the field interviews.


A frequently used way to analyze conjoint data is to formulate a model that treats the attribute levels involved as effects-type, dummy-coded explanatory variables while treating the rank score (i.e., the indicator of preference) as a linearly transformed dependent variable.

Studies by Wittink and Cattin (1989) and Wittink, Vriens, and Burhenne (1992) show a growing popularity of rating scales and OLS models in commercial settings (see table 3). During the seventies only every third conjoint study used rating scales and only 16% were based on OLS algorithms. When comparing this finding with a similar study conducted among European companies in the late eighties and early nineties it becomes clear that the measurement approach has changed remarkably. Today almost 60% of companies report applying OLS-based software and more than two out of three commercial studies use rating scales (these figures are affected by the increasing popularity of ACA over time). Thus, it remains an important methodological question whether it is justified to use ordinary least squares (OLS) for analyzing preference-based or choice-based conjoint data.





An OLS model assumes that the dependent variable is ratio scaled or at least interval scaled (metric), while the empirical input data quite often will be based on a ranking scale (nonmetric). Several studies, e.g., Cattin and Wittink (1977) and Carmone, Green, and Jain (1978), indicate that the interpretation is not seriously violated when OLS models are used for analyzing ranking (choice) data. However, the first study does not present primary data while the latter is based on Monte Carlo results. For a brief discussion see also Green and Srinivasan (1990, 8). [Curry and Rogers (1977) investigate what happens if one uses nonmetric algorithms for analyzing metric input.]
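As a structural illustration of this OLS approach, the following Python sketch estimates part-worths for a single hypothetical respondent, using effects-type dummy coding and treating rank scores as if they were metric. The attribute structure and data are invented for illustration only; they are not taken from the study (which used ten attributes):

```python
import numpy as np

# Hypothetical single-respondent example: two attributes, three levels each.
# Each profile is a pair of level indices (levelA, levelB).
profiles = [(0, 1), (1, 2), (2, 0), (1, 1), (0, 2), (2, 2)]
ranks = np.array([6, 4, 2, 5, 3, 1], dtype=float)  # higher = more preferred

def effects_code(level, n_levels=3):
    # Effects coding: levels 0..n-2 get an indicator column; the last
    # level is coded as -1 on every column (so part-worths sum to zero).
    if level == n_levels - 1:
        return [-1.0] * (n_levels - 1)
    row = [0.0] * (n_levels - 1)
    row[level] = 1.0
    return row

X = np.array([effects_code(a) + effects_code(b) for a, b in profiles])
X = np.column_stack([np.ones(len(profiles)), X])   # add intercept
beta, *_ = np.linalg.lstsq(X, ranks, rcond=None)   # OLS part-worths
```

The vector `beta` holds the intercept followed by the part-worths for the first two levels of each attribute; the third level's part-worth is the negative sum of the other two under effects coding.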





An alternate way of aggregating conjoint data is to proceed iteratively. Under this approach it is assumed that the part-worths might be more successfully estimated using a nonlinear (monotone) transformation of the dependent variable.

A typical monotone transformation decomposes rank-ordered evaluation judgements of objects into components based on qualitative object attributes. For each attribute of interest a numerical utility value is estimated. The goal is to compute utilities such that the rank ordering of the sums of each object's set of utilities is the same as the original rank ordering, or violates the ordering as little as possible (for a somewhat more technical description see below).
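The idea can be sketched in Python: part-worths and a monotone transformation of the dependent variable are estimated in alternation, with the monotone step implemented here by pool-adjacent-violators (PAVA). This is a simplified illustration of the monotone alternating-least-squares idea, not a reproduction of any particular program's algorithm:

```python
import numpy as np

def pava(y):
    # Pool-adjacent-violators: least-squares nondecreasing fit to y.
    blocks = [[float(v), 1] for v in y]   # [block mean, block size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:
            m = blocks[i][0] * blocks[i][1] + blocks[i + 1][0] * blocks[i + 1][1]
            n = blocks[i][1] + blocks[i + 1][1]
            blocks[i] = [m / n, n]        # pool the violating pair
            del blocks[i + 1]
            i = max(i - 1, 0)
        else:
            i += 1
    out = []
    for mean, n in blocks:
        out.extend([mean] * n)
    return np.array(out)

def monotone_als(X, ranks, iters=50):
    # Alternate between (1) OLS estimation of part-worths and (2) replacing
    # the dependent variable by the monotone transformation of the ranks
    # closest (in least squares) to the current predictions.
    order = np.argsort(ranks)             # positions in rank order
    z = np.asarray(ranks, dtype=float).copy()
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(X, z, rcond=None)
        pred = X @ beta
        z = np.empty_like(pred)
        z[order] = pava(pred[order])      # monotone in the original rank order
        if z.std() > 0:                   # rescale to avoid collapse to a constant
            z = (z - z.mean()) / z.std()
    return beta
```

By construction the transformed dependent variable never reverses the original rank order; the part-worths are then whatever best reproduces that transformed variable.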

The present study compares parameter estimates generated using the latter approach with those obtained from OLS.


The study was carried out in Denmark during 1994. GfK Denmark, a major research agency in the country, has operated a nationally representative mail panel of respondents continuously for 27 years. Of course the panel is operated and administered according to the "basics" of traditional panel surveys: respondents are exchanged after some years and new ones are recruited. This is done while considering the distribution of the population across a set of demographic and social background criteria (age, sex, household size, urbanization, income, etc.). Respondents do not get any incentives. However, they automatically take part in a lottery where they can win moderate gifts like a bottle of red wine, chocolate, etc. The panel's approximately fourteen hundred respondents receive a questionnaire by mail once a month. The questionnaire typically consists of four to eight pages of questions dealing with political issues, social phenomena, and economic aspects. If (a) the wording of the questions is kept simple, (b) technical instructions are provided in a pedagogical way, (c) the scaling is easy for the respondent to handle (Yes/No, Like/Don't like, etc.), and (d) the questionnaire is not too long, then the return rate usually will be in the upper eighties. Considering the somewhat narrow objectives of the present panel [The panelists are used to dealing with political issues only; they are never asked questions related to marketing, branding, etc.], it was decided to use a political environment when defining the attributes and levels for the study. The attributes of the study are shown in table 4.

As a matter of pure coincidence the study was carried out a few months before the elections for the national parliament. Therefore, two polls dealing with the most important campaign issues were published by two rival research agencies [The polls were carried out by the Danish Gallup Institute and the Sonar Institute, respectively.] prior to the present study. The purpose of these two commercial studies was almost the same: producing a "hitlist" of the most important campaign issues, seen from the voters' point of view.





Though the survey instruments, questionnaire design, and scaling technology were not completely comparable, it was nevertheless possible to carry out a limited comparison of the two studies (table 5). In spite of the difference in methodology the two studies were quite similar in their findings regarding the most important campaign issues. The main results of these two commercial studies were used as input when defining the attributes for the present study (table 4). Due to technical problems (compatibility) only the first five issues could be used as attributes in the present study. The remaining issues of the two studies did not make sense as attributes for the conjoint study. [The published "hitlists" contained data on nineteen topics (Gallup) and sixteen topics (Sonar) respectively. However, topics like "The Economy", "Balance of Payments", "Others", etc. were not found suitable as attributes in the present context.] Therefore, other attributes had to be defined for the design. This was done by the researcher, who had some prior knowledge of politics (he had recently published a book on political analysis in the native language). Since the design had to be quite simple and easy for the panel respondents to handle, it was decided to limit the number of attributes for the conjoint study to ten (three levels each). For each attribute respondents were supposed to use the present amount of public spending on the political attribute, as perceived by the respondent, as the point of reference. Three levels were defined per attribute: (1) spend more/increase expenditures on the attribute, (2) spend as at present/amount of expenditures unchanged, (3) spend less/decrease expenditures. The design profiles (political issues) were selected using the Sawtooth ACA program. However, the ACA interview was only used for constructing the questionnaire. The data collection facilities of ACA could not be used in the present study.
Using this facility would have implied mailing field diskettes to all respondents of the mail panel. This did not make sense since only 20% of the respondents reported having easy access (i.e., at home) to a computer. The "unacceptables" question (an ACA option) was not asked. With respect to the ranking section of ACA it was assumed that all attributes were scaled according to a linear (ratio) scale within the range of the study. Thus, it was assumed that Spending more > Spending as present > Spending less, where ">" = preferred to. While the rationale of this assumption can indeed be questioned, it corresponds with the conventional view held by native political experts regarding the overall preferences of the voters. During the importance section of ACA the "spend more" level was consistently regarded as "very important" when compared to the "spend less" level. [Again this procedure can be questioned. However, the researcher found that a standardized procedure was necessary at this phase of the analysis.] Next, ten paired comparisons were selected, with only two attributes per profile. Since the questionnaire scaling had to be kept extremely simple, the scaling used was a categorical assignment (prefer left or prefer right, with no "don't know"/"can't decide" boxes). The calibrating concepts section of ACA was not used.

Three months after the study was carried out a national election took place. As it turned out the last pre-election poll carried out by the GfK company (using the present mail panel) came quite close to the outcome of the election. See table 6.

Based on table 6 it seems reasonable to conclude that the mail panel instrument of GfK Denmark was, during the present election, quite suitable for measuring the political party preferences of the voters on a categorical scale (vote for party x/non-vote for party x). Since the voters had to choose between political parties and not between political issues (or between conflicting combinations of political issues or priorities) [The country is a representative democracy and not a direct democracy allowing for votes on issues.], it was obviously not possible to compare utilities estimated using the panel and the population respectively. This would, indeed, have been a splendid opportunity to empirically validate the conjoint model. But since the mail panel instrument proved to be quite a valid tool for estimating political party preferences on a categorical scale, it is assumed that the same holds regarding an estimation of the unknown but "true" preferred political issues. It is further assumed that the panelists are exposed to enough "relevant and important" political profiles (i.e., issue combinations or packages). Finally, it is assumed that successful completion of the data-gathering task is facilitated provided that the profiles are displayed to subjects in the form of simple pairwise comparisons (see below).



There is at least one significant advantage in choosing a political environment for model-validating purposes: the existence of a considerable body of empirical scientific knowledge (secondary data) addressing political preferences (even some kind of trade-off study has been published in the native language).

In the present study 1357 questionnaires were mailed, of which 1181 were returned before the deadline (five days after receipt by the respondent). However, 82 respondents did not fill in the question covering the conjoint task, thus reducing the number of usable observations for the conjoint analysis to 1099 (a return rate of 81%). Table 7 displays the ACA design used in the present study. Every respondent was asked to choose one of two profiles, each having two attribute levels. In the first comparison one had to choose between the following two profiles using a categorical assignment:


When analyzing the data, choice of profile one implies the data value "2", while profile two gets the value "1" (symbolizing higher preference for profile one when compared to profile two). While this coding procedure might be questioned, there was no other way of doing it that seemed more appropriate. One should remember that the conjoint analysis was kept extremely simple, thus leaving only sparse data for analysis (nominal scales regarding the explanatory variables and a categorical scale regarding the dependent variable). As one can see from table 7, the design consists of ten pairwise comparisons. Since the study involved ten attributes with only two attributes appearing on each profile, it was assumed that the respondents were expecting an "all other things being equal" state regarding the remaining attributes. An alternative to the chosen ACA design might have been some kind of powerful incomplete block design. Cochran and Cox (1957, 475) present a design (Plan 11.14) which one could use with ten treatments and two units per column and row. Unfortunately, the design involves 45 blocks (and then one would still have to use at least several cards within each block). When one uses the pairwise comparisons approach in the way it is administered here, this implies the use of several hundred profiles. The exact number of total profiles necessary depends on the design used within each block. An appropriate design proposed by Conjoint Designer (Bretton Clark) with two features and three levels per feature suggests nine cards within each block! Even when one allows for pairwise comparisons with different attributes (e.g., Profile 1: Less on environment + More on refugees versus Profile 2: More on culture + As present on traffic), it is still required that each respondent evaluate more than a hundred profiles instead of twenty as in the present study.
The task does not become easier when respondents are asked to rate each profile individually instead of performing pairwise comparisons. Using much more than twenty profiles was regarded as prohibitive and would strongly increase the amount of poor and missing data.




First, the data were analyzed using a transformational regression procedure included in the SAS/STAT software package (PROC TRANSREG). This procedure extends the ordinary general linear model by providing variable transformations that are iteratively derived using the method of alternating least squares suggested by Young (1981). The alternating least-squares algorithm adds one additional capability to the general linear model; it allows variables whose full representation is a matrix consisting of more than one vector to be represented by a single vector, which is an optimal linear combination of the columns of the matrix. For any type of linear model, an alternating least squares program can solve for an optimal vector representation of any number of variables simultaneously.

Because the alternating least squares algorithm can replace a matrix with a vector, it can be used for fitting a linear model for many types of variables, including nominal variables and ordinal variables (with or without category constraints). [According to the SAS User's Guide (1513-14). It should be stressed that PROC TRANSREG can also perform metric conjoint analysis using an ordinary least squares method. This feature was not used here. Instead the author used the traditional regression model PROC REG, since it was the aim of the paper to compare a "tailor-made" monotonic conjoint model like the alternating least squares model with the "classic" OLS model.]

Table 8 (columns 2 and 4) displays the estimated part-worth utilities regarding all 1099 respondents. First the utilities were estimated on an individual-level basis and then they were aggregated across respondents. In column 6 the importance of each attribute has been computed as suggested in Green and Tull (1978, 485-86). Since the second level of each attribute was the "as present" level, it was not possible to estimate part-worth utilities for the second level. The reason is that level two ("same as present") could not be technically separated from "other things being equal" in any meaningful way. When using an incomplete design (i.e., when using blocks) one is forced to assume an "other things being equal" perceptual state regarding the remaining attributes not presented in an individual profile or comparison (the rationale of this assumption can be questioned). Therefore, table 8 only presents coefficients for the first and third levels, but not for the intermediate level. Perhaps one could make the assumption that the second level is to be regarded as the point of reference (+/-0).
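The importance computation referred to (Green and Tull 1978) takes each attribute's part-worth range relative to the sum of ranges across attributes. A small Python sketch with invented part-worths, treating the "as present" level as the 0 reference point:

```python
# Hypothetical part-worths for three attributes (levels: more / less);
# the "as present" level is taken as the 0 point of reference.
part_worths = {
    "education":   {"more":  0.42, "less": -0.31},
    "refugees":    {"more": -0.25, "less":  0.18},
    "environment": {"more": -0.05, "less": -0.22},
}

# Importance = an attribute's part-worth range (including the 0 reference)
# divided by the sum of ranges across all attributes.
ranges = {a: max([0.0] + list(v.values())) - min([0.0] + list(v.values()))
          for a, v in part_worths.items()}
total = sum(ranges.values())
importance = {a: r / total for a, r in ranges.items()}
```

The importances sum to one by construction, so they can be read directly as relative shares.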

When defining the set-up for the ACA-design it was decided that each attribute was scaled according to an interval scale with a natural direction of preference ("the more the better").

This assumption was confirmed with respect to five of the attributes (unemployment, pensioners, education, social & health, and police). However, regarding three attributes (refugees, culture, and traffic) the trend was reversed. Finally, concerning the remaining two attributes (environment and defense), both estimated parameters (levels) contained negative signs, thus showing that the present level of spending is preferred over spending more and spending less, respectively. [However, this is only an assumption. Due to technical considerations "0" was used as the level of highest preference when computing the importance (numerical addition of top level minus bottom level computed across all attributes) regarding the first and the last attribute (environment and defense respectively), since both levels/coefficients were negative. When performing these computations differently the ranking of importances (table 8, columns 6-9) would be different.] Column 9 in table 8 can partially be compared with the "self-explicated" importance rankings of table 5. Though there is some correspondence between the importance ranking computed using the conjoint model (GfK) and the two simpler approaches (presented by Gallup and Sonar), differences remain: the conjoint design appears to (1) upgrade the importance of education as compared to Gallup and Sonar (this tendency is intensified when one looks at the appropriate OLS figure; see column 9 in table 8), (2) upgrade the importance of the attribute refugees (again OLS supports this finding and the trend is much more pronounced), and (3) downgrade the importance of the attribute unemployment (according to OLS this attribute is almost without importance).



While the first finding seems reasonable and the second seems partially reasonable, the last finding does not seem reasonable at all. Indeed, it is in conflict with what one would expect to find. [These are subjective evaluations, based on the present researcher's knowledge of several polls in Denmark.] Presently there are no obvious explanations for these contradictions or inconsistencies with common-sense knowledge based on polls (there are no data proving the existence of some kind of contradiction).

Next, the author was interested in evaluating the "convergent" validity of the conjoint model. Therefore, the estimation phase was repeated using the 1099 respondent data as input, this time running the analysis using a "classic" OLS model (SAS PROC REG). The results are displayed in table 8 (columns 3, 5, 7, and 9). These figures (OLS columns) are to be compared with the corresponding columns displaying the results of the nonmetric conjoint analysis (NMC columns 2, 4, 6, and 8). Though many estimates are similar across methods, differences remain. In two cases (4. unemployment attribute/spend-more level, and 9. police attribute/spend-more level) the sign differs. [However, the zero point is arbitrary!] Out of twenty comparisons of estimates, more than half (eleven) were statistically significant. However, one should remember that the data matrix was big and that in such cases it becomes quite easy to reject null hypotheses regarding similarities of estimates; see Green (1978, 170 and 338). Basically this is due to what economists call the "law of large numbers": as sample sizes grow very large, even small differences become statistically significant. If the present analysis had used only, say, 100 respondents, then few, if any, differences in estimates would have turned out to be significant. Moreover, the empirical sample at hand was very heterogeneous (it was a nationwide representative sample). From a purely subjective point of view the NMC estimates seem to agree better with common sense than the OLS utilities (indeed some of the OLS estimates look strange). Presently there is no obvious explanation for the modest convergent validity.

Another way of evaluating the convergent validity is to correlate the utility values across all 1099 respondents. This has been done in table 9. An inspection of table 9 shows a high correlation: only one correlation is less than .50 and only two are less than .70. In half of the twenty cases the correlation was above .90, and in several cases the fit was practically ideal (.97-.99). The average correlation is .85, which seems rather encouraging.
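Such a convergent-validity check amounts to a Pearson correlation, across respondents, between the utility a level receives under the two estimation methods. A Python sketch with simulated data (the utilities below are made up; the OLS values are generated as a noisy version of the nonmetric ones purely to illustrate the computation):

```python
import numpy as np

# Simulated utilities for 1099 respondents under the two methods.
rng = np.random.default_rng(0)
nmc = rng.normal(size=1099)                          # nonmetric (ALS) utilities
ols = 0.9 * nmc + rng.normal(scale=0.3, size=1099)   # correlated OLS utilities

r = np.corrcoef(nmc, ols)[0, 1]   # Pearson correlation across respondents
```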

It is possible, however, to carry out several kinds of tests and follow-up analyses for further investigation and exploration:

1. Computation of a split-half reliability coefficient based on items (i.e., Cronbach's alpha)

2. Split-half tests based on subjects

3. Test-retest reliability: repeat the conjoint questionnaire using the same panel of respondents, either using an identical questionnaire (straight test-retest) or using the alternative-forms approach (e.g., with attributes randomized). This test involves gathering new data.

4. While the OLS model is an optimizing model, monotonic models proceed iteratively. Therefore, results may differ, depending on the options chosen. Thus one might repeat the monotonic conjoint analysis while varying options in systematic ways, e.g., using a statistical design like the one reported in Umesh and Mishra (1990).
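For the first of these tests, Cronbach's alpha can be computed from a respondents-by-items score matrix. A minimal Python sketch (the formula is the standard one; the data layout is an assumption for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of scores.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```

Perfectly consistent items yield an alpha of 1; uncorrelated items drive it toward 0.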

Tests according to (2) and (3) have been carried out and are reported elsewhere, while (1) and (4) are in preparation. One could think of other ways of evaluating the results, but this would imply a considerable expansion of the research design (e.g., using many times as many profiles when presenting the stimuli), thus introducing new problems.


The paper provides an example of a simple conjoint analysis using a mail panel approach as the measurement instrument and, respectively, a transformational regression procedure (monotonic alternating least squares) and OLS for estimating part-worth utilities. An inspection of the part-worth utilities and the connected computation of importance seemed to agree only partially with two comparable commercial studies (a straight head-to-head test could not be carried out, however, because the commercial studies did not provide part-worth utilities and used a different research design). When trying to explain this one should bear in mind that the selected research design was not a powerful one: it was not an ordinary statistical design (e.g., an incomplete block design). The sparse number of profiles exposed to respondents, combined with the categorical measurement scale, could imply an interaction between a given attribute (e.g., pensioners) and another given attribute (e.g., refugees). One attribute (refugees) only appears with two other attributes (environment and pensioners). The modest convergent validity across conjoint models may be partially explained by: (1) an extremely sparse research design; (2) a questionable use of the ACA setup; (3) estimation-related problems for both algorithms caused by inferior input data (the explanatory variables were categorical-like); (4) an inappropriate way of coding the dependent variable; (5) conjoint results that were not robust but sensitive to the actual research design! The somewhat frightening consequence of acknowledging the final (5) argument is straightforward. The present study was facilitated by the existence of a considerable body of empirical scientific knowledge (secondary data) addressing the environment of the study.
Furthermore, the research instrument (a representative mail panel) proved to be quite suitable for measuring aggregated preferences (i.e., for political parties) of the respondents (voters) on a categorical scale (vote for party x/non-vote for party x). However, it cannot be ruled out that the poor test statistics are caused by phenomena that were beyond the control of this researcher and the applied design.

Finally, the paper shows that the uncritical use of only a linear OLS model might be questioned. The research environment of the study (political issues) was a simple one: it required no specialized knowledge to manage the conjoint task. Almost everyone has at least some knowledge about political issues, as opposed to, e.g., specialized product- or brand-related issues. In acknowledgment of the somewhat contradictory findings of the present study it seems justified to carry out more large-scale conjoint studies with "universal validity" within relatively well-known research environments: [While Monte Carlo studies are highly useful in many settings (see e.g. Green and Jain, 1978) it is not recommended to use artificial data only. Large-scale empirical studies are also needed.] automobiles/vans, refrigerators, TVs and content of programs, video tape recorders, washing machines, personal computers, newspapers, shampoos, detergents, coffee, financial services, apartments or homes, retailing choices, holiday preferences, etc. [Colvin, Heeler, and Thorpe (1980) provide an example of a large-scale study regarding automobiles while Huber et al. (1991) use refrigerators in another large-scale study. In both studies respondents were selected from a relatively well-defined target group.] Almost every respondent has some knowledge regarding the above categories of products/services. Furthermore, there is often plenty of secondary data dealing with these categories. Finally, conjoint researchers and practitioners should be aware that estimated utilities may be quite sensitive to the chosen research design. The problem seems especially worrying if one uses many attributes combined with few profiles (i.e., a sparse design). Although this seems evident to everyone engaged in conjoint analysis, practitioners as well as academicians, one often forgets it in everyday life.
[According to Green and Srinivasan (1990) "the average commercial study has used 16 stimuli evaluated on eight attributes at three levels each. Taken literally, such a design leads to no degrees of freedom for the commonly used part-worth function model."]


Agarval, Manoj K., and Paul E. Green. 1991. "Adaptive Conjoint Analysis Versus Self-Explicated Models: Some Empirical Results." International Journal of Research in Marketing 8:141-46.

Akaah, Ishmael. 1988. "Cluster Analysis Versus Q-Type Factor Analysis as a Disaggregation Method in Hybrid Conjoint Modeling: An Empirical Investigation." Journal of the Academy of Marketing Science 16 (summer): 11-18.

Carmone, Frank J., Paul E. Green, and Arun K. Jain. 1978. "Robustness of Conjoint Analysis: Some Monte Carlo Results." Journal of Marketing Research 15 (May): 300-303.

Cattin, Philippe, and Dick R. Wittink. 1977. "Further Beyond Conjoint Measurement: Towards a Comparison of Methods." In Advances in Consumer Research, edited by William D. Perreault, Jr., Chicago: American Marketing Association, 41-5.

Cochran, William G., and Gertrude M. Cox. 1957. Experimental Designs. New York: John Wiley and Sons.

Colvin, Michael, Roger Heeler, and Jim Thorpe. 1980. "Developing International Advertising Strategy." Journal of Marketing 44 (fall): 73-79.

Curry, David, and William Rogers. 1977. "Aggregating Responses in Additive Conjoint Measurement." In Advances in Consumer Research, edited by William D. Perreault, Jr., Chicago: American Marketing Association, 35-40.

Darmon, René Y., and François Coderre. 1991. "Objective Versus Perceived Attribute Level Distances: Their Effects on the Predictive Validity of Conjoint Data." In EMAC Proceedings. Dublin: Michael Smurfit Graduate School of Business, no. 1:21-37.

Darmon, René Y., and Dominique Rouziès. 1989. "Assessing Conjoint Analysis Internal Validity: The Effect of Various Continuous Attribute Level Spacings." International Journal of Research in Marketing 6:35-44.

Dobson, Gregory, and Shlomo Kalish. 1993. "Heuristics for Pricing and Positioning a Product-line Using Conjoint and Cost Data." Management Science 39 (February): 160-75.

Green, Paul E. 1977. "A New Approach to Market Segmentation." Business Horizons 20 (February): 61-73.

Green, Paul E. 1978. Analyzing Multivariate Data. Hinsdale, Ill.: The Dryden Press.

Green, Paul E. 1984. "Hybrid Models for Conjoint Analysis: An Expository Review." Journal of Marketing Research 21 (May): 155-69.

Green, Paul E., and Wayne S. DeSarbo. 1979. "Componential Segmentation in the Analysis of Consumer Trade-Offs." Journal of Marketing 43 (fall): 83-91.

Green, Paul E., Stephen M. Goldberg, and Mila Montemayor. 1981. "A Hybrid Utility Estimation Model for Conjoint Analysis." Journal of Marketing 45 (winter): 33-41.

Green, Paul E., and Kristiaan Helsen. 1989. "Cross-Validation Assessment of Alternatives to Individual-Level Conjoint Analysis: A Case Study." Journal of Marketing Research 26 (August): 346-50.

Green, Paul E., and Abba M. Krieger. 1990. "A Hybrid Conjoint Model for Price-Demand Estimation." European Journal of Operational Research 44:28-38.

Green, Paul E., Abba M. Krieger, and Manoj K. Agarwal. 1991. "Adaptive Conjoint Analysis: Some Caveats and Suggestions." Journal of Marketing Research 28 (May): 215-22.

Green, Paul E., Abba M. Krieger, and Pradeep Bansal. 1988. "Completely Unacceptable Levels in Conjoint Analysis: A Cautionary Note." Journal of Marketing Research 25 (August): 293-300.

Green, Paul E., Abba M. Krieger, and Catherine M. Schaffer. 1993. "An Empirical Test of Optimal Respondent Weighting in Conjoint Analysis." Journal of the Academy of Marketing Science 21 (fall): 345-51.

Green, Paul E., Abba M. Krieger, and Robert N. Zelnio. 1989. "A Componential Segmentation Model With Optimal Design Features." Decision Sciences 20 (spring): 221-38.

Green, Paul E., and V. Srinivasan. 1990. "Conjoint Analysis in Marketing: New Developments with Implications for Research and Practice." Journal of Marketing 54 (October): 3-19.

Green, Paul E., and Donald S. Tull. 1978. Research for Marketing Decisions, Englewood Cliffs, NJ: Prentice-Hall.

Hagerty, Michael R. 1985. "Improving the Predictive Power of Conjoint Analysis: The Use of Factor Analysis and Cluster Analysis." Journal of Marketing Research 22 (May): 168-84.

Huber, Joel C., Dick R. Wittink, John A. Fiedler, and Richard L. Miller. 1991. "An Empirical Comparison of ACA and Full Profile Judgements." In Sawtooth Software Conference Proceedings, Ketchum, ID: Sawtooth Software, 189-202.

Johnson, Richard M. 1991. "Comment on Adaptive Conjoint Analysis: Some Caveats and Suggestions." Journal of Marketing Research 28 (May): 223-25.

Kamakura, Wagner. 1988. "A Least Squares Procedure for Benefit Segmentation With Conjoint Experiments." Journal of Marketing Research 25 (May): 157-67.

Kohli, Rajeev. 1988. "Assessing Attribute Significance in Conjoint Analysis: Nonparametric Tests and Empirical Validation." Journal of Marketing Research 25 (May): 123-33.

Kohli, Rajeev, and Ramesh Krishnamurti. 1987. "A Heuristic Approach to Product Design." Management Science 33 (December): 1523-33.

Kohli, Rajeev. 1989. "Optimal Product Design Using Conjoint Analysis: Computational Complexity and Algorithms." European Journal of Operational Research 40:186-95.

Kohli, Rajeev, and Vijay Mahajan. 1991. "A Reservation Price Model for Optimal Pricing of Multiattributive Products in Conjoint Analysis." Journal of Marketing Research 28 (August): 347-54.

Kohli, Rajeev, and R. Sukumar. 1990. "Heuristics for Product-Line Design Using Conjoint Analysis." Management Science 36 (December): 1464-78.

Levy, Michael, John Webster, and Roger A. Kerin. 1983. "Formulating Push Market Strategies: A Method and Application." Journal of Marketing 47 (fall): 25-34.

McBride, Richard D., and Fred S. Zufryden. 1988. "An Integer Programming Approach to the Optimal Product Line Selection Problem." Marketing Science 7 (spring): 126-40.

Mehta, Raj, William L. Moore, and Teresa M. Pavia. 1992. "An Examination of the Use of Unacceptable Levels in Conjoint Analysis." Journal of Consumer Research 19 (December): 470-76.

Moore, William L., and Richard J. Semenik. 1988. "Measuring Preferences with Hybrid Conjoint Analysis: The Impact of Different Number of Attributes in the Master Design." Journal of Business Research 16:261-74.

Ogawa, Kohsuke. 1987. "An Approach to Simultaneous Estimation and Segmentation in Conjoint Analysis." Marketing Science 6 (winter): 66-81.

SAS Institute. SAS/STAT User's Guide, Version 6, vol. 2. Cary, NC: SAS Institute.

Schwartz, David. 1978. "Locked Box Combines Survey Methods, Helps End Woes of Probing Industrial Field." Marketing News 27 January, 18.

Srinivasan, V., Peter G. Flachsbart, Jarir S. Dajani, and Rolfe G. Hartley. 1981. "Forecasting the Effectiveness of Work-Trip Gasoline Conservation Policies Through Conjoint Analysis." Journal of Marketing 45 (summer): 157-72.

Srinivasan, V., Arun K. Jain, and Naresh K. Malhotra. 1983. "Improving Predictive Power of Conjoint Analysis by Constrained Parameter Estimation." Journal of Marketing Research 20 (November): 433-38.

Steckel, Joel H., Wayne S. DeSarbo, and Vijay Mahajan. 1991. "On the Creation of Acceptable Conjoint Analysis Experimental Designs." Decision Sciences 22:435-42.

Tantiwong, Duangtip, and Peter C. Wilton. 1985. "Understanding Food Store Preferences Among the Elderly Using Hybrid Conjoint Measurement Models." Journal of Retailing 61 (winter): 35-64.

Umesh, U. N., and Sanjay Mishra. 1990. "A Monte Carlo Investigation of Conjoint Analysis Index-of-Fit: Goodness-of-Fit, Significance and Power." Psychometrika 55 (March): 33-44.

Vriens, Marco, and Dick Wittink. Forthcoming. "Data Collection." In Conjoint Analysis in Marketing, edited by M. Vriens.

Wind, Jerry, Paul E. Green, Douglas Shifflet, and Marsha Scarbrough. 1989. "Courtyard by Marriott: Designing a Hotel Facility with Consumer-Based Marketing Models." Interfaces 19 (1): 27-45.

Wittink, Dick R. 1990. "Attribute Level Effects in Conjoint Results: The Problem and Possible Solutions." In Advanced Research Techniques Forum Proceedings. Chicago: American Marketing Association.

Wittink, Dick R., and Philippe Cattin. 1989. "Commercial Use of Conjoint Analysis: An Update." Journal of Marketing 53 (July): 91-96.

Wittink, Dick R., Joel C. Huber, John A. Fiedler, and Richard L. Miller. 1992. "Attribute Level Effects in Conjoint Revisited: ACA Versus Full Profile." In Second Annual Advanced Research Techniques Forum Proceedings. Chicago: American Marketing Association.

Wittink, Dick R., Marco Vriens, and Wim Burhenne. 1994. "Commercial Use of Conjoint Analysis in Europe: Results and Critical Reflections." International Journal of Research in Marketing 11:41-52.

Young, Forrest W. 1981. "Quantitative Analysis of Qualitative Data." Psychometrika 46:357-88.
