Compromising Between Information Completeness and Task Simplicity: a Comparison of Self-Explicated, Hierarchical Information Integration, and Full-Profile Conjoint Methods

ABSTRACT - This paper compares hierarchical information integration (HII), full-profile (FP) conjoint and self-explicated (SE) approaches to preference measurement in terms of equality of preference structures, predictive abilities, and task load. HII is a method to accommodate larger numbers of attributes in conjoint tasks by structuring the task in a hierarchical fashion. The three approaches are compared in a residential preference study that involves thirteen attributes. The results confirm that conjoint approaches result in better choice predictions than self-explicated measures. No significant differences in performance are found between FP and HII with this number of attributes though there are indications that HII can outperform FP if a suitable hierarchical structure is selected. Finally, it is found that SE is the most quickly completed task but only if it is the first task that a respondent encounters.


Harmen Oppewal and Martijn Klabbers (2003) ,"Compromising Between Information Completeness and Task Simplicity: a Comparison of Self-Explicated, Hierarchical Information Integration, and Full-Profile Conjoint Methods", in NA - Advances in Consumer Research Volume 30, eds. Punam Anand Keller and Dennis W. Rook, Valdosta, GA : Association for Consumer Research, Pages: 298-304.



Harmen Oppewal, Monash University

Martijn Klabbers, NIPO




Conjoint measurement approaches have been popular techniques for preference modeling since the late '70s, and a vast literature is available on the theory of conjoint measurement and the issues involved in applying conjoint approaches (Green, Krieger and Wind 2001). One recurring theme in the literature is the added value of conjoint relative to the typically easier-to-implement self-explicated approaches. The early literature already suggested that self-explicated (or compositional) methods may produce unreliable measurement scales and, more importantly, biased results: they typically overestimate the importance of less important attributes and underestimate the importance of the most important attributes (Slovic and Lichtenstein 1971), and they are more sensitive to social biases (Green and Srinivasan 1990). Research by Srinivasan and colleagues suggests that the self-explicated approach can yield reliable results in specific applications (Srinivasan 1988; Srinivasan and Park 1997); other recent research, however, found that self-explicated tasks perform less well than full-profile conjoint tasks (Pullman, Dodson and Moore 1999).

There are at least two major reasons why conjoint approaches can be expected to perform better than self-explicated approaches: 1) conjoint approaches force respondents to make trade-offs between attributes, and 2) conjoint approaches allow more control over the inferences that respondents make about the remaining attributes of an alternative (cf. Johnson 1987). Given these arguments, there is surprisingly little insight into the conditions under which full-profile (FP) conjoint results are superior to self-explicated (SE) approaches, especially when it comes to assessing the total benefits and costs of research designs (cf. Leigh, McKay and Summers 1984; Huber et al. 1993; Srinivasan and Park 1997). One condition of particular interest is when many attributes are relevant. A common belief among researchers is that profiles with more than, say, ten attributes are too difficult for respondents to handle (e.g., Green and Srinivasan 1990). Respondents cannot take in and trade off so many attributes and/or they become tired, and will thus ignore or attend to attributes in random and uncontrolled ways, or they will tend to use heuristics that lead to biased preference measures.

To solve this problem researchers generally rely on self-explicated methods or hybrid combinations of self-explicated and conjoint approaches. These methods often involve the use of partial profiles and some bridging mechanism to combine the partial-profile results (e.g., Chrzan and Elrod 1995). When partial profiles are used, the breakdown of the attribute set into smaller sets is often done on a fairly ad hoc basis that does not control for possible inferences that respondents make about the non-displayed, or missing, attributes. An approach to the creation and administration of partial profiles that in principle avoids this problem is the method of Hierarchical Information Integration (HII). Originally proposed by Louviere (1984), this approach uses theory and other insights from the field of interest to divide the set of attributes into smaller groups, for each of which a partial-profile conjoint task is designed. In the original HII approach respondents rate the partial profiles on summary constructs (or dimensions) and in addition receive a bridging task that consists of designed profiles describing alternatives in terms of how they score on these dimensions. In the extended HII approach (Oppewal, Louviere and Timmermans 1994), the bridging factors are administered as attributes in the partial attribute profiles, which makes a separate bridging task superfluous. In each case, the total data collection allows estimating preference functions 'as if' one full-profile design had been administered, without the information load of such a large FP task.

The idea underlying HII is that a proper hierarchical structure of the task helps the respondent cope with complex alternatives, and that the resulting increase in reliability and validity outweighs the increased number of responses that need to be collected. In fact, the HII approach rests on two assumptions: 1) that by organizing the attributes into meaningful subsets the respondent can produce more consistent and, hence, more reliable profile judgments, and 2) that respondents can combine dimension judgments into overall profile evaluations, such that these evaluations reflect the respondents' true preference structures.

Though HII has been around since the eighties, no tests of the ideas underlying HII seem to have been performed. Pullman et al. (1999) recently compared various methods for handling larger numbers of attributes in conjoint tasks, but they did not include HII as a possible method. Molin et al. (2000) compared the predictive ability of FP and two versions of HII and found that the HII version using integrated experiments outperformed the other two methods. This study, however, focused specifically on the measurement of group preferences. Van de Vijvere et al. (1998) compared residential preferences obtained from HII and FP choice tasks but found no significant differences in preference functions. They, however, did not compare different HII structures. None of these studies investigated task load or compared FP with HII and SE within a single study, and none compared the effectiveness of different HII structures for the same set of attributes.

The purpose of this study is therefore to compare SE, FP and HII in terms of predictive ability and task load. We assume that the three formats (SE, FP, and HII) all measure the same theoretical construct (preference) and focus our analysis on the following questions:

1) How well do the tasks measure preferences that people express when making choices;

2) How well does each of the formats perform in terms of predicting choices;

3) How do these methods compare with respect to task difficulty as perceived by the respondent and as measured in terms of the time it takes a respondent to complete the task?

To answer these questions we implement the three task formats as preference tasks. Their order is systematically varied, and they are compared on their ability to predict holdout choices observed in the experiment. Because the holdout choices resulted from a separate design, we are able to estimate a choice model and compare its results with those of the three methods. This study involves thirteen attributes, a number that most researchers will consider fairly large but not impossible to present in one full profile. All implementations involve the same set of attributes and attribute levels. The application concerns residential preferences.

In the following section we detail our hypotheses. Then we present the empirical study. The paper ends with a section in which we discuss the results and their further implications.


SE versus FP

Over the years, no conclusive evidence seems to have been provided for the claim that conjoint approaches in general predict consumer behavior better than self-explicated approaches. In some studies conjoint approaches were found to predict consumer behavior better than self-explicated methods; in others the reverse was found. Leigh, McKay and Summers (1984) found no evidence that conjoint has better reliability or validity than SE; however, this may be an artifact, as the conjoint task always preceded the self-explicated task. Huber et al. (1993) demonstrated that self-explicated methods can perform almost as well as conjoint approaches and found that the best results are obtained when both approaches are combined. They, however, did not control for task order effects. Results by Srinivasan and Park (1997) suggest that SE can perform better than conjoint, but in that study different methods of administration were used. Pullman et al. (1999), in contrast, find evidence that FP performs better than SE.

The literature is therefore not conclusive. However, as outlined, FP requires trade-offs and specifies the remaining attributes, where SE does not. We therefore posit that FP preference models perform better than SE in terms of predicting a set of holdout choices. We also expect that preference structures estimated from FP tasks better represent preference structures as derived from a choice model than those based on SE tasks (Hypothesis 1).

HII versus FP

The essential assumption underlying all HII applications is that the sub-experiments yield part-worth estimates 'as if' all attributes were presented in one full profile and 'as if' the respondent were able to respond without the limitations associated with information overload. In the HII approach all sub-experiments are independent and self-contained conjoint experiments; the various sub-experiments are only combined after the data have been collected. When the HII approach that uses integrated experiments is employed (Oppewal et al. 1994), there is no difference in principle between HII sub-experiments and full-profile conjoint experiments, because each of the constructs is 'just another attribute'. This attribute is, however, selected and defined to capture as much as possible a set of remaining determinant attributes in the choice problem. Each profile in the sub-experiments essentially is a complete (full-profile) description of the alternative. So, the conventional and HII-based conjoint tasks are all assumed to measure the same theoretical construct of preference. We therefore expect that in a task with a moderately large number of attributes, HII and FP conjoint tasks will result in similar preference structures and levels of predictive ability (Hypothesis 2).

Differences between HII structures

The hierarchical structure in HII tasks should preferably be natural and easy to understand for respondents, so that it helps them cope with complex alternatives. Defining the hierarchical structure involves allocating attributes to subsets and labeling the subsets in terms of a construct or generalized attribute. The allocation of attributes is based on literature research, pilot work, and more or less extensive pretests. One can also perform post hoc tests of the hierarchical assumptions underlying the experiment (Oppewal et al. 1994).

Thus, the purpose of HII is to guide the respondent through the decision process by structuring the task while avoiding errors that might otherwise arise due to missing attributes or task overload. Given this, we expect that the performance of HII models depends on how the hierarchical structure is defined. Hierarchical structures that are more intuitive and natural for respondents, and that are hence perceived as easier, will better support the decision-making process than structures that are less natural. We therefore posit that the structure of the HII task affects the predictive ability of HII-based models, such that a hierarchical task structure that is perceived as easier to do results in a preference structure that corresponds more closely to the 'true' preference structure as derived from a choice task, and also better predicts choice responses (Hypothesis 3).

To test these hypotheses we implement two different structures on the same set of attributes and collect respondents' task impressions; in addition, we collect data for the conventional FP format. This is a novel approach: to our knowledge, previous applications have always involved only one hierarchical structure.

Task load

The above three hypotheses focus on the model performance in terms of predictive abilities and equality of preference structures. In deciding which method to employ one should however also look at the cost of data collection associated with each of the methods. To get further insight into this matter we investigate the task load of each method. For each method we collect data on how much time it takes respondents to complete the task and we measure how they perceive the task load in terms of task difficulty and interestingness.

Expectations with respect to completion times are that SE tasks can be completed most quickly because they require very little reading per stimulus. FP and HII tasks both require much reading but FP requires fewer responses, so we also expect that respondents will complete FP tasks more quickly than HII tasks. Our hypothesis is therefore that SE tasks are completed more quickly than FP tasks, which in turn are completed more quickly than analogous HII tasks (Hypothesis 4).

With respect to perceived task difficulty we expect that the SE tasks are perceived as easiest to do, again because they require little reading. We expect that HII is perceived as easier than FP because HII is assumed to support the decision making process and to help respondents to cope with the large number of attributes. So, we posit that SE tasks are perceived as easier to do than HII tasks, which in turn are found easier to do than FP tasks (Hypothesis 5). Finally, we will also explore task interestingness, however we do not have clear expectations about differences between methods in terms of interestingness.



Participants were seventy architecture students in a course on computer-assisted design methods. Students had to complete the experimental tasks before they could start on their architectural assignment, in which they had to design a semi-detached house for a 'standard' family with one child. We thus measured students' perceptions of the preferences of the family for whom they wanted to design their dwelling. Students had no knowledge of preference measurement methods and were unaware of our research goals; they were debriefed in a later session. The students could complete the experimental tasks in their own time and at their own pace at any available personal computer linked to the campus network.


Based on the literature we generated thirteen housing attributes that appeared the most relevant for architects and future residents. We defined three levels, as shown in the right-hand columns of Table 1, for all attributes except 'position of hallway', which had two levels. Rent was not included, as students had to focus on the design quality of the dwelling.

Full Profile and HII Conjoint designs

We designed three conjoint experiments that all involved this same set of thirteen attributes. One experiment was a regular conjoint preference experiment in which respondents rated full-profile descriptions of houses one at a time. The other two were HII implementations in which the attributes were organized according to the hierarchical structures shown in the left-hand column of Tables 1a and 1b. In the FP tasks we used the same attribute order as in HII-1. Figure 1 shows an example HII-1 profile. The figure shows a composite of four HII partial profiles that together constitute one full profile. Note that this layout is not typical for HII; in most applications the subtasks are administered separately, and sometimes different respondents even receive different subtasks. In our application, a rating of the corresponding construct is collected for each partial profile, as is standard in the conventional approach to HII (Louviere 1984); we used a 9-point rating scale for each construct. The HII bridging task is similar in layout, except that no attribute levels are shown. Instead, construct scores (levels '2', '5', or '8') are presented that inform respondents how they would rate the partial profiles if they had been displayed. In each conjoint task a rating out of one hundred is obtained for the total profile. Under the assumptions of HII these latter ratings are, directly or indirectly, based on the same attribute information as the ratings obtained in the FP tasks. Indeed, the screen layout in the FP tasks was identical to that of the HII screens; only the construct information was absent in the FP task.

For all three experiments, 81 attribute profiles were generated from a main-effects fraction of the 3^15 full factorial design. Each of the thirteen attributes constitutes one factor; the two additional factors were used as blocking factors to split the set of 81 profiles into nine blocks of nine profiles. Each participant was randomly assigned to one of these blocks. HII bridging tasks were designed as nine-treatment fractions of the 3^4 full factorial. In these designs we used the levels '2', '5', and '8' for all constructs.
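The paper does not report which design generator was used, but a main-effects plan of this kind can be sketched with a standard GF(3) construction: write each of the 81 run indices in base 3 to obtain four basic columns, then form further columns as linear combinations of those columns modulo 3. The code below is only an illustrative sketch of this construction, not the authors' actual design.

```python
import itertools
import numpy as np

# Four "basic" three-level columns: the base-3 digits of run indices 0..80.
runs = np.array([[(r // 3**i) % 3 for i in range(4)] for r in range(81)])

# One coefficient vector per 1-dimensional subspace of GF(3)^4
# (first nonzero entry fixed to 1) -> 40 mutually orthogonal columns.
coeffs = [v for v in itertools.product(range(3), repeat=4)
          if any(v) and v[next(i for i, x in enumerate(v) if x)] == 1]

design = (runs @ np.array(coeffs).T) % 3   # 81 runs x 40 columns, levels 0/1/2
profiles = design[:, :13]                  # thirteen housing attributes
# Two further columns as blocking factors -> 9 blocks of 9 profiles each.
blocks = design[:, 13] * 3 + design[:, 14]
```

Because any two non-proportional coefficient vectors are linearly independent over GF(3), every pair of columns is balanced, which is what makes the plan suitable for main-effects estimation and for blocking.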

Self-explication of preferences

The fourth task type was an SE task in which respondents received the attributes one at a time on their screen. They were allowed, but not obliged, to browse through the screens before giving their answers. For each attribute, all levels from the design were shown, and respondents were asked to evaluate, for each level, a dwelling with this level on a scale with extremes '1' (very poor) and '9' (very good). They then rated the importance of the attribute as a whole on a rating scale with extremes '1' (very unimportant) and '7' (very important). The order of appearance of the attributes was randomized for each respondent.

Choice tasks

The choice tasks constituted pairs of alternatives that were randomly selected from design fractions that were not part of the respondent's experimental HII and FP treatment blocks. Respondents had to indicate which alternative they expected their 'client' would choose. The design ensured that, across respondents, choices were observed for all 81 profiles, which allowed the estimation of a discrete choice model. The order of the attributes in these choice tasks was the same as in FP and HII-1.

Between-subjects Master Design

Because it was not feasible to have respondents participate in all task types, we used another experimental design to allocate task types and task orders to subjects. By design, each respondent received three of the four task types (HII-1, HII-2, FP, SE). The order of these tasks was systematically varied. Respondents always started with a choice task and received an additional choice task after each subsequent task. Choice tasks consisted of two choice sets; hence each respondent completed eight validation choice sets.

Task evaluations

After completing a task from the design respondents were asked to rate on 9-point category rating scales the easiness (1=very difficult, 9=very easy) and interestingness (1=very boring, 9=very interesting) of the completed task. The computer program recorded the start and completion time of each task without respondents being aware.


We used OLS regression to estimate aggregate preference functions from the FP task and from each of the bridging and sub-experiments in the two HII implementations. The model fit (adjusted R²) of the FP model was .38. For HII-1 the fit of the bridging model was .67, and the fits of the sub-experiment models ranged from .21 to .54. For HII-2 the fit of the bridging experiment was lower (.55), but the range of sub-experiment fits was fairly similar to that of the HII-1 setup (from .24 to .52). We next substituted the predictors in the two bridging functions with the regression functions from the corresponding sub-experiments to obtain one preference function for each HII implementation (Louviere 1984).
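The substitution step can be sketched as follows. All coefficients, construct names, and attribute levels below are toy values chosen for illustration, not the study's estimates; the point is only the mechanics of plugging each sub-experiment regression into the bridging function to get one overall preference function.

```python
# Hypothetical bridging model: overall rating = intercept + weighted
# construct scores (toy coefficients).
bridging_intercept = 10.0
bridging_weights = {"layout": 4.0, "comfort": 2.5}

# Hypothetical sub-experiment models: construct score = sub-model
# intercept + part-worths of the attribute levels in that subset.
sub_models = {
    "layout":  (3.0, {"rooms=4": 1.2, "hall=front": -0.4}),
    "comfort": (2.0, {"heating=central": 0.8}),
}

def overall_utility(profile_levels):
    """Predicted overall rating 'as if' one full-profile model had been estimated."""
    u = bridging_intercept
    for construct, weight in bridging_weights.items():
        intercept, partworths = sub_models[construct]
        construct_score = intercept + sum(partworths.get(lvl, 0.0)
                                          for lvl in profile_levels)
        u += weight * construct_score
    return u
```

Each construct score is itself a linear function of attribute levels, so the composed function is again linear in the attributes, which is what allows HII to mimic one large full-profile model.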

To analyze the self-explicated data, we multiplied the level evaluation scores by the attribute importance scores to obtain SE-derived part-worth estimates. Though this procedure is not well grounded in theory, it is a common way of handling compositional measures. The choice data were used to estimate an MNL choice model. The fit of this model was satisfactory (McFadden's Rho-square .23).
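The SE scoring rule amounts to a simple elementwise product. The sketch below uses toy ratings (the study's scales: level desirability 1-9, attribute importance 1-7), not the actual respondent data.

```python
import numpy as np

# Toy SE responses for 3 attributes x 3 levels (assumed values).
level_ratings = np.array([[2., 5., 9.],    # attribute 1
                          [4., 6., 8.],    # attribute 2
                          [1., 5., 7.]])   # attribute 3
importances = np.array([7., 3., 5.])       # one stated importance per attribute

# SE-derived part-worth of each level: desirability x importance.
se_partworths = level_ratings * importances[:, None]
```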

For each model we next derived the relative importances of the attributes in the conventional way, that is, by calculating each attribute's maximum part-worth difference and expressing these differences as a percentage of the sum of differences across all attributes. The importances derived for each method are displayed in Figure 2. We then used the estimated preference functions to predict the choices made in the choice tasks, applying the highest-utility-is-choice rule, and calculated the hit rate for each model as the percentage of choices that was correctly predicted.
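Both summary measures can be sketched with toy numbers (all part-worths, profiles, and observed choices below are assumptions for illustration only):

```python
import numpy as np

# Toy part-worths: rows = attributes, columns = levels.
partworths = np.array([[0.0, 0.3, 0.9],
                       [0.0, 0.1, 0.2],
                       [0.0, 0.4, 0.5]])

# Relative importance: each attribute's part-worth range as a percentage
# of the sum of ranges across all attributes.
ranges = partworths.max(axis=1) - partworths.min(axis=1)
importance = 100 * ranges / ranges.sum()

def utility(profile):
    # A profile is coded as one level index per attribute.
    return sum(partworths[a, lvl] for a, lvl in enumerate(profile))

# Hit rate under the highest-utility-is-choice rule for toy paired
# choice sets: ((alternative A, alternative B), observed choice index).
pairs = [(((2, 1, 0), (0, 0, 2)), 0),
         (((1, 2, 1), (2, 0, 1)), 1)]
hits = [int(np.argmax([utility(a), utility(b)]) == obs)
        for (a, b), obs in pairs]
hit_rate = sum(hits) / len(hits)
```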





Similarity of utilities

To investigate the similarity of the utilities derived from the SE, FP, HII-1, HII-2 and choice setups, we first calculated the product-moment correlations between the (mean) part-worths derived from the different tasks. The FP part-worths are more similar to the choice-derived part-worths (r=.90) than the SE part-worths are (r=.44). They are, however, less similar to the choice part-worths than those derived from HII-1 (r=.95), the hierarchical task that had the same ordering of attributes as the FP task. The HII-2 based model performs less well than HII-1 or FP, displaying a product-moment correlation of .85 with the part-worths derived from the choice model.
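The similarity measure is the ordinary Pearson (product-moment) correlation between two part-worth vectors; a minimal sketch with toy values (not the study's estimates) is:

```python
import numpy as np

# Toy mean part-worth vectors for two tasks (assumed values).
choice_pw = np.array([0.9, 0.1, 0.5, -0.2, 0.0, 0.4])
fp_pw     = np.array([0.8, 0.2, 0.6, -0.1, 0.1, 0.3])

# Product-moment correlation between the two sets of part-worths.
r = np.corrcoef(choice_pw, fp_pw)[0, 1]
```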

To further investigate the equality of the FP, HII-1 and HII-2 based utilities, we estimated one regression model across these tasks from the FP ratings and the partial-profile HII screen ratings (as in the bottom of Figure 1, hence excluding the bridging tasks) and added attribute-by-task-type interactions. No significant improvement was observed when interactions for differences between HII and FP were added (F(25,1328)=1.354, n.s.), which supports the idea that there is no significant difference between the FP and HII-based preference structures. Similarly, we find no differences between the HII-1 and HII-2 task structures (F(25,1328)=1.226, n.s.), which suggests there is no difference between the HII-1 and HII-2 based parameters.

Predictive abilities

The comparison of the predicted and observed choices resulted in the following hit rates: SE .61; FP .77; HII-1 .81; HII-2 .82. So, the FP model predicts the choices substantially better than the SE-based model, and the HII-based models predict the choices better than the FP model. The FP versus SE difference is statistically significant (t(99)=1.81, p<.05, one-sided), but the HII versus FP difference is not (t(109)=0.35, n.s.).

Task load

To see whether and how the methods differ in terms of respondent time, we derived the mean and median seconds per response in each task across the respondents who received this task as their first task and, separately, across all task orders (see Table 2). We next derived the completion times for one complete replication of each design type.

In an ANOVA we find the mean response times to the first task to be significantly different (F(5,260)=19.807, p<.001). The quickest responses were obtained in the SE tasks; the median time to complete all attributes and levels in the SE condition was 10 minutes and 16 seconds, although across all task orders the median time for SE was the highest of all (18:50). In the FP tasks, the median time to complete all nine profiles was 5:07 when FP was the first task. Note, though, that these profiles involve only one-third of the total FP design. If the remaining profiles had also been administered, and if we assume that each of these profiles on average would require the same time, then the total time required for one replication of the FP design would be 15:21, which is much more than the SE time. It should, however, also be noted that additional profiles may be completed more quickly, and that respondents who received the SE task as their second or third task required much more time to complete it. Caution is therefore required regarding this result.







It took respondents more time to complete the HII-1 partial profiles (13:22) than to complete the HII-2 partial profiles (9:14). The completion of the HII-1 bridging task took the median respondent 4 minutes and 39 seconds; unfortunately, due to a recording error these data are not available for HII-2. Assuming equivalent completion times for the HII-1 and HII-2 bridging tasks, the data thus suggest that the HII-2 format resulted in quicker completion of one replication than the FP and HII-1 tasks, and that, as expected, FP tasks are completed more quickly than HII tasks with an identical attribute order. Note, however, that this may not hold when the task follows other tasks.

The final columns in Table 2 present the averages of the ratings that respondents provided for how easy and interesting they found the task. The means presented are based on the ratings given immediately after completion of the first task. An ANOVA reveals that the easiness scores differ significantly between the conditions (F(5,296)=3.456, p<.01). Respondents find SE tasks easier than the FP tasks, but they do not find HII-1 easier than FP. HII-2 is perceived as easier than FP or HII-1, which suggests that the attribute order and hierarchical structure in HII-2 are more natural to the respondents. There were no significant differences between the task means for interestingness (F(2,296)=.805, n.s.).


We compared SE preference measurement with HII and FP conjoint methods in terms of similarity of derived preferences, ability to predict choices, and task loads. The findings confirmed our first hypothesis that FP predicts choices better than SE and that the preferences derived from FP are more similar to choice-based preferences than SE-based preferences are. Conjoint thus outperformed the self-explicated method. Moreover, whereas we expected that respondents would complete SE tasks much more quickly than conjoint tasks, we find only mixed evidence for this. When SE tasks are administered before other tasks, they are completed more quickly, but when they are administered after one or more conjoint tasks, respondents require more time to complete an SE task than a conjoint task that measures the same preferences. A possible explanation is that the conjoint task makes respondents more aware of the trade-offs that underlie the assessment of attribute importance (cf. Huber et al. 1993).

Our second hypothesis was based on the idea that FP and HII measure the same underlying construct and hence that there will be no difference in preferences except for measurement error. This was confirmed. The estimates for FP utilities were not significantly different from the estimates for HII utilities, nor were there significant differences in predictive ability. FP tasks were completed more quickly than HII tasks with a similar attribute order, but less quickly than HII tasks with an alternative, easier-to-process attribute order.

Our third hypothesis concerned the performance of HII for two different hierarchical structures defined on exactly the same attributes. We found no difference in the preference structures derived from the two HII structures, and also no difference in predictive ability. Though the easier HII task displayed the highest hit rate, the difference with the predictive ability of the other HII version (and also FP) was not significant; hence the methods seem fairly robust to differences in presentation format and hierarchical structure. We therefore conclude that differences in hierarchical structure can result in task-load differences, but that these differences do not necessarily result in measurement or model performance differences.

These conclusions are derived from a study that used 'only' thirteen attributes, a number generally considered large but not impossible for FP, yet relatively small for HII; other HII studies have used up to forty attributes. Hence, HII provides estimates that are at least as good as FP in a case where FP can still be expected to perform reasonably well. When HII is used to study larger numbers of attributes, the same principles of 'decision support' apply. We therefore suggest that researchers consider using HII instead of FP for cases with larger numbers of attributes than studied here. The results also reconfirm the importance of proper pretests and of efforts to develop the tasks such that they maximally correspond to the respondents' perception of the decision problem.

To our knowledge, this is the first study to compare FP conjoint and different HII structures. Despite the encouraging results, this is clearly only a first test, and more work needs to be done to assess in which situations SE, FP or HII is the most appropriate method to use. Future studies could focus on testing the effects of differences in the number of attributes (cf. Pullman et al. 1999) and further effects of differences in the way the attributes and their hierarchical structure are presented. It would also be useful to extend this work to comparing ratings and choice data. Though we find clear evidence that conjoint tasks overall perform better than SE in terms of data quality, the choice of method will eventually depend on the total research design and budget and, of course, on theory. Regarding the latter, it should be noted that the three methods are quite different in their theoretical underpinnings and that choice models based on random utility theory often seem preferable (Louviere et al. 2000).


Chrzan, Keith and Terry Elrod (1995), "Choice-based Approach for Large Numbers of Attributes," Marketing News 29, 1 (January), 20.

Green, Paul E. and V. Srinivasan (1990), "Conjoint Analysis in Marketing: New Developments With Implications for Research and Practice," Journal of Marketing 54, 3-19.

Green, Paul E., Abba M. Krieger, and Yoram Wind (2001), "Thirty Years of Conjoint Analysis: Reflections and Prospects," Interfaces 31, 3, Part 2 (May-June), S56-S73.

Huber, Joel, Dick R. Wittink, John A. Fiedler, and Richard L. Miller (1993), "The Effectiveness of Alternative Preference Elicitation Procedures in Predicting Choice," Journal of Marketing Research 30 (February), 105-114.

Johnson, Richard D. (1987), "Making Judgments When Information Is Missing: Inferences, Biases, and Framing Effects," Acta Psychologica 66, 69-82.

Leigh, Thomas W., David B. McKay, and John O. Summers (1984), "Reliability and Validity of Conjoint Analysis and Self-Explicated Weights: A Comparison," Journal of Marketing Research 21 (November), 456-462.

Louviere, Jordan J. (1984), "Hierarchical Information Integration: A New Method for the Design and Analysis of Complex Multiattribute Judgement Problems," Advances in Consumer Research 11, 148-155.

Louviere, Jordan J. and Gary J. Gaeth (1987), "Decomposing the Determinants of Retail Facility Choice Using the Method of Hierarchical Information Integration: A Supermarket Illustration," Journal of Retailing 63 (1), 25-48.

Louviere, Jordan J., David A. Hensher and Joffre D. Swait (2000), Stated Choice Methods: Analysis and Application. Cambridge: Cambridge University Press.

Molin, Eric J.E., Harmen Oppewal, and Harry J.P. Timmermans (2000), "A Comparison of Full Profile and Hierarchical Information Integration Conjoint Methods to Modeling Group Preferences", Marketing Letters 11 (May), 169-179.

Oppewal, Harmen, Jordan J. Louviere, and Harry J.P. Timmermans (1994), "Modeling Hierarchical Conjoint Processes with Integrated Choice Experiments," Journal of Marketing Research 31, 92-105.

Pullman, Madeleine, Kimberly J. Dodson and William L. Moore (1999), "A Comparison of Conjoint Methods When There Are Many Attributes," Marketing Letters 10 (2), 123-138.

Scott, Jerome E. and Peter Wright (1976), "Modeling and Organizational Buyer’s Product Evaluation Strategy: Validity and Procedural Considerations," Journal of Marketing Research 13 (August), 211-224.

Slovic, P. and S. Lichtenstein (1971), "Comparison of Bayesian and Regression Approaches to the Study of Information Processing in Judgment," Organizational Behavior and Human Performance 6, 659-744.

Srinivasan, V. and Chan Su Park (1997), "Surprising Robustness of the Self-Explicated Approach to Customer Preference Structure Measurement," Journal of Marketing Research 34 (May), 286-291.

Srinivasan, V. (1988), "A Conjunctive-Compensatory Approach to the Self-Explication of Multiattributed Preferences," Decision Sciences 19 (Spring), 295-305.

Vijvere, Yves van de, Harmen Oppewal, and Harry Timmermans (1998), "Testing the Validity of Hierarchical Information Integration," Geographical Analysis 30 (July), 254-272.


