Integrating Conjoint and Functional Measurement With Discrete Choice Theory: An Experimental Design Approach
ABSTRACT - This paper concentrates upon explaining how Conjoint/Functional Measurement experiments can be integrated with discrete choice experiments by a logical combination of interlinked experimental designs. Brief illustrations of the basic ideas are provided for choices from fixed and variable sized choice sets. The Luce or multinomial logit choice model and aggregated choice data are used as the basis for discussion and illustration of various analytic approaches. Extensions to individual level data and various limitations of the proposed approach are outlined. Comparisons with existing approaches are provided.
Citation:
Jordan J. Louviere (1983), "Integrating Conjoint and Functional Measurement With Discrete Choice Theory: An Experimental Design Approach," in NA - Advances in Consumer Research Volume 10, eds. Richard P. Bagozzi and Alice M. Tybout, Ann Arbor, MI: Association for Consumer Research, Pages: 151-156.
INTRODUCTION

The purpose of this paper is to provide an experimental basis for integrating Functional and Conjoint Measurement (FM/CM) with Discrete Choice Theory. This integration should be of interest to researchers in marketing and related fields for a number of reasons:

1. Currently the study of judgments, as exemplified by methods such as Conjoint and Functional Measurement (Anderson 1981; Krantz and Tversky 1971; Green and Srinivasan 1978), is conducted apart from the study of choice behavior, as exemplified by Discrete Choice Theory (Amemiya 1981; Manski and McFadden 1981; Hensher and Johnson 1981).

2. Often the object of interest in a consumer judgment research project is the choice behavior of the consumer(s), yet there is no adequate theoretical basis for relating judgments to choices, despite the growing interest in ad hoc simulation methods exemplified by the POSSE system (Green, Carroll and Goldberg 1981; see also Green, DeSarbo and Kedia 1980; and Curry, Louviere and Augustine 1981, 1982).

3. Recent developments in Discrete Choice Theory, while known to researchers in marketing and consumer behavior (see, e.g., Gensch and Recker 1979; Punj and Staelin 1978; McFadden 1980), have yet to be fully integrated with work in decompositional judgment methods.
The statistical methods developed from Discrete Choice Theory (see, e.g., Amemiya 1981; Manski and McFadden 1981; and Hensher and Johnson 1981) and the growing literature in limited dependent variable analysis (e.g., Bishop, Fienberg and Holland 1975) provide powerful methods for analyzing choice data which permit the development of FM/CM type models. To date, little work is available on experimental analogs to discrete choice analysis in econometrics, which would provide a basis for applying the theory under controlled, laboratory conditions and permit links with judgment methods.

4. Development of an experimental basis for studying choice behavior would permit empirical advancement to proceed more rapidly and vigorously than has heretofore been possible, particularly in the area of choice among multiattribute alternatives.

5. Integration of the judgment and choice methods would have considerable applications potential and appeal: managers and policy makers are frequently interested primarily in the effects of changing attribute configurations on market share--either in the aggregate or for particular segments of interest; hence, development of rigorous methods for the design and analysis of choice behavior experiments should be of considerable applications interest as well.

This paper will concentrate upon explaining how discrete choice and/or resource allocation experiments can be designed and analyzed in ways consistent with currently popular methods for the design and analysis of Conjoint and Functional Measurement experiments. Because the paper is designed to be an introduction to the basic ideas, it will concentrate upon one class of choice models--the Luce (1959, 1977) or multinomial logit models (MNL models; see Manski and McFadden 1981; Hensher and Johnson 1981)--and aggregate choice data. The experimental methods are considerably more general and flexible, however, than these restrictive cases might suggest.
The current paper represents a first step and its primary goal is exposition. To achieve these objectives, the remainder of the paper is organized as follows: we first pursue links between Conjoint and Functional Measurement and Discrete Choice Theory; next we use simplified examples to illustrate some necessary experimental conditions for data collection; we then discuss some of the approaches to data analysis and provide some illustrative examples; finally, we provide a discussion of some of the key notions developed in the paper, emphasizing unresolved issues and possible research implications.

KEY CONCEPTS IN MULTIATTRIBUTE JUDGMENT AND CHOICE BEHAVIOR

Multiattribute Judgment Methods

In multiattribute judgmental research the main object of interest is the development of some algebraic description which provides insight into the judgment process under study. Typically, in marketing and consumer research one would like to interpret these judgments as estimates of levels of value, worth or utility. Hence, one postulates the existence of a utility function defined over the space spanned by the levels of the attributes, i.e.,

Vi = f(v(xik)),   (1)

where Vi is the overall value (worth, utility, etc.) of the i-th (i = 1, 2, ..., I) multiattribute alternative (typically a treatment combination drawn from a factorial array); v(xik) is the scale value or part-worth (conditional value or utility, etc.) of the k-th (k = 1, 2, ..., K) attribute x for the i-th alternative; and f is a function or combination rule defined over the v(xik) that maps the scale values into the overall values. In practice, of course, the Vi are estimated from judgment data supplied by an individual (or some group of individuals). We concentrate our attention on the Functional Measurement paradigm because the choice approach outlined in a later section will provide metric data.
Our aim in designing choice experiments will therefore be to design the experiment such that it can be analyzed statistically in a manner analogous to that of Functional Measurement (see Anderson 1981). Thus, Functional or Conjoint Measurement experimental design conditions must be satisfied in order to permit inferences about functional form to be drawn from studies of choice behavior.

Multiattribute Choice Models

Notwithstanding the work by Tversky (1972) on the Elimination By Aspects model, currently the only seriously considered empirical multiattribute choice models are variants of those developed in econometrics. Manski and McFadden (1981) recently have compiled a state-of-the-art collection of papers, Amemiya (1981) has published an excellent review, and Hensher and Johnson (1981) have produced an introductory text; hence, we will discuss only the essentials necessary for exposition. In discrete choice theory attention centers upon the choices made by one or more individuals among competing alternatives. These choices are assumed to be driven by a utility function which may be characterized by a systematic or estimable component and a random or unobservable component. Applied econometric choice models typically assume that all individuals share a common or representative utility function but have idiosyncratic tastes and preferences and/or unobserved influences on choice. Such "disturbances" are assumed independent and drawn from a particular distribution. As Amemiya (1981) notes, ". . . what kind of QR [Qualitative Response] model one gets is equivalent to what distribution one assumes for [the differences in the errors]." For example, assuming a double exponential distribution leads to the MNL or Luce model, while assuming a normal distribution leads to the probit or Thurstone Comparative Judgment model (Luce 1977; Amemiya 1981; Manski and McFadden 1981; Hensher and Johnson 1981).
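A small simulation illustrates the distributional point in modern computational notation: adding independent Gumbel (double exponential) disturbances to fixed systematic utilities and choosing the alternative with maximum realized utility reproduces MNL choice probabilities. The following Python sketch is purely illustrative; the utility values are hypothetical.

```python
import math
import random

# Simulated choice with Gumbel ("double exponential") disturbances:
# on each trial every alternative's utility is V plus a random draw,
# and the maximum is chosen.  Choice frequencies approach the
# MNL/Luce form exp(V_a) / sum_j exp(V_j).
random.seed(0)
V = {"a": 1.0, "b": 0.5, "c": 0.0}   # hypothetical systematic utilities

def gumbel():
    # inverse-CDF draw from a standard Gumbel distribution
    return -math.log(-math.log(random.random()))

n = 200_000
wins = {k: 0 for k in V}
for _ in range(n):
    chosen = max(V, key=lambda k: V[k] + gumbel())
    wins[chosen] += 1

denom = sum(math.exp(v) for v in V.values())
for k in V:
    print(k, round(wins[k] / n, 3), "vs MNL:", round(math.exp(V[k]) / denom, 3))
```

Replacing the Gumbel draw with a normal draw would instead approximate the probit (Thurstone) model.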
Previous applications of discrete choice models in marketing and econometrics have exhibited little interest in the choice process per se; rather, a typical application assumes a functional form for the errors and estimates the parameters of the assumed model from observed choice data (see, e.g., Gensch and Recker 1979; Hausman and Wise 1978; Westin and Watson 1975; Punj and Staelin 1978). Because choice data collected from uncontrolled field observations are subject to a variety of obvious limitations, it would be convenient to develop experimental analogs to permit the controlled study of choice behavior. In particular, the use of experimental design principles permits one to insure independence of attribute vectors and their cross-products in the FM/CM designs and can be used to guarantee satisfaction of rejection tests for the choice axiom and/or to test competing hypotheses (e.g., nested vs. non-nested processes) by the construction of appropriate choice set generating designs. Rejection of the Luce model as a first approximation would open the way for the consideration of more complicated models such as Generalized Extreme Value models or probit models (Hensher and Johnson 1981; McFadden 1980; Currim 1982). Thus, with the experimental approach we outline, one will be able to test hypotheses about process or functional form and make inferences useful for policy if the errors in the choice process conform to our distributional assumptions. In fact, as with econometric choice models, the experiments deal directly with choice behavior and permit one to make powerful inferences about the effects of policy on competing alternatives. Judgment models currently achieve this by a most indirect simulation route that has an inadequate theoretical base, cannot be tested except against actual market behavior, and is therefore of little use in studying choice processes.
The methods developed in the next section attempt to blend the best of both judgment and choice techniques. We require a model for the choice process and a statistical method for analyzing the choice data to draw inferences analogous to those of interest in Functional Measurement. We concentrate upon the MNL or Luce model because of its applied importance in marketing (see, e.g., Reibstein 1978; Batsell 1980; Batsell and Lodish 1981) and in econometrics (see, e.g., Amemiya 1981; Hensher and Johnson 1981; Manski and McFadden 1981; Theil 1971). The MNL model may be written as follows:

p(a|A) = e^Va / SUM(j in A) e^Vj,   (2)

where p(a|A) is the probability of choosing an alternative, a, from a set of competing alternatives, A; Va and Vj are the scale values or utilities of alternatives a and j, respectively; e is the natural constant 2.7183, the base of the natural logarithms; and SUM(j in A) denotes summation over all j alternatives in A. It is necessary to impose some structure on the Vj's of equation 2. We assume the linear-in-the-parameters and additive form of analysis of variance or multiple linear regression. That is,

Vj = SUM(k) bk Xkj,   (3)

where Vj is the systematic or measurable component of utility; Xkj is a vector of k (= 1, 2, ..., K) independent attributes measured on the j alternatives; and bk is a vector of K parameters to be estimated. The bk parameters permit the analyst to measure the scale values or utilities associated with levels of each attribute. Thus, if one can design choice experiments based upon the MNL model and Functional Measurement, it is statistically possible to estimate the bk by various means discussed in a later section. In the next section we discuss experimental design considerations for integrating Functional and Conjoint Measurement and Discrete Choice Analysis.
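Equations 2 and 3 can be made concrete with a short computational sketch. In the following Python fragment the attribute levels and bk weights are hypothetical illustration values, not estimates from any experiment:

```python
import math

# Equation 3: linear-in-the-parameters utilities, V_j = sum_k b_k * X_kj,
# fed through equation 2, the MNL (Luce) choice rule.
b = [0.8, -0.5]                       # hypothetical part-worth weights b_k
alternatives = {                      # hypothetical attribute vectors X_kj
    "brand_1": [1.0, 2.0],
    "brand_2": [2.0, 1.0],
    "brand_3": [0.0, 0.0],
}

def utility(x):
    # V_j = sum_k b_k * X_kj
    return sum(bk * xk for bk, xk in zip(b, x))

def mnl_probabilities(alts):
    # p(a|A) = exp(V_a) / sum over j in A of exp(V_j)
    expV = {j: math.exp(utility(x)) for j, x in alts.items()}
    denom = sum(expV.values())
    return {j: ev / denom for j, ev in expV.items()}

p = mnl_probabilities(alternatives)
print({j: round(pj, 3) for j, pj in p.items()})
```

The probabilities necessarily sum to one over the choice set, and changing any single Xkj changes the shares of all alternatives through the common denominator.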
EXPERIMENTAL DESIGN CONSIDERATIONS

Simple Choice Concepts

Before developing the logic for Functional Measurement-like choice experiments, we consider simple choice experiments (see Louviere 1981). A "simple" choice experiment is one in which there are j (= 1, 2, ..., J) alternatives arranged in one or more choice sets. Subjects choose one alternative from each choice set or allocate fixed resources to the alternatives according to the response instructions: the most preferred, the most likely to be purchased, etc. (compare Batsell 1980; Batsell and Lodish 1981). Suppose there are J choice alternatives of interest. There are 2^J possible choice sets (combinations of the J alternatives) because each alternative can be either in or out of a choice set. The sets of choice sets, therefore, constitute a factorial design in which each alternative is a two-level factor (see Louviere 1981). If the MNL or Luce model is true, the conditional choice probabilities are sufficient to parameterize the V's. Thus, a sufficient condition to estimate the parameters of the model is to choose a main effects, fractional factorial design from the 2^J factorial. Such fractions are easy to construct, and design plans are readily available in published sources (e.g., Hahn and Shapiro 1966; National Bureau of Standards 1957). A necessary condition to fully test the model is the observation of choices over all choice sets, because any significant joint probabilities are contrary to the MNL model. In practice, sufficiently strong rejection tests probably can be achieved by choosing design fractions that permit the estimation of some two-way interaction effects. If necessary, blocking procedures could be employed to obtain data over all sets of choice sets. Louviere and Woodworth (1982) have investigated the efficiency of 2^J fractional designs compared with the all-pairs approach or the full choice set approach (e.g., Reibstein 1978).
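The 2^J choice set generating idea is easy to mechanize. A minimal Python sketch, assuming J = 4 and discarding the empty and single-alternative sets (which provide no choice information):

```python
from itertools import product

# Enumerate the 2^J factorial of choice sets: each alternative is a
# two-level factor, absent (0) or present (1) in a given choice set.
alternatives = ["I", "II", "III", "IV"]        # J = 4

choice_sets = []
for row in product([0, 1], repeat=len(alternatives)):
    s = [a for a, present in zip(alternatives, row) if present]
    if len(s) >= 2:        # a choice requires at least two alternatives
        choice_sets.append(s)

print(len(choice_sets))    # 2^4 = 16 rows, minus 1 empty and 4 singletons
```

A main effects fraction would then administer only a subset of these rows, selected from a published design plan, rather than all of them.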
The efficiency and independence of the estimates derived from the fractional designs were found to be superior to all pairs and virtually indistinguishable from the full choice set approach.

Double Conditional Designs

To integrate Functional Measurement notions with discrete choice methods, it is necessary first to develop an FM/CM type factorial or fractional factorial design. The treatment combinations generated by this first factorial design are then treated as two-level "factors" in a second choice set generating design, as described in the preceding paragraph. Because there are two designs involved, we refer to the combined experimental design as a Double Conditional Design: the choice responses are conditional on both the FM/CM treatment combinations and the choice set generating design (see Louviere 1981). Consider, for example, a simple 2^2 factorial as an FM/CM design (two attributes at two levels) and a 2^4 complete choice set generating design (each of the four profiles from the FM/CM design treated as a factor with two levels--absent or present). Which of the four treatment combinations of the FM/CM design are shown to the subject to make a choice or allocate resources is dictated by which treatment combinations are "present." It is always possible and sometimes useful to add an additional choice alternative such as "other" or "none." This "base" alternative can be scaled to be the zero point on the utility scale, and the remaining scale values will be interpretable relative to this alternative. To see this, let there be four alternatives (I, II, III, IV) plus a "none" alternative that appears in every choice set. Now, if we consider the odds of choosing alternative I relative to the alternative "none," we can simplify the algebra to obtain:

p(I|A) / p(none|A) = e^V(I) / e^V(none),   (4)

where all terms are either self-evident or previously defined in equation 2.
Taking logarithms to the base e of both sides of equation 4 yields:

ln [p(I|A) / p(none|A)] = V(I) - V(none).   (5)

Equation 5 tells us that differences in utility are associated with log odds ratios of choice probabilities, which are relative to one or the other alternative. It is therefore convenient to select one alternative to act as a base that is meaningful to the research project and has desirable statistical properties. In many research problems and in practical applications it may be necessary to reduce the size of one or both of the designs in the Double Conditional design. Depending upon whether one wants to sacrifice information about the form of the decision model involved in choice or the nature of the choice process (and recognizing that they are interlinked), one can fractionate one or the other or both of the designs. A "base" alternative can be used to retain as much of the FM/CM design as possible. Because one can set the origin of the utilities arbitrarily, one treatment combination (or more) can serve as the same base in each separate choice set generating design, insuring that all the separate choice results have the same common origin. This permits one to block the FM/CM design into different sets of choice sets. A word of caution is necessary for one class of FM/CM design problems: if all of the attributes are quantitative and if all are conditionally monotonically related to overall utility, there could be dominance problems. In particular, for such designs very few of the treatment combinations will neither dominate others nor be dominated by others. Such sets of treatment combinations form a Pareto set, and generally only the main effects of the attributes will be estimable. More research is necessary to develop generalizations about this class of choice problems.

Other FM/CM Choice Designs

There are a very large number of possibilities for constructing other types of choice designs that will permit one to integrate FM/CM notions and discrete choice methods.
We will pursue these in future papers; however, other types of designs which are analogous to previous econometric applications of discrete choice theory can be constructed from fractional design principles. Often, but not necessarily, such choice problems involve choice sets of a fixed size. For example, consider a consumer patronizing a sandwich shop and facing a "fixed" menu. Suppose there are many such shops with nearly identical menus but different prices for each menu item. This problem might be approached by first creating menu items with a factorial or fractional factorial design: e.g., sandwich type x drink type x side order type. To isolate and model the competitive structure of price effects, we might treat each menu item (treatment combination) as a factor with two levels of price and create a main effects plan to vary prices (e.g., see Hahn and Shapiro 1966). Subjects might indicate which item they would be most likely to purchase, or allocate some fixed set of resources to indicate choices over, say, five luncheons. As with Double Conditional designs, one might add a Base alternative, for example, "most likely go to another place for lunch." Many other examples could be provided: e.g., choices among several makes of autos in a particular class, each of which differs in terms of cost, miles per gallon, warranty, etc.

A BRIEF DISCUSSION OF ANALYTIC METHODS

The designs discussed in the previous section generate choice or allocation data. Whether these data are treated at the level of the single individual, or by combining single individuals into a large data set, or by aggregating the choices of the single individuals into frequencies is a matter which depends upon the requirements of the research, hypotheses to be tested, availability of computer programs, a priori information, etc. This paper concentrates upon analysis of aggregated choices or frequencies; later papers shall pursue individual level and repeated measures analyses.
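Before turning to specific estimators, the base alternative logic of equations 4 and 5 can be illustrated with aggregated frequencies. In the following Python sketch the counts are hypothetical; because a common "none" alternative appears in both choice sets, the recovered scale values share one origin (V for "none" set to zero):

```python
import math

# ln(f_a / f_none) estimates V_a - V_none (equation 5 applied to
# observed frequencies), so scale values from separate choice sets
# are expressed on a common scale anchored at the base alternative.
choice_sets = {
    "set_1": {"I": 120, "II": 60, "none": 20},
    "set_2": {"III": 90, "IV": 45, "none": 15},
}

utilities = {}
for counts in choice_sets.values():
    base = counts["none"]
    for alt, f in counts.items():
        if alt != "none":
            utilities[alt] = math.log(f / base)   # V_alt - V_none

for alt, v in sorted(utilities.items()):
    print(alt, round(v, 3))
```

In practice these log odds would be weighted, because the frequencies are heteroscedastic; that is the weighted least squares refinement discussed next.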
The analytical approach of Batsell (1980) and Batsell and Lodish (1981) for allocation data from choice experiments may be applied to the aggregated choice frequencies or allocations. However, we recommend a weighted least squares approach because the data are frequencies and therefore generally will be heteroscedastic. Louviere and Woodworth (1982) outline a simple, weighted least squares approach which can be shown to produce consistent estimates in large samples. According to Amemiya (1981), a "large sample" for choice analysis is 30 subjects per cell. Frequency data contain more information than discrete choices, which is an advantage of aggregated data. Indeed, as Amemiya states, "If a researcher had control over the values of the independent variables, he should be advised to design an experiment so that many observations per cell will be produced if possible." This is the basic idea behind the integration attempted in this paper. The Louviere and Woodworth approach employs dummy variables to represent choice set effects, which are the differences in the denominator of the MNL model in each choice set. I - 1 dummies are required to uniquely absorb the variance due to these effects (where I is the total number of choice sets used in the experiment). In contrast, Batsell (1980) and Batsell and Lodish (1981) use the geometric mean of the frequencies in each choice set to absorb these same differences. These methods yield the same parameter estimates for the attribute utilities if the appropriate weighted least squares estimation procedure is employed. The estimates must be the same because the models are equivalent for aggregate frequency data; only the estimation procedure differs. The Batsell approach removes the denominator effects by incorporating them into the observed choice frequencies and then regresses this dependent measure (the residual after removing the denominator) against a set of orthogonal polynomial dummies constructed from the experimental design.
The Louviere and Woodworth method simply uses non-orthogonal dummy variables to capture the same effects. If statistical inferences about the parameters are to be made, two cautions about the Batsell method are in order: (1) one must account for the denominator degrees of freedom even if one absorbs the effects in the dependent measure; (2) the unweighted version of the Batsell procedure will produce less efficient estimates than the weighted version. Proofs of both these points may be obtained from the author upon request. It should be noted also that the Batsell method otherwise shares the positive virtues of the approach proposed in this paper, with the exceptions that it does not deal explicitly with the discrete choice problem or the experimental design problems posed in this paper. The Louviere and Woodworth method relies upon the denominator of equation 2 being constant for all the alternatives in the i-th choice set:

p(a|Ai) = e^Va / Ki,   (6)

where Ki is the denominator of the i-th choice set (i = 1, 2, ..., I), and other terms are as previously defined. Taking logarithms of both sides of equation 6, and letting the observed choice frequencies be the estimates of the probabilities, we have:

ln [fi(a|Ai)] = Va - ln(Ki),   (7)

where all terms are as previously defined. The ln(Ki) terms are accounted for by the choice set dummies mentioned above. Another approach is due to Theil (1971) (see also Hensher and Johnson 1981 and Amemiya 1981). Following the logic of equations 4 and 5, Theil uses the log odds as the dependent variable in a weighted least squares regression. That is, the observed choice frequency of alternative a in the i-th choice set is divided by the observed choice frequency of the Base alternative in the i-th choice set. This eliminates the Base alternative and the choice set dummies, and therefore reduces the number of observations required.
The form of this regression equation would be as follows:

ln [f(a|A) / f(base|A)] = SUM(k) bk (Xka - Xk base),   (8)

where ln [f(a|A) / f(base|A)] is the natural logarithm of the odds ratio of the choices of a in choice set A to the choices of the Base alternative in set A; Xka and Xk base are vectors of attributes which describe alternative a and the base alternative; and bk are empirical parameters to be estimated. If the Base alternative has the same attributes as other alternatives, differing only in attribute levels, one could represent the differences in attribute levels as implied by equation 8. However, if the Base alternative is constant in all choice sets, one can dispense with differences. Other possible analytical methods include weighted analysis of variance (a Functional Measurement analog), nonlinear least squares, maximum likelihood, etc. These and other approaches are considered in sources such as Bishop, Fienberg and Holland (1975), Manski and McFadden (1981), Amemiya (1981), and Theil (1971). Choice of method depends upon one's a priori beliefs and information, research goals, program availability and difficulty of use, etc. The weighted least squares approach we propose is a modified minimum chi-square technique, which should produce consistent estimates in large samples (see above references), but the estimates may not be as efficient as those derived by other methods, such as maximum likelihood. The weighted least squares methods have the main advantages of simplicity, ease of use, and likely availability of computer programs.

DISCUSSION AND CONCLUSIONS

This paper attempted to integrate key ideas in Functional and Conjoint Measurement with key ideas in Discrete Choice Theory. We concentrated upon experimental design as the integrating factor because such a discussion should be familiar to FM/CM researchers and because the experimental analogs to Discrete Choice Theory have heretofore been ignored.
We concentrated on the collection and analysis of aggregate choice data and the MNL model because such data can be analyzed with familiar general linear models procedures, likely to be known to most FM/CM researchers, and because it simplifies the discussion of the integration. Researchers familiar with Discrete Choice Theory should be able to make the transition to individual level analyses, to be explored in later papers. A number of example empirical applications appear in Louviere and Woodworth (1982) and in Louviere (1982), both available from the present author upon request. Current statistical results for discrete choice models assume large sample requirements hold; such requirements are not satisfied for single replications of an experiment on one individual. Nor has there been much work directed toward the analysis of large samples of individuals, each of whom faced an identical set of choice set treatments. Both of the previous problems are repeated measures problems which require special care in classical multivariate analysis and undoubtedly will also require care in the analysis of choice data. To date, there have been no empirical studies which have employed such data, and to our knowledge there has been little statistical attention given to this problem. In addition, one would ordinarily wish to include measures of demographics and/or psychographics in the models to account for individual differences, and no work of this kind is available for the repeated measures case, either. Present alternatives include a priori grouping of subjects and aggregation of their choices into frequency data for analysis by means of the methods described in this paper, or a posteriori grouping and aggregation into frequency data based upon some measure of similarity between individuals' choice responses. In the case of allocation data, of course, one can appeal to large sample assumptions and estimate individual level models.
Such assumptions, if approximately satisfied, permit one also to satisfy repeated measures assumptions and analyze the vectors of individual parameters in a two-stage analysis, the first stage of which involves estimation of individual models and the second of which involves associating the derived coefficient vectors with demographic and/or other types of covariates. An advantage of the choice or allocation approach is that it simulates market behavior of direct interest to researchers and policy makers alike. Current applications of FM/CM or other multiattribute techniques require the estimation of individual judgment equations combined with an algorithm to simulate choices. Typically, the choice simulation algorithms employ ad hoc choice rules such as "highest predicted utility equals first choice," and/or other options such as transforming the predicted utilities according to a Luce Choice Model form (see equation 2) and then summing the predictions over the sample to derive market share estimates (see, e.g., Green, Carroll and Goldberg 1981; Curry, Louviere and Augustine 1981, 1982; Green, DeSarbo and Kedia 1980). Such methods are ad hoc and have a variety of important limitations which have yet to be adequately examined, such as the following: one is simulating a probabilistic process by means of a system of deterministic models; one cannot test any choice process models because the choice data are artifactual; one cannot test the resulting choice estimates except by recourse to real market data; hence, both internal and external validity are unknown. The choice experiments described in this paper, in contrast, directly address the problem of aggregate market share behavior, and permit the observation of actual choice behavior and the testing of choice hypotheses; hence, the approach has high internal validity and can be tested for external validity at least as easily as the FM/CM simulation approach.
Moreover, as has been suggested, the methods can be applied at the individual level, leading to applications similar to CM/FM. The proposed approach also has advantages over the approaches proposed by Reibstein (1978), which involve the use of resource allocations over the full choice set, or Batsell (1980) and Batsell and Lodish (1981), which suggest the use of all choice sets. With the Reibstein approach, one must assume the Luce Model to be true without the ability to test this hypothesis. If the Luce Model fails due to similarity effects or the like, the forecasts produced by the Reibstein method could be greatly in error. With the Batsell method, the use of all choice sets would be precluded in much applied work. Hence, the approach discussed in this paper represents a useful generalization of the Batsell and Reibstein ideas. The approach, however, will not be appropriate for some problems, and it does not replace FM/CM methods or econometric applications of Discrete Choice Theory. Rather, it should be viewed as an alternative tool which can provide new and different insights in basic and applied research in choice behavior and which is complementary to FM/CM methods or traditional econometric methods. This approach should contribute insights into a variety of problems not currently addressable by means of either FM/CM or econometric discrete choice techniques.
REFERENCES
Amemiya, T. (1981), "Qualitative Response Models: A Survey," Journal of Economic Literature, 19, 1483-1536.
Anderson, N. H. (1981), Foundations of Information Integration Theory, New York: Academic Press.
Batsell, R. R. (1980), "Consumer Resource Allocation Models at the Individual Level," Journal of Consumer Research, 7, 78-87.
Batsell, R. R. and Lodish, L. M. (1981), "A Model and Measurement Methodology for Predicting Individual Consumer Choice," Journal of Marketing Research, 18, 1-12.
Bishop, Y. M. M., Fienberg, S. E. and Holland, P. W. (1975), Discrete Multivariate Analysis: Theory and Practice, Cambridge, Mass.: MIT Press.
Currim, I. S. (1982), "Predictive Testing of Consumer Choice Models Not Subject to Independence of Irrelevant Alternatives," Journal of Marketing Research, 19, 208-22.
Curry, D., Louviere, J. J. and Augustine, M. J. (1981), "On the Insensitivity of Brand-Choice Simulations to Attribute Importance Weights: A Comment on a Paper by Green, DeSarbo and Kedia," Decision Sciences, 502-16.
Curry, D., Louviere, J. J. and Augustine, M. J. (1982, in press), "The Aggregate Effects of Induced Changes in Consumer Decision Structures," Research in Marketing.
Gensch, D. H. and Recker, W. W. (1979), "The Multinomial Multi-Attribute Logit Choice Model," Journal of Marketing Research, 16, 124-32.
Green, P. E. and Srinivasan, V. (1978), "Conjoint Analysis in Consumer Research: Issues and Outlook," Journal of Consumer Research, 5, 103-23.
Green, P. E., DeSarbo, W. S. and Kedia, P. K. (1980), "On the Insensitivity of Brand-Choice Simulations to Attribute Importance Weights," Decision Sciences, 11, 439-50.
Green, P. E., Carroll, J. D. and Goldberg, S. M. (1981), "A General Approach to Product Design Optimization via Conjoint Analysis," Journal of Marketing, 45 (3), 17-27.
Hahn, G. J. and Shapiro, S. S. (1966), "A Catalog and Computer Program for the Design and Analysis of Orthogonal Symmetric and Asymmetric Fractional Factorial Experiments," General Electric Research and Development Center Technical Report No. 66-C-165, Schenectady, N.Y.: Research and Development Center.
Hensher, D. A. and Johnson, L. W. (1981), Applied Discrete Choice Modeling, London: Croom Helm/New York: John Wiley.
Horowitz, J. (1981), "Sampling, Specification and Data Errors in Probabilistic Discrete-Choice Models," in Hensher and Johnson, op. cit., 417-36.
Krantz, D. H. and Tversky, A. (1971), "Conjoint-Measurement Analysis of Composition Rules in Psychology," Psychological Review, 78, 151-69.
Louviere, J. J. (1981), "On the Identification of the Functional Form of the Utility Expression and Its Relationship to Discrete Choice," Appendix B, in Hensher and Johnson, op. cit., 385-415.
Louviere, J. J. (1982), "An Experimental Approach for Integrating Conjoint and Functional Measurement with Discrete Choice Theory," Working Paper No. 56, Institute of Urban and Regional Research, The University of Iowa.
Louviere, J. J. and Woodworth, G. (1982), "Design and Analysis of Simulated Consumer Choice or Allocation Experiments: An Approach Based on Aggregated Data," Working Paper No. 82-7, College of Business Administration, University of Iowa.
Luce, R. D. (1959), Individual Choice Behavior, New York: John Wiley.
Luce, R. D. (1977), "The Choice Axiom After Twenty Years," Journal of Mathematical Psychology, 15 (2), 215-33.
Manski, C. F. and McFadden, D., Eds. (1981), Structural Analysis of Discrete Data, Cambridge, Mass.: MIT Press.
McFadden, D. (1980), "Econometric Models for Probabilistic Choice Among Products," The Journal of Business, 53 (3), Part 2, 513-30.
National Bureau of Standards (1957), "Fractional Factorial Experiment Designs for Factors at Two Levels," Technical Report NBS 48, Applied Math Series, Washington, D.C.
Punj, G. N. and Staelin, R. (1978), "The Choice Process for Graduate Business Schools," Journal of Marketing Research, 15, 588-98.
Reibstein, D. J. (1978), "The Prediction of Individual Probabilities of Brand Choice," Journal of Consumer Research, 5, 163-8.
Theil, H. (1971), Principles of Econometrics, New York: John Wiley.
Tversky, A. (1972), "Elimination by Aspects: A Theory of Choice," Psychological Review, 79, 281-99.
Westin, R. B. and Watson, P. L. (1975), "Reported and Revealed Preferences as Determinants of Mode Choice Behavior," Journal of Marketing Research, 12, 282-9.
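The conclusion notes that forecasts based on an untested Luce Model can be greatly in error when similarity effects are present. The classic red-bus/blue-bus case illustrates why. The sketch below (a minimal illustration with hypothetical utility values, not part of the original paper's method) computes Luce choice probabilities, P(i | C) = u_i / Σ_j u_j, before and after a near-duplicate alternative joins the choice set:

```python
# Luce/MNL choice rule: P(i | C) = u_i / sum of all u_j, with u_j > 0.
def luce_probs(utilities):
    total = sum(utilities.values())
    return {alt: u / total for alt, u in utilities.items()}

# Hypothetical utilities: a car and a red bus, equally attractive.
before = luce_probs({"car": 1.0, "red bus": 1.0})
# Luce predicts car = 1/2, red bus = 1/2.

# Introduce a blue bus that is essentially identical to the red bus.
# The Luce model mechanically predicts 1/3 for each alternative,
# although intuition (and Tversky's 1972 critique) says the car's
# share should stay near 1/2, with the two buses splitting the rest.
after = luce_probs({"car": 1.0, "red bus": 1.0, "blue bus": 1.0})
```

A simulation approach that assumes the Luce rule without testing it would thus shift roughly a sixth of the car's predicted share to the buses, which is the kind of forecast error the text warns about.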
Authors
Jordan J. Louviere, University of Iowa
Volume
NA - Advances in Consumer Research Volume 10 | 1983