Aggregating Responses in Additive Conjoint Measurement

ABSTRACT - Procedures for application of additive conjoint measurement to group rather than individual data are discussed. A statistical interpretation of the model is proposed and a goodness-of-fit test examined.


David Curry and William Rodgers (1977) ,"Aggregating Responses in Additive Conjoint Measurement", in NA - Advances in Consumer Research Volume 04, eds. William D. Perreault, Jr., Atlanta, GA : Association for Consumer Research, Pages: 35-40.



David Curry, University of Iowa

William Rodgers (student), University of Iowa




Additive conjoint measurement has been applied primarily to individual level data in marketing research studies. For example, Green and Rao (1971) and Green and Wind (1975) give many micro-level applications, and Johnson (1974) states that since his approach "is concerned with value systems of individual consumers, the method is most appropriate for product categories where consumer desires are heterogeneous and where markets are highly segmented." These individual level applications often yield important insights, but if the model is to realize its full potential for practical marketing problems, changes should be made which also allow its use on aggregate data. The purpose of this paper is to discuss several issues in aggregating responses for additive conjoint measurement. These are:

1. An aggregation procedure should retain information about the variation of subjects around a "group" rank.

2. The model presently assumes the ranking judgments are infallible (errorless) rather than fallible.

3. A parametric goodness-of-fit test is needed.

The methods proposed for dealing with these issues are illustrated using the results of a pilot study of product attributes and consumer environmental concern.

Problem Background

The technique currently employed for aggregation is to average the rankings of several individuals and submit this average ranking to a conjoint scaling algorithm, such as Kruskal's MONANOVA (1965). The problem with this technique is its lack of an explicit model relating the individual rankings to the aggregate. Two aspects of such a model would be whether the individual rankings are compatible with a single overall ordering and how to deal with variation in the individual ranks relative to the overall ordering.

A serious issue here as in any model building paradigm is how to test the goodness-of-fit of the model. Using the average rank procedure the goodness-of-fit measure is the same as with a single person; i.e., stress. However the averaging process results in a considerable loss of information about the variation of individuals around an average rank position. Stress disregards this variation and hence is better suited as a goodness-of-fit index when just one ranking is analyzed.

These problems could be solved more reasonably if the additive conjoint model were statistical in nature rather than mathematical. A statistical model would allow for error in the input judgments and yield an explicit parametric goodness-of-fit test. The parametric assumptions required for a statistical model would be somewhat restrictive relative to the nonparametric version as it is presently known. However this added restriction is compensated for by gains in the analyst's ability to diagnose when (and why) the model is probably false.


The conjoint measurement model is usually interpreted as deterministic. The choice alternatives are assumed to be composed of m factors with ni levels for factor i. In a complete factorial design a subject would rank order the n1 x n2 x ... x nm possible "brands" which can be generated by varying over all levels on all factors. The final ranking is assumed to be error free or "infallible."

Thurstone's judgment scaling model provides a basis for conceptualizing "fallibility" in ordinal judgments [Torgerson (1958) Ch. 8]. The statistical assumptions introduced by Thurstone can be interpreted to account for error at two levels: (1) the fallibility of an individual's choice on a single trial relative to some "true" choice over many similar trials and (2) the fallibility of an individual's judgment relative to some "true" judgment of a group.

Individual Fallibility

For case (1) assume an individual must decide which of two alternatives xi or xj is preferred. [Only the ordinal judgment in a paired-comparison need be discussed since any rank order can be decomposed into a series of binary choices. The relation "is at least as preferred" is denoted ≳. A tilde, e.g., ỹ, denotes a random variable.] Thurstone assumed the judgment for this pair is not based on the subject's true perception of each alternative but on discriminal signals which are a function of the true perceptions and random error. Denoting the error associated with xi as di, the ordinal judgment is based on the signals

yi = xi + di   and   yj = xj + dj    (1)

then xi ≳ xj if and only if yi ≥ yj.

The di represent error which might be present due to momentary lapses in concentration, effects of experimental context, physiological distortions and other influences on a particular trial. This is essentially a regression model with each di assumed normally distributed with mean zero and variance σi². These assumptions along with simplifying conditions (e.g., σi² = σ² for all i) lead Thurstone to the law of comparative judgment. [Details can be found in Torgerson (1958), Ch. 8, 9, 10.]
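The discriminal process behind equation 1 can be sketched in a few lines of code. The perception values, dispersion, and trial count below are illustrative assumptions, not quantities from the paper; the simulation simply checks that the proportion of trials favoring xi approaches the normal probability implied by the model.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Assumed "true" perceptions of two alternatives and a common discriminal
# dispersion (Thurstone's Case V simplification).
x_i, x_j = 1.0, 0.4
sigma = 1.0
trials = 10_000

# Discriminal signals (eq. 1): true perception plus trial-to-trial error.
y_i = x_i + rng.normal(0.0, sigma, trials)
y_j = x_j + rng.normal(0.0, sigma, trials)

# Proportion of trials on which x_i is judged at least as preferred as x_j.
p_ij = float(np.mean(y_i >= y_j))

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# The model implies p_ij -> Phi((x_i - x_j) / (sigma * sqrt(2))).
expected = phi((x_i - x_j) / (sigma * math.sqrt(2.0)))
print(round(p_ij, 3), round(expected, 3))
```

With independent errors of equal variance, the difference yi − yj has standard deviation σ√2, which is why the √2 appears in the expected proportion.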

Group Fallibility

The second interpretation of error (in which the analyst aggregates judgments over individuals) is more pertinent for the issues discussed here. The group is assumed to have a true perception for each alternative but individuals in the group may have perceptions which vary around the true one. The error term accounts for this type of fallibility.

In either case the experimental data which summarizes the ordinal judgments is the proportion of times xi > xj. These proportions can be generated by replicating over trials for case 1, over people for case 2 or in combination. Since the emphasis in this paper is to illustrate aggregation over people the second mode is used. An advantage of using paired-comparison proportions versus average ranks in forming a single scale for conjoint measurement is that the resulting ordering can be tested for unidimensionality. [see Torgerson (1958, p. 185)]
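Aggregating over people (case 2) amounts to building a proportions matrix from individual orderings. A minimal sketch, using invented rankings for five hypothetical subjects over four alternatives:

```python
import numpy as np

# Hypothetical rankings (1 = most preferred) of four alternatives by five
# subjects; the data are invented purely for illustration.
ranks = np.array([
    [1, 2, 3, 4],
    [2, 1, 3, 4],
    [1, 3, 2, 4],
    [1, 2, 4, 3],
    [2, 1, 3, 4],
])

n_subj, n_alt = ranks.shape
P = np.zeros((n_alt, n_alt))
for i in range(n_alt):
    for j in range(n_alt):
        if i != j:
            # p_ij: proportion of subjects placing i above j.
            P[i, j] = np.mean(ranks[:, i] < ranks[:, j])

print(P)
```

Since each subject supplies a complete ranking with no ties, every off-diagonal pair satisfies P[i, j] + P[j, i] = 1.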


Using the judgment scaling assumptions in the context of additive conjoint measurement requires certain interpretations which should be made explicit. Suppose the alternatives to be ranked are composed of just two factors a, b with levels aj (j = 1, . . ., r) and bk (k = 1, . . ., c). Crossing these factors gives the alternatives xi (i = 1, . . ., r·c). In the above discussion the error assumptions were placed on the combinations, xi. However the additive conjoint model assumes there exist scale values or utilities for the factor levels which are additive. Denoting these values aj and bk, the (errorless) model is:

xi = aj + bk   (2)

Since the statistical model assumes the xi are subject to error it is reasonable to relate this to error on the right hand side of equation 2. Several interpretations exist but perhaps the most transparent is to assume possible error in the implicit judgments on each factor. This model is written:

xi + di = (aj  + eaj) + (bk + ebk )   (3)

where eaj and ebk are random error terms associated with levels j and k on a and b respectively.

It is not the purpose here to develop the many consequences of this formulation. Some work has been attempted in this vein by Falmagne (1976). However it is important for the goodness-of-fit test used later to examine some ways the errors on the right hand side of eq. 3 may combine. These are simply listed for reference.

1. The additive independence axiom in an additive conjoint structure (see footnote 4) implies that whatever parametric distribution the e follow, these distributions are independent.

2. If the e are normally distributed then so are the d.

3. Under certain simplifying conditions, e.g., constant variance, the distribution for the di can be completely specified.

We remark that the errors on the right in eq. 3 are "unobservable" in the additive conjoint paradigm used in marketing, implying their separate contributions to the di cannot be analyzed. [This is in contrast to Falmagne's (1976) paradigm where the factors are tones varying in intensity--a continuous attribute possessing ratio scale properties. The interest in his paradigm is in characterizing the psychophysical transform employed by a subject judging overall loudness based on tones given simultaneously in the left and right ears. In Falmagne's paradigm the errors on each factor can be estimated since replications of a set tone are possible in each ear.]

The goodness-of-fit test used below is not sensitive to the normality assumption for the di. It is somewhat sensitive to the assumption the di have constant variance but it is primarily a test of whether the data are commensurate with the unidimensional continuum implied by the additive conjoint model.
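The way factor-level errors combine in eq. 3 can be checked by simulation. The error variances below are arbitrary assumptions; the point is only that independent normal errors on the two factors produce a composite error d that is normal with variance equal to the sum of the factor variances.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Assumed error variances for the implicit judgments on factors a and b.
var_a, var_b = 0.3, 0.2

# Independent normal errors on each factor (point 1 above).
e_a = rng.normal(0.0, np.sqrt(var_a), n)
e_b = rng.normal(0.0, np.sqrt(var_b), n)

# The composite error d = e_a + e_b is then normal (point 2) with
# Var(d) = var_a + var_b; under constant variance across combinations
# the distribution of the d is completely specified (point 3).
d = e_a + e_b
print(round(float(d.mean()), 3), round(float(d.var()), 3))
```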


The statistical framework introduced above provides the basis for a goodness-of-fit test. The test assumes the experiment has generated a sample proportions matrix with entry pij defined as the proportion of times xi > xj. The test compares the pij to fitted proportions Pij generated by the model. Under the assumption that the errors di are independent, identically distributed normal random variables the fitted proportions are given by:

Pij = Φ(xi - xj)   (4)

where Φ is the standard normal cumulative distribution function.

The details for this idea in the context of the law of comparative judgment are contained in Torgerson (1958) and need not be repeated here. The critical concept is that like the law of comparative judgment, additive conjoint measurement results in a unidimensional scale for the alternatives xi. The normality assumption allows differences between fitted scale values to be transformed into fitted proportions under the normal curve.
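A Case V scaling from a proportions matrix, followed by fitted proportions under eq. 4, can be sketched as follows. The input matrix is invented, and the bisection inverse of the normal CDF is used only to keep the sketch free of dependencies beyond NumPy and the standard library.

```python
import math
import numpy as np

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_inv(p):
    """Inverse normal CDF by bisection (adequate for illustration)."""
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical observed proportions p[i, j] = proportion preferring i over j.
p = np.array([[0.50, 0.70, 0.85],
              [0.30, 0.50, 0.65],
              [0.15, 0.35, 0.50]])

# Case V scale value for alternative i: the mean of the unit normal
# deviates z(p[i, j]) over j (Torgerson, Ch. 8).
z = np.vectorize(phi_inv)(p)
scale = z.mean(axis=1)

# Fitted proportions under eq. 4: P[i, j] = Phi(x_i - x_j).
fitted = np.vectorize(phi)(scale[:, None] - scale[None, :])
print(np.round(scale, 3), np.round(fitted, 3))
```

The fitted matrix is automatically consistent with a single unidimensional scale; comparing it cell by cell with the observed matrix is what the goodness-of-fit test below formalizes.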

Mosteller (1951) showed that the statistic

χ² = N Σi<j [arcsin(2pij - 1) - arcsin(2Pij - 1)]²

is distributed as chi-square, where the inverse-sine transform is taken in radians and N is the number of judgments per pair. The appropriate degrees of freedom with n choice alternatives is 1/2(n-1)(n-2).
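A version of Mosteller's statistic can be sketched directly from the two matrices. The observed and fitted matrices and the sample size below are assumed values for illustration, not the study's data.

```python
import numpy as np

def mosteller_chi2(p_obs, p_fit, N):
    """Compare observed and fitted paired-comparison proportions via the
    inverse-sine transform (radians); N is the number of judgments per pair."""
    theta_obs = np.arcsin(2.0 * p_obs - 1.0)
    theta_fit = np.arcsin(2.0 * p_fit - 1.0)
    n = p_obs.shape[0]
    iu = np.triu_indices(n, k=1)          # each pair counted once
    chi2 = N * np.sum((theta_obs[iu] - theta_fit[iu]) ** 2)
    df = (n - 1) * (n - 2) // 2           # 1/2 (n-1)(n-2) degrees of freedom
    return chi2, df

# Illustrative matrices (not the paper's data).
p_obs = np.array([[0.50, 0.72, 0.84],
                  [0.28, 0.50, 0.66],
                  [0.16, 0.34, 0.50]])
p_fit = np.array([[0.50, 0.70, 0.86],
                  [0.30, 0.50, 0.64],
                  [0.14, 0.36, 0.50]])

chi2, df = mosteller_chi2(p_obs, p_fit, N=100)
print(round(float(chi2), 3), df)
```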

The statistical model and the goodness-of-fit test are illustrated below. The section following discusses the test results and its usefulness for applied marketing problems.


The substantive problem of interest to the researchers was the effect of certain economic, performance and ecological considerations in the market acceptance of antiperspirants. A set of eight hypothetical antiperspirants was created by varying each of three factors over two levels. Figure 1 defines the factors and gives a synopsis of the anchoring cues for the high and low levels on each. As the figure indicates the high and low levels on price were defined relative to the means of existing products in the local market. [A survey of local drugstores located eighteen different brands of antiperspirants. The low and high prices observed in the local market were $.98 for Walgreen's private label and $1.90 for Dry Ban.] A similar approach was used for the effectiveness dimension with two levels suggested by a content analysis of advertising copy for representative brands. Environmental impact was described using the key concepts that summarize current scientific opinions on the matter. This includes the evidence of a hydrocarbon → ozone → radiation → skin cancer link but also that the evidence is inconclusive, the effects have not been systematically defined and there is a long time horizon associated with some of the effects.



The objective of anchoring the levels on this factor was to give a concise review of the issue without introducing any new information or stimulating the affective component.

Several other studies have dealt with how environmental concern manifests itself in purchase intentions and behavior. Included would be the Mazis, et al. study (1973) using reactance theory to explain a positive shift in attitudes and purchase behavior favoring higher phosphate detergents in Miami, Florida following passage of a local ordinance banning these detergents. Henion (1972) studied low phosphate detergents finding their sales increased with the mere presentation of passive information about phosphate levels. Kinnear and Taylor (1973) used INDSCAL in studying Canadian consumer panel data finding an ecological dimension in the purchase of detergents. Webster (1975) extended their study in an attempt to identify relevant socio-psychological variables prominent in the ecological market segments suggested by Kinnear and Taylor.

These previous reports have studied the environmental issue but they have not considered the use of aerosol containers or antiperspirants nor utilized additive conjoint measurement in their analysis. It seems reasonable that the conjoint model will provide useful insights about the utility trade-offs involved in evaluating products based on passive presentation of information about environmental factors. These concerns were instrumental in the present study.

Data and Method

One hundred University of Iowa marketing students were presented the 28 pairs formed by the eight hypothetical products in a complete paired-comparison design. Each subject indicated which of two antiperspirants was preferred for each pair with indifference and don't know answers not allowed. The three factors, price, effectiveness and environmental concern, were also presented to each subject in pairs asking them to select the factor that would usually be considered most important in a purchase decision. This data facilitates comparison of stated importance weights for the factors with importance weights implied by the additive conjoint model as derived from the preference data directly.

Groups of 30 to 40 subjects were presented full instructions and explanations for understanding the factor levels. The instructions, which took about 10 minutes, included an example of a paired-comparison choice in the same form as those in the study but using a different product class and different factors. The order of presentation of the paired-comparison and factor levels was randomized resulting in six different questionnaire forms for the 100 subjects.

The law of comparative judgment was used to aggregate responses over individuals to form a unidimensional scale for the eight products. This scale is referred to as the "metric" input to MONANOVA since in theory it is unique up to an affine transformation and therefore has interval scale properties. For comparative purposes the rank positions of the products on this continuum were also submitted to MONANOVA. This input is referred to as "ordinal." The ordinal input could have been inferred directly from the original proportions matrix (see Table 3) as the complete row sums without resorting to the law of comparative judgment.

Green (1974) has presented an excellent summary of experimental designs which reduce the data requirements for this type of study. Some of Green's suggestions might have been used here but would likely have confused the model testing which was facilitated by a complete factorial design. In large scale applications with more factors and levels, Green's suggestions would be used in combination with the method of paired-comparisons. Another important aspect of the design in a larger application is to use a probability sample from the market segment whose preferences are being analyzed. Inference to a population is not being stressed in this paper but is critical in applied situations.


Table 1 reports the part-worth utilities estimated by MONANOVA for each data type. The pattern of results is similar with either method. High levels are preferred on effectiveness and environment and low levels on price as expectations would dictate.



[MONANOVA standardizes the scores on each factor to have mean zero. The total variance from the factor is standardized to equal m or 3 in this application.]

A useful feature of the part-worth utilities is their suggestion of the relative importance of each factor to the overall ordering. A measure of importance is the variance of the scale values for each factor; i.e., the greater the variance of the utility values for a factor the greater its effect on total utility. Interestingly these results show that price is the least important of the three while effectiveness is most important.
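Because MONANOVA centers each factor's scores at zero, this importance measure reduces to the variance of the part-worths within each factor. A sketch with invented utilities (not the Table 1 values):

```python
import numpy as np

# Hypothetical part-worth utilities, centered at zero per factor as
# MONANOVA's standardization requires; the values are illustrative only.
part_worths = {
    "effectiveness": np.array([0.9, -0.9]),
    "environment":   np.array([0.6, -0.6]),
    "price":         np.array([-0.4, 0.4]),   # low price preferred
}

# Importance as the variance of a factor's scale values: the larger the
# spread in utilities, the more the factor moves total utility.
importance = {f: float(np.var(v)) for f, v in part_worths.items()}
ranked = sorted(importance, key=importance.get, reverse=True)
print(importance, ranked)
```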

Another indication of importance is given in Table 2 which shows that in the derived overall order low price is the first level given up by the subjects. High environment is given up next followed finally by high effectiveness.



Good arguments can be made for any of the six possible orderings (in terms of importance) of the factors for this group. For example one may have hypothesized that environmental concerns would dominate in a segment composed of students who tend to embrace social causes. As an extra check on the weights implied by MONANOVA the proportions from the three direct paired-comparisons of the factors were scaled by the law of comparative judgment. The resulting values were effectiveness (.43), environment (-.12) and price (-.31). The product moment correlation between these scores and the (metric) variances in Table 1 is .999. This result provides strong support for using the variances to measure importance. The correlation with the variances from the ordinal data is .988.

The ordinal output is useful for questions about the robustness of the MONANOVA algorithm but serves no other purpose in terms of the substantive results. Since the authors are confident that the law of comparative judgment does provide useful metric qualities to the scale, the remainder of the interpretations are based on the metric data. Table 1 illustrates that the principal effect of using only the order data is to redistribute the variance (and hence implied importance) of the factors. Table 2 shows the higher correlation of the metric output with the metric scale as would be expected. Even though zero stress was reached for both data sets, the ordinal data allowed a smoother monotone function since it imposed fewer constraints.

Table 2 exhibits an important structural feature of the final ordering--it satisfies the axiom of additive independence. [Krantz, et al. (1971, p. 301) give the following definition for additive independence. "A relation ≳ on ×i∈N Ai is independent iff, for every M ⊂ N, the ordering ≳M induced by ≳ on ×i∈M Ai for fixed choices ai ∈ Ai, i ∈ N-M, is unaffected by these choices." Here N is the set of factors {1,2,...,n}. In an operational sense what must be checked in the present experiment is, for example, to fix a level on price (say at Low) and determine the induced order on Effectiveness x Environment. The reader can check that the order is (HH, HL, LH, LL). When price is fixed at High this ordering should remain unaffected--as is the case. Similar checks must be made fixing levels on the other factors.] This axiom requires that there be no interaction between the factors. The practical implication for this study is that there is no need to resort to a more complex (polynomial) conjoint model. A single violation of additive independence would theoretically require the use of a more complex model. But in practice the issues of parsimony and interpretability counterbalance theoretical elegance--a situation which always involves subjective decisions by the model builder.
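The operational check just described -- fix a level on one factor and compare the induced orders -- mechanizes easily. The overall order below is an invented, perfectly additive ordering over the 2 x 2 x 2 profiles (not the Table 2 order), so all three checks pass.

```python
# Hypothetical overall preference order (best first) over the 2x2x2 profiles
# (effectiveness, environment, price); invented to be perfectly additive.
order = [("H","H","L"), ("H","H","H"), ("H","L","L"), ("H","L","H"),
         ("L","H","L"), ("L","H","H"), ("L","L","L"), ("L","L","H")]

def induced_order(factor, level):
    """Order induced on the other two factors with `factor` fixed at `level`."""
    sub = [p for p in order if p[factor] == level]
    # Drop the fixed coordinate, preserving the preference order.
    return [tuple(v for k, v in enumerate(p) if k != factor) for p in sub]

def independent(factor):
    """Additive independence: the induced order must not depend on the level."""
    return induced_order(factor, "H") == induced_order(factor, "L")

print([independent(f) for f in range(3)])
```

A single False here would flag an interaction between factors, which is the situation the text notes would theoretically call for a polynomial conjoint model.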

The final ordering in Table 2 is not as "clean" as it would have been if alternatives 4 and 5 were reversed. The switch would result in a clear do-loop pattern among the factors; however, the induced order on each factor is obvious. The only break in the otherwise perfect array is that low effectiveness when paired with high environment and low price is preferred to high effectiveness when combined with low environment and high price.

Reproducing Proportions

The original and model estimated proportions are shown in Table 3. A cursory examination of the table suggests that the model does an excellent job of reproducing the proportions in all but a few cells. These results require closer attention however because the chi-square statistic is sufficient to reject the model at the α = .001 level. A complete analysis and understanding of this situation requires consideration of the following.

(1) How does one interpret the overall result; i.e., what particular aspects of the model would tend to increase χ²?

(2) What patterns are revealed in a cell-by-cell analysis not revealed by the overall test?

(3) From a substantive point-of-view how does management use the test results?




Comments on Test Results

(1) Mosteller's χ² test is quite powerful against the model when applied with large samples as these results indicate. Even though the reproduced proportions matrix is very similar to the original, the model is overwhelmingly rejected. The test is more or less sensitive to each of the following assumptions:

a. A unidimensional continuum for the alternatives exists.

b. The di are normally distributed.

c. Variances for the error terms are equal.

As Mosteller (1951, p. 216) noted, the test is principally for revealing violations of unidimensionality. It is not especially sensitive to the normality assumption. This is fortunate because the assumption is primarily a computational device. Recent studies have shown that a viable choice model results from replacing the normal distribution with others; e.g. the logistic. [See the article by Rumelhart and Greeno (1971). These authors point out that using the logistic distribution in Thurstone's model is equivalent to Restle's (1961) choice model.] Finally, the test is somewhat sensitive to violations of the equal variances assumption. However with only one or two aberrant variances the main contributor to chi-square is still the incompatibility with unidimensionality.

One might conclude in this case then that the original proportions matrix makes the unidimensional solution of the additive conjoint model unreasonable. Another way of checking dimensionality in such a structure is to count the violations of weak, moderate and strong stochastic transitivity. [Weak stochastic transitivity is: P(x≳y) ≥ .5 and P(y≳z) ≥ .5 together imply P(x≳z) ≥ .5, where P(x≳y) is the proportion preferring x at least as much as y. Moderate stochastic transitivity replaces the implied condition with the minimum of P(x≳y) and P(y≳z), and strong stochastic transitivity replaces it with the maximum of these two numbers.] If a structure is multidimensional frequent violations of weak stochastic transitivity will be found. As footnote c (Table 3) indicates the violation rates are low in this structure. The authors' experience with such violations in other cases suggests that the observed proportions are quite compatible with a single scale.
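Counting violations of the three transitivity conditions is a straightforward loop over ordered triples. The matrix below is an invented one compatible with a single scale, so all counts come out zero.

```python
import numpy as np
from itertools import permutations

def transitivity_violations(P):
    """Count violations of weak, moderate, and strong stochastic transitivity
    in a proportions matrix P, where P[i, j] is the proportion preferring i."""
    n = P.shape[0]
    weak = moderate = strong = 0
    for x, y, z in permutations(range(n), 3):
        if P[x, y] >= 0.5 and P[y, z] >= 0.5:
            if P[x, z] < 0.5:                       # weak: below .5
                weak += 1
            if P[x, z] < min(P[x, y], P[y, z]):     # moderate: below the min
                moderate += 1
            if P[x, z] < max(P[x, y], P[y, z]):     # strong: below the max
                strong += 1
    return weak, moderate, strong

# Illustrative matrix consistent with a single unidimensional scale.
P = np.array([[0.5, 0.7, 0.9],
              [0.3, 0.5, 0.8],
              [0.1, 0.2, 0.5]])
print(transitivity_violations(P))
```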

(2) The main problem with Mosteller's test on this data is the presence of many proportions near one. Use of the inverse sine transform makes the test very sensitive in these cells. Although they do not represent the largest deviations of observed and expected proportions, the four cells circled in Table 3 account for nearly 75% of the observed chi-square value. For example in the 1 vs. 5 choice the difference is only .05 (1.00 - .95) for expected less observed but the associated chi-square value for the cell is 169 points, while for the 3 vs. 4 choice the error in proportion is twice as high (.69 - .59 = .10) yet the pair contributes only 36 points to chi-square. The reason is that the arc sine function is very steep between about .9 and 1.00, whereas the proportions in the (3,4) pair are near the mid-range of the domain of the transformation function.
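The steepness effect is easy to verify numerically. The sketch below takes N = 100 judgments per pair and computes each cell's contribution under the radian form of the inverse-sine transform; the exact values depend on the variance convention and fitted proportions, so they differ from the cell contributions quoted in the text, but the disproportionate weight given to an error near the boundary is the point.

```python
import numpy as np

def cell_chi2(p_obs, p_fit, N):
    """Chi-square contribution of a single pair under the inverse-sine
    transform (radians), with N judgments per pair."""
    t = np.arcsin(2.0 * p_obs - 1.0) - np.arcsin(2.0 * p_fit - 1.0)
    return N * t ** 2

# Same-order errors in proportion, in different regions of the transform.
near_one = cell_chi2(0.95, 1.00, 100)   # error of .05 near the boundary
mid      = cell_chi2(0.59, 0.69, 100)   # error of .10 near the mid-range

print(round(float(near_one), 2), round(float(mid), 2))
```

Despite being half the size in raw proportion terms, the near-boundary error contributes several times more to chi-square than the mid-range error.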

These comments are not intended to discourage use of this test, but the authors suggest that the results are easier to interpret if none of the proportions exceed about .9. Often the value of a goodness-of-fit test lies in pinpointing specific weaknesses in the model as opposed to an overall evaluation, and this function is served here.

Comments on the Substantive Issue

Although pairs (3,4) and (6,7) did not contribute significantly to χ² they represent large deviations from expectation. Pairs (3,6) and (4,5) also have large deviations and contribute significantly to χ². Three of these four pairs proved to be adjacent on the final continuum and adjacent choices represent the most difficult tradeoffs for subjects. For example 3 and 4 are respectively HLL and LHL on effectiveness, environment and price in order. Price does not play a role in the preference, and although 3 is higher in effectiveness than 4, it also represents a greater threat to the environment. The earlier results show that effectiveness is the more important factor for the group so the difference in effectiveness proves compelling. However the difference on environment restricted the observed proportion favoring 3 to .59 from an expectation of .69. It appears that when faced squarely with the tradeoff between effectiveness and environment some individuals increased the weight attached to the environment component. Similar analyses follow for the other two adjacent pairs. Invariably the difference on environment played a critical role in deviations from expectations.

The non-adjacent pair 3 vs. 6 forces the subject to choose between two alternatives which are maximally different; i.e., HLL vs. LHH respectively. One would expect the high effectiveness combined with low price to be preferred to high environment combined with high price--and the data support this expectation. But the model estimate is a much higher proportion preferring 3 (.98) than is observed (.82). The sensitive environment issue again serves to dampen the enthusiasm for a product which dominates on purely economic variables.

Managerial Implications

In consideration of the proposed methodology, a major concern of corporate management will be the bias imposed by the use of preferences (intention scores) rather than actual purchase behavior. Actual choice behavior in market tests or laboratory settings could be used to obtain the proportions. This would obviously be more costly than the preference questionnaire reported, but would yield more valid and reliable results. In this particular case the literature on environmental concern suggests that the conjoint model would fit better if the proportions matrix were based on behavior. In the application discussed, when the model was in serious error the choice usually involved a difference on the environmental issue with the estimated proportions in these cases too high. In an actual choice, perhaps between 3 and 4, the environmental issue would probably play a less critical role, with consumers willing to pay more in lip service to the issue than they will pay in dollars.

The most obvious application of this method is in new product development studies. The chief competitive models use multidimensional scaling or conjoint measurement with disaggregated data. The aggregate approach seems to have advantages over each of these. MDS is plagued by the problem of identifying the attributes contributing to perceived similarity and preference. Attribute identification is not a problem here. In fact the main contribution of conjoint measurement over MDS for new product development is to provide experimental control of the number and nature of the attributes. Secondly, using MDS to make inferences about preferences normally involves a two-stage procedure. A similarity configuration is derived and then ideal points are located in this space.

The problem with this sequential approach is that the same attributes may not be involved in both types of judgments. In addition the MDS methodology provides no goodness-of-fit test which can detect violations of this assumption leaving the procedure rather speculative.

As a data reduction procedure the method proposed here is more efficient than scaling the input of many individuals separately. That the model is falsifiable means the analysis can uncover areas in the decision process which may be especially critical to the brand share eventually achieved. This is an advantage of any statistical model over a deterministic version.

Use of this model for new product studies would follow most of the same principles already suggested in the literature. [See Shocker and Srinivasan (1974) or Urban (1975) for a good summary.] The most critical development needed is a model which translates paired-comparison proportions into brand shares, indicating how preferences are redistributed when all alternatives are offered simultaneously. There is no closed solution to this problem given the information in a paired-comparison matrix. However several alternative theories have been developed by mathematical psychologists; e.g., Corbin and Marley (1974). These require additional assumptions about the choice process including a critical one that allows for a no-buy option. For example in a follow-up study with the alternatives in this project, 100% of the respondents indicated alternative 8 was unacceptable and about 50% said alternatives 6 and 7 were unacceptable. A market share model would have to utilize this information in projecting brand shares and separating the alternatives into action and no-action classes.

A useful feature of forecasting brand share using conjoint measurement with aggregate data is that the model would also suggest relative penetration into competitive brand shares. A viable research strategy would be to embed competitive products in the alternative set along with the company's new product ideas. A new product idea would be judged not only on its projected brand share but also on its market position.


References

Ruth Corbin and A. A. Marley, "Random Utility Models with Equality: An Apparent, but Not Actual, Generalization of Random Utility Models," Journal of Mathematical Psychology, 11 (August, 1974), 274-293.

Jean-Claude Falmagne, "Random Conjoint Measurement and Loudness Summation," Psychological Review, 83 (January, 1976), 65-79.

Paul E. Green, "On the Design of Choice Experiments Involving Multifactor Alternatives," The Journal of Consumer Research, 1 (September, 1974), 65-79.

Paul E. Green and Vithala R. Rao, "Conjoint Measurement for Quantifying Judgmental Data," Journal of Marketing Research, 8 (August, 1971), 355-363.

Paul E. Green and Yoram Wind, "New Way to Measure Consumers' Judgments," Harvard Business Review, (July-August, 1975), 107-117.

Karl Henion, "Effect of Ecologically Relevant Information on Detergent Sales," Journal of Marketing Research, Vol. IX (February, 1972), 10-14.

Richard M. Johnson, "Trade-off Analysis of Consumer Values," Journal of Marketing Research, (May, 1974), 121-127.

Thomas C. Kinnear and James R. Taylor, "The Effect of Ecological Concern on Brand Perceptions," Journal of Marketing Research, Vol. X, (May, 1973), 191-197.

David H. Krantz, R. Duncan Luce, Patrick Suppes and Amos Tversky, Foundations of Measurement: Volume I Additive and Polynomial Representations (New York and London: Academic Press, 1971).

Joseph B. Kruskal, "MONANOVA: A Fortran IV Program for Monotone Analysis of Variance," Marketing Science Institute working paper.

R. Duncan Luce, Individual Choice Behavior: A Theoretical Analysis (New York: John Wiley and Sons, 1959).

Michael B. Mazis, Robert B. Settle and Dennis C. Leslie, "Elimination of Phosphate Detergents and Psychological Reactance," Journal of Marketing Research, Vol. X, (November, 1973), 390-395.

Frederick Mosteller, "Remarks on the Method of Paired Comparisons: III. A Test of Significance for Paired-Comparisons when Equal Standard Deviations and Equal Correlations are Assumed," Psychometrika, 16 (June, 1951), 207-218.

Frank Restle, Psychology of Judgment and Choice (New York: John Wiley and Sons, 1961).

Donald L. Rumelhart and James G. Greeno, "Similarity Between Stimuli: An Experimental Test of the Luce and Restle Choice Models," Journal of Mathematical Psychology, 8 (August, 1971), 370-381.

Allen Shocker and V. Srinivasan, "A Consumer-Based Methodology for the Identification of New Product Ideas," Management Science, 20 (February, 1974), 921-937.

Warren S. Torgerson, Theory and Methods of Scaling (New York: John Wiley and Sons, 1958).

Glen L. Urban, "PERCEPTOR: A Model for Product Positioning," Management Science (April 1975), 858-871.

Frederick E. Webster, Jr., "Determining the Characteristics of the Socially Conscious Consumer," Journal of Consumer Research, Vol. 2 (December, 1975), 188-197.


