Halo Effects in Brand Belief Measurement: Implications For Attitude Model Development


William L. Wilkie, John M. McCann, and David J. Reibstein (1974), "Halo Effects in Brand Belief Measurement: Implications For Attitude Model Development", in NA - Advances in Consumer Research Volume 01, eds. Scott Ward and Peter Wright, Ann Arbor, MI: Association for Consumer Research, Pages: 280-290.

Advances in Consumer Research Volume 1, 1974    Pages 280-290


William L. Wilkie

John M. McCann

David J. Reibstein

[William L. Wilkie is Assistant Professor of Industrial Administration, Purdue University, on leave at the Marketing Science Institute and Harvard University, 1973-74. John M. McCann is Assistant Professor of Marketing, Cornell University. David J. Reibstein is a Doctoral Candidate in Marketing at the Krannert School, Purdue University.]

Multi-attribute attitudinal models have recently received considerable attention in consumer research. It is now reasonably well established that these models can provide useful predictions of brand affect, preference, and choice. Research interest is shifting toward model refinements in terms of conceptualization, measurement, and analytical methods (Wilkie and Pessemier, 1973). One basic issue concerns the measurement of brand beliefs (also termed instrumentalities or expectancies) and their impacts on the model testing and development process. This paper reports a study of halo effects in commonly used belief measurement techniques. Results indicate that resolution of measurement issues is likely to require prior attention to the general question of appropriate criteria for model performance.

The basic (linear compensatory, composition) multi-attribute attitude model used in marketing is here defined as:

Ajk = Σi Iik Bijk

where:



Ajk = attitude toward brand j for consumer k

Iik = importance of attribute i for consumer k

Bijk = consumer k's belief as to the extent to which brand j offers satisfaction on attribute i

Although differing models and terms have been advanced, the basic purpose of all proposals is to gain understanding of brand predisposition through explicit measurement and suitable weighting of brand beliefs on attributes thought to determine consumer choice processes. It is in this regard that halo effects in brand belief measures become a significant issue.
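As a concrete illustration of the weighted-sum computation defined above, the sketch below scores one brand for one consumer; all ratings are hypothetical, not data from the study:

```python
# Sketch of the basic linear compensatory model: Ajk = sum over i of Iik * Bijk.
# All numbers below are hypothetical illustrations.

def attitude_score(importances, beliefs):
    """Weighted sum of one consumer's belief ratings for one brand."""
    return sum(i * b for i, b in zip(importances, beliefs))

# Six attribute importances (Iik) and belief ratings (Bijk) for one brand,
# ordered: taste, decay prevention, mouth freshening, whitening, price, breath.
I_k  = [6, 5, 4, 3, 2, 4]
B_jk = [5, 6, 4, 3, 2, 5]

A_jk = attitude_score(I_k, B_jk)   # 30 + 30 + 16 + 9 + 4 + 20 = 109
```

Repeating the computation over all brands yields each consumer's predicted preference ordering.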


The halo effect has long been recognized as a suppressor of variation in trait rating scales. Thorndike's (1920) classic work with psychological ratings warned of a rater's constant tendency to rate an individual high, medium, or low on many traits because he perceives the individual to be high or low on some particular trait. An alternative, Gestalt-like, conceptualization suggests that the judge utilizes his overall evaluation of the individual or object as a guide for consistent trait ratings. For the purposes of this paper we shall not attempt to distinguish between causal notions, but rather recognize that either would be manifested by little or no differences in a respondent's belief ratings between attributes for a given brand. [This is not to exclude other causes of low variation across attributes.] The general measure of variation in belief ratings across attributes (but within individual and brand) is here termed "dispersion."

Marketing applications of the multi-attribute model rely upon dispersion in belief measures in order to assess a given brand's competitive strengths and weaknesses. If, for example, Crest toothpaste were to receive a "5" rating on every attribute, a simpler "how do you like Crest" measure would presumably serve both predictive and diagnostic functions far more efficiently than utilization of the multi-attribute model. Halo effects have also been raised in a public policy context in the FTC's hearing on Wonder Bread's "builds bodies 12 ways" advertising campaign. Here the issue concerned interpretation of increased belief ratings for nutrition given simultaneous increases in other brand belief ratings (Cohen, 1973).

Several marketing studies on the model have found strong evidence of halo effects. Bass and Talarzyk (1971) report a "completely consistent pattern of more favorable belief ratings given by respondents who prefer a particular brand" in a study covering fifty such measurements for toothpaste and mouthwash brands and attributes. One particularly interesting finding was Listerine receiving the best mean rating for taste by its users, while being ranked last on taste by users of all other brands! Cohen and Ahtola (1971) report thirty consistent rating patterns in their toothpaste study. Halo effects are also reported by Lehmann (1971) in a study of television shows and in a soft drink study by Bass, Pessemier, and Lehmann (1972). These observations have typically been accepted as reflections of "cognitive consistency" operators in the evaluative process. Cohen and Houston (1972) extend this explanation by positing a "cognitive reevaluation" process for brand loyal consumers. It is possible, of course, that these operators might account for all of the observed consistencies; the present study investigates measurement factors as an alternative causal agent.

In a measurement sense, low levels of dispersion may be good or bad as a function of their causes. If reflective of the true state of respondent cognitions, low dispersions are valuable information. They are misleading, however, if they are a function of lack of knowledge of a particular brand or some attributes, if several of the attributes assumed to be independent by the investigator are instead interrelated in a consumer's mind, or if caused by experimental demand conditions. While familiar and polar brands are less susceptible to some of the latter agents, remaining brands are not. Thus, in addition to the halo effect, it is useful to consider possible causes for low dispersion in the mid-rated brands which are likely to be less familiar to the respondent. For both classes of low dispersion it is important to determine whether problems are amenable to improvement through differential measurement techniques.

In addition to the measurement issue per se, low levels of dispersion may have practical impacts on model performance, tests of model structure, and generalizations concerning consumer attitude structures. Conceptually, low Bijk dispersion results in a "flat" brand profile across attributes; we expected that this flat profile would provide (1) clearer brand versus brand differences (i.e., better predictive performance of the model), (2) smaller differences between attributes within each brand (i.e., weaker diagnostic performance of the model), (3) less need for differential attribute weightings (i.e., importance weights not required in the model structure), and (4) more difficulty in comparing individual respondents who may have systematically used different portions of the belief rating scale (e.g., requiring analytical modifications such as normalization of Bijk and Iik for analyses across respondents). In order to investigate these possibilities an experiment was conducted that attempted to generate high and low dispersion levels in belief ratings. Obtained ratings were then used to test the above hypotheses.

Method


Requisite model measures were obtained on the seven major toothpaste brands and six product attributes found useful in prior studies: taste/flavor, decay prevention, mouth freshening, whitening teeth, price, and breath freshening. Respondents were 186 graduate students who were randomly assigned to one of three questionnaire conditions. Each questionnaire contained an identical number and type of measures, but differed slightly in the order of measures and most particularly in the instructions for the brand belief rating task. [See Wilkie and McCann (1972) for details of instructions and measures.]

Condition 1 ("High Halo") was intended to encourage the respondent to utilize overall brand attitudes in his belief ratings and thus to produce low dispersion between attributes. This tendency was fostered by rating all attributes for one brand before moving to the next brand, and by warm-up instructions stressing brand "blends" and derogating consumers' abilities to independently evaluate attributes within a brand. Condition 2 ("Low Halo") was intended to discourage halo responses and thus produce higher dispersion across attributes. Brands were competitively rated within attribute and warm-up instructions stressed realistic brand problems in offering superior benefits on every dimension, derogated consumers who believe "their" brand to be superior on all dimensions, and indicated an interest in objective diagnosis of competitive strengths and weaknesses. A third condition with minimal instructions was also included to represent a typical measuring instrument as used in prior studies.

Conditions 1 and 2 were utilized for hypothesis testing. Random assignment to condition is assumed to control for all effects other than questionnaire. Five specific hypotheses were tested:

H1 = Observed dispersion in belief ratings is a function of the measuring instrument. Condition 2 dispersion will be greater than that observed in Condition 1.

H2 = The model's predictive performance is a function of the measuring instrument. Predictions of brand preference from Condition 1 will be more accurate than those from Condition 2.

H3 = The model's diagnostic performance is a function of the measuring instrument. The number of attributes inferred to exist in consumers' attitudinal structure will be greater in Condition 2 than in Condition 1.

H4 = Structure of the basic model (i.e., whether or not importance weights should be included) is a function of measuring instrument. Importance weights will contribute to both prediction and diagnosis to a greater extent in Condition 2 than in Condition 1.

H5 = Contributions of analytical modifications are a function of measuring instrument. Normalization of Bijk and Iik data input to cross section analysis will have a greater effect in Condition 1 than in Condition 2.

Results


Observed Dispersion in Brand Beliefs

Hypothesis 1 postulated that observed consistencies in belief ratings are not solely a reflection of the "true" beliefs of respondents; some portion is due to the elicitation process. Testing requires an operational measure of dispersion and comparison between conditions. Dispersion was operationalized as:

EQUATION, where Djk = consumer k's dispersion score for brand j.

Mean dispersion levels were compared, by brand, for respondents in Conditions 1 and 2. Results are summarized in Table 1.
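The dispersion equation itself did not survive in this excerpt, so the sketch below substitutes one plausible operationalization -- the sum of absolute deviations of a brand's attribute ratings from their mean -- purely as an assumption for illustration:

```python
def dispersion(beliefs):
    """Djk sketch: sum of absolute deviations from the brand's mean rating.
    (Assumed form -- the paper's exact formula is omitted from this excerpt.)"""
    mean = sum(beliefs) / len(beliefs)
    return sum(abs(b - mean) for b in beliefs)

flat   = dispersion([5, 5, 5, 5, 5, 5])   # halo-like profile -> 0.0
varied = dispersion([6, 1, 6, 1, 6, 1])   # differentiated profile -> 15.0
```

Under any such operationalization, a perfectly flat (halo-like) profile scores zero, and larger between-attribute differences score higher.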



Results support the hypothesis. Systematic differences in dispersion are evident for every brand between the two conditions; average total dispersion across brands (24.3 v. 34.1) summarizes these differences. Minimal instructions (Condition 3--not shown) resulted in an average dispersion of 31.2, significantly different from both high and low halo conditions. Differences are also found in terms of each respondent's most preferred brand. It should be noted that dispersion scores are higher here than for the average of remaining brands, indicating that low brand familiarity contributes to low dispersion.

These results indicate that cognitive consistency operators are not the only cause of low dispersion observed in multi-attribute attitude research, and that different measurement techniques can increase observed dispersion if desired. Whether an investigator should wish to increase dispersion, however, depends upon its impact on model performance--the subject of remaining tests in this paper.

Predictive Performance

This is an especially important test in that prior marketing applications have uniformly invoked predictive criteria to judge model performance for purposes of external validation, internal model structure analyses, and comparisons against competitive models. ["Prediction" as utilized in most marketing studies on the model is restricted to static systems and could be viewed as "retrodiction." See Zaltman, et al. (1973, pp. 146-172) for examination of this issue.] Our hypothesis that low dispersion would lead to better predictions than high dispersion was based upon the dual notions of affect generalization and model constraints. Low dispersion obviously offers greater opportunity for a concentration of affect (presumably captured in the dependent variable) across attributes. To the extent that salient attributes may be missing from the set presented to respondents or underweighted in model calculations, high dispersion is unlikely to "recover" from affect generalization. Table 2 reports two tests of the model's ability to predict each respondent's stated brand preferences.



As hypothesized, the basic model was more successful with data from Condition 1 (low dispersion) than Condition 2 (high dispersion) in predicting each respondent's most preferred brand. The Spearman rho statistic, which accounts for accuracy in predictions of the entire preference ranking, also revealed stronger predictions with Condition 1 data.

Invoking only the predictive criterion, then, would seem to indicate that low dispersion (Condition 1) is the more desirable measurement approach. In terms of preference prediction, however, there are alternatives to utilization of the multi-attribute model (Kraft, Granbois, and Summers, 1973). One alternative, a single "overall affect" item, yielded 78% correct predictions of the most preferred brand and an average Spearman rho of .88, both substantially higher than the model predictions in either condition. Given this result, it appears clear that "prediction" cannot and should not be the only rationale for the multi-attribute attitude model in marketing applications. The model does, however, offer a potential for diagnosis of consumer attitudinal structure which is not available in single item measures such as overall affect. The following section discusses results in this context.
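The two predictive tests -- a hit rate on the most preferred brand and a Spearman rho over the full preference ranking -- can be mimicked in a short sketch (all numbers hypothetical, not from the study):

```python
def ranks_from_scores(scores):
    """Rank brands by model score (highest score -> rank 1; ties broken by index)."""
    order = sorted(range(len(scores)), key=lambda j: -scores[j])
    ranks = [0] * len(scores)
    for pos, j in enumerate(order, start=1):
        ranks[j] = pos
    return ranks

def spearman_rho(rank_x, rank_y):
    """Spearman rank correlation for two tie-free rankings of n objects."""
    n = len(rank_x)
    d2 = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
    return 1 - 6 * d2 / (n * (n * n - 1))

model_scores = [109, 95, 88, 120, 70, 60, 99]   # hypothetical Ajk for 7 brands
stated_ranks = [2, 4, 5, 1, 6, 7, 3]            # 1 = most preferred

predicted_ranks = ranks_from_scores(model_scores)
top_brand_hit = predicted_ranks.index(1) == stated_ranks.index(1)
rho = spearman_rho(predicted_ranks, stated_ranks)
```

Aggregating the hit indicator and rho across respondents gives the condition-level figures of the kind reported in Table 2.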

Diagnostic Performance

While predictive criteria have consistently been utilized in model operationalizations, diagnostic criteria have typically been advanced in conceptual discussions as the major rationale for interest in the model. A particularly cogent argument has been presented by Cohen (1972). Unfortunately a lack of operationalized diagnostic criteria in past research provided no guidance for the present study. General requirements for diagnosis would seem to involve the number and nature of attributes in consumers' attitudinal structures, with increased diagnostic potential associated with increases in the number of significant attributes. We operationalized this notion with both individual-level and cross section analytical techniques.

A staged entry method developed by Wilkie and Weinreich (1972) was used to ascertain, for each individual, the number of attributes needed to maximize predictions of his brand preference rankings. A "determinism" criterion (Myers and Alpert, 1968) provided special orders for attribute entry for each respondent. A Spearman rank correlation between an individual's stated preference ranks and model-predicted ranks was calculated after the first attribute had been entered, then the second, and so on until all six attributes were represented in the model. The optimal number of attributes for an individual was chosen as the lowest level that provided maximum correlations.
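A simplified sketch of that staged-entry logic (an abstraction for illustration, not the Wilkie-Weinreich procedure itself): attributes enter the beliefs-only model one at a time in a pre-specified order, rho is recomputed at each stage, and the smallest attribute count achieving the maximum rho is kept.

```python
def ranks_from_scores(scores):
    order = sorted(range(len(scores)), key=lambda j: -scores[j])
    ranks = [0] * len(scores)
    for pos, j in enumerate(order, start=1):
        ranks[j] = pos
    return ranks

def spearman_rho(a, b):
    n = len(a)
    return 1 - 6 * sum((x - y) ** 2 for x, y in zip(a, b)) / (n * (n * n - 1))

def optimal_attribute_count(beliefs, stated_ranks, entry_order):
    """beliefs[j][i]: brand j's rating on attribute i.  Returns (k, rho) where
    k is the lowest number of entered attributes reaching the maximum rho."""
    best_k, best_rho = 0, -2.0
    for k in range(1, len(entry_order) + 1):
        scores = [sum(row[i] for i in entry_order[:k]) for row in beliefs]
        rho = spearman_rho(ranks_from_scores(scores), stated_ranks)
        if rho > best_rho:            # strict '>' keeps the lowest k on ties
            best_k, best_rho = k, rho
    return best_k, best_rho

# Hypothetical respondent: 3 brands x 2 attributes; attribute 0 alone
# already reproduces the stated ranking, so one attribute suffices.
k, rho = optimal_attribute_count([[6, 1], [4, 3], [2, 5]], [1, 2, 3], [0, 1])
```

The distribution of k across respondents is then a simple index of attitudinal dimensionality.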

Disaggregated multiple regressions (e.g., Sheth and Talarzyk, 1972) were run for each brand to assess diagnostic potential via cross section analyses. The data for all respondents in a condition were pooled and stated preference or overall affect for a given brand was regressed on the separate normalized attribute ratings for that brand. The number of statistically significant coefficients is taken as a measure of the dimensionality of attitude structure for a given brand. Table 3 summarizes results of these diagnostic analyses by reporting the range and mean number of significant attributes for each test in each experimental condition. Recall that "individual-level" reflects tests of the model's ability to predict across brands for each respondent, while "cross section" analysis runs across individuals for each of the seven brands.



Results are generally supportive of the hypothesis that Condition 2 should provide better diagnostic potential than Condition 1. Both types of analysis revealed substantial variation in dimensionality. Many individuals in the first analysis required only one attribute while others required up to all six; Condition 2 did, however, reveal more attributes than Condition 1. Significance tests of mean differences for the cross sectional regressions are less appropriate due to potential problems of multicollinearity and the small "sample" of only seven brand regressions for each test. We therefore analyzed several models in search of systematic results on the hypothesis. As shown in Table 3, there was a consistent tendency for both range and mean increases from Condition 2 over Condition 1. It may also be noted that use of "overall affect" as the dependent variable in place of brand preference increased the number of significant attributes in both conditions, a result suggested by Sheth (1970). A third result--increases with the inclusion of importance weights--will be discussed in a following section.

The summary conclusion from these analyses is that diagnostic potential of the multi-attribute model is a function of measurement. More specifically, higher dispersion in Bijk (Condition 2) increases diagnostic potential. Our concern has been realized; Condition 1 provides superior predictive performance while Condition 2 provides superior diagnostic performance. Which measurement approach is preferable? Turning the issue slightly, which criterion--prediction or diagnosis--is more appropriate?

While this may not seem to be a serious question in the context of this experiment, it is fundamental to model development. If the two conditions are viewed as competing models, for example, marketers would be quite interested in choosing between them. The differences between these conditions are minor in comparison to distinctions between the original Fishbein model (1967), Cohen's expectancy-value approach (Cohen, Fishbein, and Ahtola, 1972), and the basic model of this paper. In addition to the issue of choosing between competing model approaches, moreover, the question of appropriate criteria may also have implications for issues of internal model structure and appropriate methods of analysis.

Inclusion of Importance Weights

This model structure issue has received far more attention in recent research than any other. A number of legitimate points have been raised in the controversy. [See, for example, Sheth and Talarzyk (1972), Beckwith and Lehmann (1973), Bass and Wilkie (1973), and Bettman (1973).] The essential question is whether or not the basic model requires a weighting variable for the inter-attribute belief ratings. In other words, does a simpler beliefs-only model "outperform" the basic belief times importance formulation? Our coverage of this question here encompasses two issues: (1) do halo effects or low dispersion reduce the efficacy of importance weights, and (2) which criterion--prediction or diagnosis--is more appropriate to judge the significance of these weights?

Our fourth hypothesis proposed that low dispersion in belief ratings would decrease opportunities for importance ratings to affect model output. It should be noted that this is an individual-level notion, based upon the obvious conclusion that, if no dispersion was present within each brand, model predictions would be unchanged by any set of importance weights. Analyses were again conducted both within-individual and with cross sectional regressions. Results in Table 4 are organized by predictive and diagnostic criteria in turn, in each instance reporting the magnitude and direction of changes associated with the inclusion of importance weights as compared to the simpler beliefs-only model form.
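That individual-level point -- a zero-dispersion profile leaves brand orderings untouched by any positive importance weights -- can be verified directly. With flat profiles, each brand's weighted score is just its constant rating times the sum of the weights, so the ordering is preserved (hypothetical numbers below):

```python
def beliefs_only(beliefs):
    """Unweighted (beliefs-only) model score for one brand."""
    return sum(beliefs)

def weighted(beliefs, importances):
    """Belief times importance model score for one brand."""
    return sum(b * i for b, i in zip(beliefs, importances))

# Two brands with zero-dispersion (flat) belief profiles.
brand_a, brand_b = [5, 5, 5], [3, 3, 3]
weights = [1, 4, 2]

unweighted_order = beliefs_only(brand_a) > beliefs_only(brand_b)            # True
weighted_order   = weighted(brand_a, weights) > weighted(brand_b, weights)  # True
```

Only when belief profiles vary across attributes can importance weights re-order the brands.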



The hypothesis concerning superior performance of importance weights in Condition 2 is clearly rejected by reading across rows. Subsidiary analyses at the individual level (not shown) revealed that higher dispersion data are more influenced by importance weights, but that many individuals are affected negatively. The overall change in rho is minor in both conditions (+.03 and +.01). Cross section predictions mirror these results, with Condition 2 registering a slight decrease in r2. Cross section diagnostics show significant improvements in both conditions (+170% and +55%), but the greater increase in Condition 1 goes against the hypothesis. We conclude, therefore, that halo effects in belief ratings are not a useful explanation for importance weight results in prior studies.

Reading down each column of Table 4, however, highlights the issue of predictive versus diagnostic criteria. Recall that the basic issue is whether the inclusion of importance weights leads to better model "performance." Within each condition comparative performance is assessed by measures of the difference between the beliefs times importance model and the beliefs-only model. Invoking the predictive criterion, no significant improvements (i.e., +.03, +.01, +.01, -.01) with importance weights are found. These results are consistent between conditions and for both individual and cross-section analyses. It could easily be concluded, therefore, that importance weights do not add to model performance and that the more parsimonious beliefs-only model is preferable.

Turning to the diagnostic criterion, however, it is evident that both conditions benefitted substantially from the inclusion of importance weights. The beliefs-only model produced an average of 1.0 and 2.0 significant attributes in Conditions 1 and 2. Beliefs times importance provided 2.7 and 3.1 respectively, significant increases in both conditions. These results indicate that importance weights provide significant benefits to model performance and that the simpler beliefs-only model is clearly inferior.

Normalization of Data Inputs


Our fifth hypothesis concerns analytical modes in model development. This issue was originally raised by Bass and Wilkie (1973) in the context of the importance weight controversy. The contention was that normalization of both belief ratings and importance weights would assist in overcoming inherent homogeneity assumptions of cross sectional models by removing response scale differences between individuals. Our hypothesis--that Condition 1 should benefit more from normalization than Condition 2--followed from the assumption that response scale biases are more likely to be evident when a smaller scale sector is utilized. This hypothesis is relevant only for cross sectional analyses. Tests were conducted by brand-level disaggregated multiple regressions of preference on the basic model inputs, first with raw data inputs, then with normalized inputs. Overall brand affect was then substituted as the dependent variable and similar regressions run. Table 5 summarizes results by condition for predictive and diagnostic criteria. In each instance the cell entry reflects the extent to which model performance improves with normalized data over raw data inputs.
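As one hedged illustration of the idea (the scheme below -- dividing each rating by the respondent's sum across attributes -- is an assumption for exposition; see Bass and Wilkie, 1973, for the procedure actually used), normalization removes between-respondent response-scale differences:

```python
def normalize(ratings):
    """Express each rating as a share of the respondent's total across attributes.
    (Assumed scheme, chosen only to illustrate removal of scale differences.)"""
    total = sum(ratings)
    return [r / total for r in ratings]

# Two respondents with the same relative profile but different scale use:
heavy_user_of_scale = [2, 4, 6]
light_user_of_scale = [1, 2, 3]

# After normalization the profiles coincide: [1/6, 1/3, 1/2] in both cases.
same_profile = normalize(heavy_user_of_scale) == normalize(light_user_of_scale)
```

Pooling such normalized inputs across respondents is what allows the cross sectional regressions to treat individuals homogeneously.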



The expectation that Condition 1's low dispersions would benefit more from normalization is not statistically supported in reading across rows of Table 5, although directional differences consistently favor Condition 1. The more interesting results concern regularities between conditions in terms of the impacts found when data are normalized. Predictions of brand preference increase substantially with normalization. Predictions of brand affect, which begin at a higher level with the raw data inputs, are less affected. Diagnostic potential is greatly increased for both variables in both conditions. Affect, which again begins at a higher level, is here affected to a greater degree. We find no conflict between prediction and diagnosis on this issue; either criterion leads to the conclusion that this form of cross section analysis benefits from normalization of beliefs and importance weights to be entered into the multi-attribute model.

Discussion


Research on multi-attribute models has now shifted toward specific attempts to refine model conceptualization and structure. Competitive composition (e.g., conjunctive, disjunctive, lexicographic) and decomposition approaches (e.g., nonmetric scaling) have been advanced (Day, 1972; Wright, 1973). These activities represent the beginnings of scientific progress in one subfield of consumer behavior research. Results of the study reported in this paper are an indication that model refinement is a more complex process than might be expected. The search for conclusions and solutions requires control over data inputs--concepts and measurement--and agreement upon fundamental criteria by which to assess model performance.

The two experimental conditions employed in this study differed only in item order and belief rating instructions, yet led to substantial differences in observed brand belief rating patterns. Analyses of model performance showed dependence upon measurement; predictions were higher with low dispersion in belief ratings while diagnostic potential was higher with higher levels of dispersion. No statistically significant differences between conditions were found for inclusion of importance weights or normalization of ratings used in cross section analysis. This suggests that slight measurement differences may not severely impair internal model refinement.

The major impediment to progress in model development is the present lack of agreement upon criteria for model performance. "Prediction," in the sense of associative relationships between model outputs and concurrent measures of affect, preference, or choice, has been consistently employed in prior studies. Weaknesses of prediction as a sole criterion include both the probable existence of better predictors than the model (e.g., last period choice, single overall affect items) and an observed tendency to improve as "belief" measures incorporate more affect. This tendency can have several undesirable consequences. Generalizations concerning the complexity of consumer attitudes are driven toward simpler structures. With respect to alternative model formulations, those which tend toward cognitive constructs (e.g., Fishbein's model, 1967) are likely to consistently "lose" on predictive criteria. A careful comparative study by Ahtola (1973), for example, clearly demonstrates this point.

"Diagnosis" has often been advanced as a basic purpose of the model, but has not received rigorous conceptual or operational development as a criterion for model performance. It is clear that diagnosis requires attention by users of multi-attribute models. In this paper we have operationalized diagnosis as simply the number of model attributes found useful in maximizing model predictions. This is a limited approach in need of future improvements.

If the two experimental conditions are viewed as competing (albeit highly similar) model approaches, predictive and diagnostic criteria conflict as to which is chosen. Resolution of the importance weight question would also differ depending upon the criterion applied. However, as demonstrated in our normalization analyses, prediction and diagnosis need not always conflict.

In addition to conceptual advances, future model development requires a careful consideration of the criterion issue raised in this article. We do not advocate that prediction be ignored, but do urge that diagnosis be further developed. We therefore propose that all future research on multi-attribute models invoke and report both predictive and diagnostic criteria. Differences in results signal a need for cautious interpretations. Criterial agreement, however, will add confidence in genuine model developments.

References


Ahtola, O. T. An investigation of cognitive structure within expectancy-value response models. Doctoral dissertation, University of Illinois, 1973.

Bass, F. M., Pessemier, E. A., and Lehmann, D. R. An experimental study of relationships between attitudes, brand preference, and choice. Behavioral Science, 1972, 17, 532-41.

Bass, F. M. and Talarzyk, W. W. Using attitude to predict individual brand preference. Occasional Papers in Advertising, 1971, 4, 63-72.

Bass, F. M. and Wilkie, W. L. A comparative analysis of attitudinal predictions of brand preference. Journal of Marketing Research, 1973, 10 , 262-9.

Beckwith, N. E. and Lehmann, D. R. The importance of differential weights in multiple attribute models of consumer attitude. Journal of Marketing Research, 1973, 10, 141-5.

Bettman, J. R. To add importance or not to add importance: that is the question. Paper presented to the Fourth Annual Conference, Association for Consumer Research, Boston, November 1973.

Cohen, J. B. Toward an integrated use of expectancy-value attitude models. Paper presented at the ACR/AMA Workshop, Chicago, November 1972.

Cohen, J. B. and Ahtola, O. T. An expectancy X value analysis of the relationship between consumer attitudes and behavior. Proceedings, Second Annual Conference, Association for Consumer Research, 1971, 344-64.

Cohen, J. B., Fishbein, M., and Ahtola, O. T. The nature and uses of expectancy-value models in consumer attitude research. Journal of Marketing Research, 1972, 9, 456-60.

Cohen, J. B. and Houston, M. Cognitive consequences of brand loyalty. Journal of Marketing Research, 1972, 9, 97-9.

Cohen, S. E. Wonder bread decision stalls FTC drive for corrective ads. Advertising Age, January 1, 1973, p. 26.

Day, G. S. Evaluating models of attitude structure. Journal of Marketing Research, 1972, 9, 279-86.

Fishbein, M. A behavior theory approach to the relations between beliefs about an object and the attitude toward the object. In M. Fishbein, ed., Readings in Attitude Theory and Measurement. New York, Wiley, 1967, 389-99.

Kraft, F. B., Granbois, D. E., and Summers, J. O. Brand evaluation and brand choice: a longitudinal study. Journal of Marketing Research, 1973, 10, 235-41.

Lehmann, D. R. Television show preference: application of a choice model. Journal of Marketing Research, 1971, 8, 47-55.

Myers, J. H. and Alpert, M. I. Determinant buying attitudes: meaning and measurement. Journal of Marketing, 1968, 32, 13-20.

Sheth, J. N. Attitude as a function of evaluative beliefs. Paper presented at the AMA Conference Workshop, Columbus, 1969.

Sheth, J. N. and Talarzyk, W. W. Perceived instrumentality and value importance as determinants of attitudes. Journal of Marketing Research, 1972, 9, 6-9.

Thorndike, E. L. A constant error in psychological ratings. Journal of Applied Psychology, 1920, 4, 25-9.

Wilkie, W. L. and McCann, J. M. The halo effect and related issues in multiattribute models. Institute paper No. 377, Purdue University, 1972.

Wilkie, W. L. and Pessemier, E. A. Issues in marketing's use of multi-attribute attitude models. Journal of Marketing Research, 1973, 10, 428-41.

Wilkie, W. L. and Weinreich, R. P. Effects of the number and type of attributes included in an attitude model: more is not better. Proceedings, Third Annual Conference, Association for Consumer Research, 1972, 325-40.

Wright, P. L. Analyzing consumer judgment strategies: paradigm, pressures and priorities. Faculty working paper No. 94, University of Illinois, 1973.

Zaltman, G., Pinson, C., and Angelmar, R. Metatheory and consumer research. New York: Holt, Rinehart and Winston, 1973, 146-72.


