Information Integration: an Information Processing Perspective


Joel B. Cohen, Paul W. Miniard, and Peter R. Dickson (1980), "Information Integration: an Information Processing Perspective", in NA - Advances in Consumer Research Volume 07, eds. Jerry C. Olson, Ann Arbor, MI: Association for Consumer Research, Pages: 161-170.



Joel B. Cohen, University of Florida

Paul W. Miniard, Ohio State University

Peter R. Dickson, University of Waikato

[We wish to acknowledge the insightful and helpful comments of John G. Lynch, Jr. on an earlier draft of this manuscript.]

One of the most pivotal psychological processes involved in belief formation is the act of combining separate and often diverse pieces of information into a coherent judgment, say about a product or person. The judgment may be along a descriptive dimension (e.g., how durable is the product, how friendly is the person) or it may be a summary evaluation (e.g., an attitude toward the product or person). Sometimes referred to as the process of "impression formation," research in this area has been dominated for more than 15 years by the "information integration theory" approach of Norman Anderson and his associates. This theory has been responsible for a vast and impressive literature based upon some of the most careful and finely tuned empirical research in the entire human information processing field. In recent years, Anderson's information integration paradigm and functional measurement procedures have been extended into a virtually unending array of topical domains involving human judgments, ranging from psychophysics (e.g., magnitude estimation) and decision theory to dating choices and, of course, consumer attitudes (see Anderson and Shanteau, 1977 for examples).

The attitude area is, of course, no stranger to models which relate evaluative judgments to their presumed components (e.g., Osgood et al., 1957; Fishbein, 1963; Fishbein and Ajzen, 1975), and each of these must specify an integration function. Research using the Fishbein model, though, tends to be more outcome than process-oriented and tends to concern itself more with what elements are important in a mean-ends analysis than validating the integrating mechanism itself. Implicit in this research, however, is a specific combinatorial rule (i.e., weighted summation) which is used to derive an overall impression (i.e., attitude) based on the degree of perceived attribute-object association. The accuracy of this particular summative integration model has been studied by Bettman, Capon and Lutz (1975a, b).

In a broader sense, summative models and Anderson's averaging model have been viewed as the principal rivals among mathematical models of impression formation. The thrust of this paper is that the search for simplified and mathematically convenient and predictive combinatorial rules has so preoccupied the impression formation area that we are not as far along in our understanding of this phase of information processing as we might be. Needless to say, this view is not universal. There are those who (building on Anderson's formulation) maintain that, "Much of what we know about individual differences in social judgment can be included in an information-integration approach to judgment" (Kaplan, 1975, p. 162) and, "the evidence is sufficiently strong to suggest that much of human judgment and decision is governed by a general cognitive algebra" (Anderson, 1976, p. 690).

Here, as in other research areas, researchers often have different goals and operate at different levels: one person's "prediction" is another person's "understanding." Anderson forcefully (and we feel correctly) argues that linear models that are accepted merely because they fit the data well are a barrier to understanding: "When the linear model seems to do so well at prediction, it is hard to avoid thinking and implying that in some way it has deeper psychological truth... the methods that work so well for practical prediction can be extremely misleading in the search for understanding" (Anderson and Shanteau, 1977, p. 1168).

This concern with the implications of correlation as an index of fit, while real, resonates across a broader set of modeling issues involving accuracy of prediction as the dominant criterion. It is with this criticism as a backdrop, then, that we examine the adequacy of mathematical models through which information integration processes have been conceptualized. Wyer (1974, p. 268) suggests that two criteria are appropriate for evaluating such models: (1) accurate prediction of overall judgment, and (2) whether the mathematical procedures used to predict these overall judgments correspond to the cognitive processes to which the model theoretically pertains. The decision as to which criterion will be of primary importance is quite significant since it will typically dictate quite dissimilar approaches to model development and testing.

We submit that the present group of mathematical models has focused almost entirely on the first criterion, and usually under highly artificial and unrealistic task settings. Fitting such models to the data has become the dominant thrust, as opposed to linking information integration to the larger information processing goals and activities of an individual. We shall discuss these specific concerns at some length. Before doing so, it may be helpful to review the characteristics of the most frequently advanced models.


To illustrate the two basic types of additive models, consider a setting in which one is to make some type of judgment on the basis of n informational stimuli. According to a summative formulation, one's response to this set of stimuli can be symbolically expressed as:

R = w1s1 + w2s2 + ... + wnsn     (1)

where R is the judgmental response; wi is a weight parameter which reflects the importance of the given stimulus in the judgment; si is a scale value representing the location of the stimulus along the dimension of judgment; and n is the number of informational stimuli.

Alternatively, an averaging formulation would predict the response to be represented as follows:

R = (w1s1 + w2s2 + ... + wnsn) / (w1 + w2 + ... + wn)     (2)
Note that equation 2 is equivalent to equation 1 if the weights are constrained to sum to unity. Thus, the averaging model postulates that the impact of each stimulus on R is dependent upon the remaining stimuli, while the summative model predicts such impact to be independent of the remaining stimuli. It is this restriction of the weight values that represents the fundamental difference between the models.

To illustrate the implications of this difference, suppose that one receives two informational stimuli having the following values: s1 = 3, w1 = 2, s2 = 3, w2 = 2. Values of 12 and 3 represent the predicted responses of the summation and averaging models, respectively. Now let us assume that an additional stimulus, with s3 = 2 and w3 = 2, is to be incorporated into the judgment. The summation model predicts an increase from 12 to 16. Conversely, the averaging model would suggest the opposite result, a decrease from 3 to 2.67. This situation, the combining of moderately polarized with extremely polarized information, has been extensively used by researchers as it provides a setting capable of discriminating between the two formulations.
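The arithmetic in this example can be verified with a short sketch; the function names are ours, and the weights and scale values are simply those given above.

```python
# Sketch of the two additive integration rules (equations 1 and 2).
# Function names are illustrative, not from the original literature.

def summation(stimuli):
    """Summative rule: R = sum of w_i * s_i over all stimuli."""
    return sum(w * s for w, s in stimuli)

def averaging(stimuli):
    """Averaging rule: R = sum(w_i * s_i) / sum(w_i)."""
    return sum(w * s for w, s in stimuli) / sum(w for w, _ in stimuli)

# Two stimuli: s1 = 3, w1 = 2 and s2 = 3, w2 = 2 (as (weight, scale) pairs)
two = [(2, 3), (2, 3)]
print(summation(two))               # 12
print(averaging(two))               # 3.0

# Add a third stimulus: s3 = 2, w3 = 2
three = two + [(2, 2)]
print(summation(three))             # 16
print(round(averaging(three), 2))   # 2.67
```

Note that with positive weights the averaging response always stays between the most and least polarized scale values, while the summative response grows without bound as stimuli are added.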


An overriding concern with models such as these is that they abstract the information integration phase out of any relevant information processing or decision making context. As so often happens when one sets out to model some complex phenomenon, we begin with a simplified or "stripped down" version of it. While this is often a justifiable, and perhaps necessary, strategy at least initially, some provision must somehow be made to subsequently build "reality" back into the picture. This point was driven home forcefully by Jenkins who, in looking at the results of some 30 years of research on human memory, concluded that in trying to avoid the "contamination" of memory by pre-existing cognitive structures through the use of nonsense syllables and unfamiliar and "nonmeaningful" tasks,

"Ebbinghaus and his successors have done a great deal of the refined work that we must do some day, but have failed to explicate the workings of the massive variable...that is central to remembering - the notion of knowledge. That is, while the work may eventually be of great value, it cannot be applied until we understand the major variable which Ebbinghaus tried to avoid studying" (1974, p. 3).

Specifically, within an information processing framework, information integration represents a complex activity in which already encoded information is reformulated in order to meet some conceptual, judgmental or decision-making needs of the individual. Reformulation is necessary because the individual items of information are more fragmentary (i.e., a specific information dimension may represent only one portion of the conceptual or response dimension), the information may be of varying quality or reliability (perhaps owing to the expertise or motivation of the information source), and the memory structure, goals and available capacity of the information processor -- rather than merely unencumbered characteristics of the information -- provide direction and an organizing framework. If we refer to the larger set of situational and personal conditions as "context," it can be said that information integration research procedures have effectively minimized the impact of context. Taking a position analogous to Jenkins, we believe that the act of combining information to reach some judgment is inextricably linked to context, and that before the results of such research may be confidently extended to the larger information processing and decision making areas, the interaction of important contextual variables will have to be examined.

For example, a considerable amount of time and effort has gone into demonstrating the presumed superiority of an averaging as compared to a simple additive model of information integration. It may, however, be that each model provides a better fit in different contexts but that meaningful explanations of the psychological process are not contained in either of the models themselves. Take, for example, the situation in which mildly discrepant information is received from two equally well informed and credible communicators. Say the information has strong implicit links to the response dimension; each piece is treated as a reliable indicant of the property being judged (e.g., information regarding product quality is received from one person and product appearance from a second person, and a judgment regarding product desirability is called for). Under such conditions it seems sensible for people to estimate that the "truth" lies reasonably near the average of the two evaluations. In addition, Wyer (1974, p. 264) points out that it is reasonable for subjects in such studies to believe that each source may have selected the most representative attribute: hence it may be treated as an indirect indication of that person's overall evaluation. Thus, an averaging model should fit the data pretty well.

On the other hand, if the two pieces of mildly discrepant information conveyed by reliable sources do not lead to an unequivocal evaluation of the total entity, but are really only "necessary" parts of the whole, it is sensible to prefer an object if it has both attributes, even though the evaluation of the second is somewhat less positive than the first. If the task implies that relative evaluation should be indicative of preference, this should result in an additive model being judged superior. Of course, if the task implies that the subject is assessing something akin to "inherent goodness," obviously an object or person is "tarnished" by the possession of less than ideal qualities (actually any qualities deemed inappropriate for that level of goodness). So the entity may well be judged less favorably (i.e., an averaging-like result) due to context effects on judgment criteria or scale anchors (e.g., placement of the neutral point). In addition, while an additive model might track preference pretty well given an increase in the number of desirable attributes, its fit would be progressively worse as the person believed it was sensible to draw inferences about the possession of other unenumerated -- and possibly undesirable -- attributes from the information that the object possessed a less positive attribute. More will be said about this later.


Research programs designed to validate these mathematical descriptions of the information integration process have relied almost exclusively on accurate prediction of overall judgment. Much of the research testing the adequacy of a summative information integration rule has, in addition, relied on correlational evidence. Such evidence can, however, be seriously misleading, as "incorrect" models may achieve higher correlations with the data than the "true" model (see for example Birnbaum, 1973; Anderson and Shanteau, 1977). The basic paradigm employed by Anderson to compare the two models is based upon the person perception task of Asch (1946). Using a within-subjects design, subjects are presented numerous profiles of hypothetical persons consisting of various traits or personality adjectives, to which they provide some estimate of their favorability toward the person. This within-subjects design typically provides more power for data analysis, as well as greater assurance that the response scale is given the same meaning in all cells, than a between-subjects design with different subjects for each stimulus combination. Stimuli are presented on either separate pages of a booklet (e.g., Anderson, 1965) or separate index cards (e.g., Oden and Anderson, 1971). Order of presentation is typically randomized for each subject. Sometimes subjects are allowed to proceed through the task at their own pace (e.g., Anderson and Birnbaum, 1976); other times they are given a prespecified time (e.g., Anderson, 1965). Each trait, selected from a master list (Anderson, 1968), represents one of four possible levels of favorability: highly favorable (H), moderately favorable (M+), moderately unfavorable (M-), and highly unfavorable (L). Such traits are combined systematically to provide tests, described below, of the adequacy of additive models in general and what is claimed to be a discriminating comparison of summation versus averaging predictions.

Imagine that we had developed two sets of information, A and B, each composed of four stimuli. We then combine each stimulus in set A with each stimulus in set B to form a 4 X 4 factorial design yielding 16 different profiles which subjects respond to on some criterion scale. For the summative model to be tenable, the data should plot as a set of parallel curves. Proof of this is shown in Figure 1. The numerical values bordering the figure indicate the scale values associated with each of the four levels of sets A and B. Assuming, for simplicity, that the weights for each level of A and B equal 1, then the predictions of the summative model are depicted in the upper diagonal of each cell. As the reader can easily verify, the data will plot as a set of parallel lines.



The prediction of the averaging model regarding the pattern of the lines is dependent upon the equality of weight values within a given factor. If the weights associated with each level of A are equal and the same is true for levels within B, then a "simple" averaging model predicts parallelism, as reflected in the values in the lower diagonal of Figure 1. If, however, the requirement of equal weighting within factors is not met, the averaging model predicts non-parallelism (see Oden and Anderson, 1971, for examples of expected nonparallelism).
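The parallelism prediction is easy to verify numerically. The scale values below are arbitrary and the helper function is ours; under equal weights both the summative rule and the simple averaging rule yield parallel row curves in the factorial design.

```python
# Parallelism check for a 4 x 4 factorial of scale values.
# Under equal weighting, both the summative rule (a + b) and the
# simple averaging rule ((a + b) / 2) predict parallel curves:
# the difference between any two rows is constant across columns.

A = [1, 2, 4, 6]   # arbitrary scale values for the four levels of set A
B = [1, 3, 5, 7]   # arbitrary scale values for the four levels of set B

def is_parallel(table):
    """True if every row differs from the first row by a constant."""
    first_row = table[0]
    return all(
        row[j] - first_row[j] == row[0] - first_row[0]
        for row in table for j in range(len(row))
    )

summative = [[a + b for b in B] for a in A]
averaged = [[(a + b) / 2 for b in B] for a in A]

print(is_parallel(summative))   # True
print(is_parallel(averaged))    # True
```

A failure of this row-difference check (a non-constant difference across columns) is the informal counterpart of a significant interaction component in the analysis of variance.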

A more rigorous test of the parallelism requirement can, of course, be obtained through an analysis of variance. In particular, the interaction component should not be significant for either the summative or simple averaging models to hold. Such analysis can be performed at either an aggregate or individual level, the latter requiring replication of the original sets of judgments.

In addition to validating the model, parallelism also supports the assumption that the response measure is an interval scale, since unequal intervals would produce non-parallelism even with a correct model (Anderson, 1976). Thus the test for parallelism provides evidence on the validity of both the model and criterion measures. Failure to support the parallelism requirement implies that the model and/or the criterion measure is invalid. To eliminate the possibility of measurement inadequacies, monotone rescaling procedures can be applied to the data in an effort to reduce the nonlinearities in the responses. If a monotone transformation cannot be found, this would imply that the model should be rejected. For a more detailed discussion of monotone transformation, see Anderson (1974b, pp. 227-231).

Anderson's (1962) early efforts in this area involved testing the integration model in person perception tasks. Subjects provided judgments of liking for persons described by 3 traits. Each trait varied across 3 levels of favorability, thus representing a 3 X 3 X 3 factorial design. No significant interaction violating the parallelism requirement was found. As implied above, the parallelism requirement cannot distinguish between the summation and simple averaging models. Such a discriminating test might be obtained, however, if we consider each model's prediction regarding the addition of moderately polarized information to highly polarized information.

Thus, Anderson (1965) employed a variant of the person perception task in an effort to yield a test capable of discriminating between the two formulations. Subjects judged a variety of trait profiles which contained either two or four traits. Of direct relevance are the four trait profiles: HH, HHM+M+, LL, LLM-M-. One critical comparison involves the HH and HHM+M+ profiles. Both profiles have in common two highly favorable traits (i.e., HH), though the latter set also contains two moderately favorable traits. The summation model would predict that these additional traits should increase the favorability of subjects' responses, while an averaging process suggests that favorability should decrease (since the added M+ traits are below the average evaluation based solely on the initial H traits). For the addition of moderately unfavorable traits to a set of highly unfavorable traits (e.g., LL versus LLM-M-), the averaging model predicts a decrease in unfavorability while the summation model predicts an increase in unfavorability. The results were interpreted as supporting an averaging integration rule, as the addition of moderately polarized stimuli to highly polarized stimuli decreased the polarity of subjects' responses.
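This critical comparison can be sketched with hypothetical scale values (H = 3, M+ = 1, M- = -1, L = -3) and equal weights; the numbers are ours, chosen only to make the opposing predictions visible.

```python
# Critical test: adding moderately polarized traits to highly
# polarized ones. Scale values are hypothetical: H = 3, M+ = 1,
# M- = -1, L = -3; all weights equal.

H, M_PLUS, M_MINUS, L = 3, 1, -1, -3

def summation(values):
    return sum(values)

def simple_average(values):
    return sum(values) / len(values)

hh = [H, H]
hhmm = [H, H, M_PLUS, M_PLUS]
print(summation(hh), summation(hhmm))            # 6 8   -> more favorable
print(simple_average(hh), simple_average(hhmm))  # 3.0 2.0 -> less favorable

ll = [L, L]
llmm = [L, L, M_MINUS, M_MINUS]
print(summation(ll), summation(llmm))            # -6 -8   -> more unfavorable
print(simple_average(ll), simple_average(llmm))  # -3.0 -2.0 -> less unfavorable
```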

This same logic can be and has been extended to designs of a more complex nature as exemplified by Troutman and Shanteau (1976). In this study, subjects received ratings regarding the absorbency and durability of various disposable diapers. Ratings were one of four levels: high, above average, below average, and low. Levels of the two attributes were combined in a 4 X 4 factorial design to yield 16 different brands. To distinguish between the models, judgments for brands described by only one of the attributes were also necessary. By comparing judgments for single attribute brands with those for two attribute brands, a "critical test" may be carried out. In particular, when mean judgments of the single attribute brands are plotted in conjunction with the cell means of the 4 X 4 factorial, the dashed line representing the single attribute brands should have a steeper slope to be consistent with the averaging hypothesis. In fact (see Figure 2) the dashed line crosses over the four parallel lines. As can be seen, a brand described as high on a single dimension was judged more favorably than a brand rated high on the same dimension and above average on the remaining dimension. Similarly, a brand described as low on a single dimension was judged less favorably than a brand rated low on the same dimension and below average on the remaining dimension. These tests for deviations from parallelism were obtained through the interaction component of the analysis of variance. In particular, the test of the Linear X Linear trend component should be significant. The remaining trend components represent discrepancies from the simple averaging model and should not be significant.
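The crossover pattern in this design follows from an averaging rule that includes an initial impression. The weights and scale values below are illustrative assumptions, not Troutman and Shanteau's parameters.

```python
# Averaging with an initial impression I0 (weight W0). With one
# attribute, that rating carries more relative weight than when a
# second attribute is present, so the single-attribute line is
# steeper and crosses the two-attribute lines. Values hypothetical.

W0, I0 = 1.0, 0.0   # assumed initial-impression weight and (neutral) value
W = 1.0             # assumed weight per attribute rating

def judge(ratings):
    return (W0 * I0 + W * sum(ratings)) / (W0 + W * len(ratings))

HIGH, ABOVE, BELOW, LOW = 3, 1, -1, -3

print(judge([HIGH]))         # 1.5: "high" alone...
print(judge([HIGH, ABOVE]))  # ~1.33: ...beats "high" plus "above average"
print(judge([LOW]))          # -1.5: "low" alone...
print(judge([LOW, BELOW]))   # ~-1.33: ...is worse than "low" plus "below average"
```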

One finding from Anderson (1965) which apparently contradicted the averaging model and supported the summative model was that evaluations based on traits of equal polarity became more extreme as additional traits of the same polarity were added. For example, one's evaluation of an HHHH profile was more favorable than the evaluation of an HH profile. This result has been referred to as the "set-size" effect and required the inclusion of an "initial impression" term in the averaging model to explain such an effect. This term is interpreted as the judge's impression of a person or object prior to the receipt of any information (often assumed to be zero), its relative weight thus varying with the provision of additional information. Kaplan (see Kaplan 1975 for an overview and references) has extended this notion to specifically consider predispositional variations and differences in personal experience and knowledge. He attempts to account for configural processing (i.e., in which properties of stimuli are in part dependent on context) in terms of variations in the effective weight of a given stimulus, which is therefore a function of the weights of all other stimuli in the set, including the initial impression. This approach is in contrast to those who hold that stimuli actually change in meaning (both descriptive and evaluative) with changes in context. Further discussion of the initial impression term may be found in Anderson (1965; 1967; 1974b pp. 254-256) and Wyer (1974, pp. 294-298).
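A short sketch shows how the initial impression term lets an averaging model reproduce the set-size effect; the weight and scale values are hypothetical.

```python
# Set-size effect under averaging with an initial impression.
# Without the initial impression, HH and HHHH average identically;
# with it, each added H trait further dilutes the neutral initial
# impression and the response grows more extreme. Values hypothetical.

def average_with_initial(ratings, w0=1.0, i0=0.0, w=1.0):
    """Averaging rule with initial impression i0 carrying weight w0."""
    return (w0 * i0 + w * sum(ratings)) / (w0 + w * len(ratings))

H = 3
print(average_with_initial([H, H]))          # 2.0
print(average_with_initial([H, H, H, H]))    # 2.4  -> more extreme
print(average_with_initial([H, H], w0=0.0))  # 3.0  (no set-size effect without I0)
```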



The person perception task has been employed in numerous investigations seeking to verify Anderson's (1965) early results. Although most have focused on the integration rule underlying the combination of homogeneous items such as traits (e.g., Hendrick, 1968; Anderson and Alexander, 1971; Takahashi, 1970; Hamilton and Huffman, 1971), paragraphs (Anderson, 1973), or product attributes (Troutman and Shanteau, 1976), some have focused on processes underlying the combination of such heterogeneous items as visual (photographs) and verbal (traits) information (Lampel and Anderson, 1968) and traits and class standing on a general assessment index (Oden and Anderson, 1971). Such studies have employed a variety of judgmental objects including meals, criminality, naval officers, toys, U. S. presidents and products.

One interesting variation on the research involving the addition of moderately polarized information to extremely polarized information has been comparing the different predictions of each model for the addition of neutral information. A summation model predicts that the inclusion of neutral information should have no effect, while the averaging model predicts a decrease (or increase) for the addition of neutral information to favorable (or unfavorable) information, respectively. Oden and Anderson (1971), for example, had subjects rate their liking for a variety of meals. Half of the meals combined a number of main courses which differed in their favorability with a neutral vegetable. The remaining half replicated the first set with an additional neutral vegetable included. In accordance with the averaging hypothesis, the inclusion of the neutral vegetable increased preferences for initially disliked meals and decreased preferences for liked meals. Averaging predictions for the addition of essentially neutral information have also been substantiated by Anderson (1973) and Hamilton and Huffman (1971).
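The diverging predictions for neutral information can be confirmed in a few lines; the meal ratings below are invented for illustration.

```python
# Adding neutral information (scale value 0, equal weight).
# Summation is unchanged; averaging is pulled toward zero.

def summation(values):
    return sum(values)

def simple_average(values):
    return sum(values) / len(values)

liked_meal = [3, 2]       # hypothetical favorable main-course ratings
disliked_meal = [-3, -2]  # hypothetical unfavorable ratings
NEUTRAL = 0               # the neutral vegetable

print(summation(liked_meal), summation(liked_meal + [NEUTRAL]))  # 5 5 (no change)
print(simple_average(liked_meal))                  # 2.5
print(simple_average(liked_meal + [NEUTRAL]))      # ~1.67 (less favorable)
print(simple_average(disliked_meal))               # -2.5
print(simple_average(disliked_meal + [NEUTRAL]))   # ~-1.67 (less unfavorable)
```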

The evidence bearing on the averaging versus additive model has favored the averaging model. There have, however, been a few exceptions. Hamilton and Huffman (1971) reported 12 critical comparisons involving the traditional person perception task. Two of the comparisons favored the summation model (although the authors only recognize one as such), whereas the remaining ten comparisons favored an averaging rule. Takahashi (1970) also reports eight comparisons using the person perception task. Of these eight comparisons, three show significant differences favoring an additive model. The remaining comparisons do not reach significance, although five of them are in the direction predicted by an additive model. Some explanations for the support given to the averaging model may be found in the methods and procedures used as well as a failure to consider the role of inferential beliefs. More will be said about this shortly. Then too, there is the broader question of the relevance of these findings to an understanding of the information integration process.


The information integration process becomes far more complex when viewed from a larger information processing perspective and from the standpoint of the person who is trying to make sense of the "blooming, buzzing confusion" that Krech and Crutchfield (1948) term the discrete impressions, unrelated experiences and unitary sensations which await cognitive organization. An additive strategy may simply reflect a well learned "the more the better" heuristic for evaluating the worth of an entity. The meaningfulness of that heuristic and the likelihood that it will be used can be affected in innumerable ways through impacts on the person, the task and the stimuli. Some of these impacts (as suggested earlier) make it sensible for a person to use an averaging heuristic. Rather than arguing the intrinsic worth of any heuristic, we would probably do better to seek to understand the kinds of information integration responses individuals make under different information processing and problem solving conditions and constraints. A good starting point for such an inquiry might be a consideration of the heart of the process: the search for meaning.

If we begin at this point, we should evidence greater concern for the encoding of information. Fishbein and Ajzen, criticizing Anderson's standard information integration research paradigm, conclude that, "Perhaps the most basic problem with the research paradigm used in this area is that the subject's salient beliefs about the hypothetical person are not assessed. Instead it is assumed that the subject accepts the information he receives...and that his attitude toward the person is a function of these beliefs and only these beliefs" (1975, p. 233). While Fishbein and Ajzen's criticism is directed at measurement, and therefore the outcome of an encoding stage, a more theoretically significant approach would be to more fully examine the impact of one's preexisting cognitive organization upon the integration of information.

This notion has a long history in the psychology of perception as suggested by the concepts "frame of reference," "implicit personality theory," "adaptation level" and "memory schemata." Recent work on cognitive scripts (Abelson, 1976; Schank and Abelson, 1977) and social perception (e.g., Cantor and Mischel, 1977; Higgins, Rholes and Jones, 1977; Srull and Wyer, 1980; and Hastie et al., 1980) is suggestive of a promising movement toward more integrative analysis of perception, memory and judgment. All such approaches implicitly take the position that it is very difficult if not impossible to know the implications of a given piece of information (e.g., a trait adjective) without also knowing the context in which the information is presented. If this is correct, mathematical modeling of information integration will be difficult to extend into more complex settings.

As early as 1946, Asch hypothesized a "change of meaning" explanation of the widely disparate person descriptions that were generated following (in one case) subjects' exposure to 2 lists of 7 personality traits, of which only the 4th trait (either "warm" or "cold") varied. The "centrality" of such a trait, it was argued, produced a context effect which altered the meaning of the other information: intelligent-cold suggests a somewhat Machiavellian individual, while intelligent-warm suggests a sort of bright and helpful person. More generally, context is likely to affect both the evaluation of a trait ("irresponsible" is more negative in the context of a parent than an employee) and its meaning ("agreeable" in one context suggests a pleasant person, in another a wishy-washy person). Because of the devastating implications of context-based changes in meaning for building mathematical models of information integration, it is little wonder that the "battle lines" were drawn with respect to this interpretation. A number of other more mathematically tractable explanations were offered for this effect (see Wyer, 1974, pp. 236-261), and the adequacy of each is the subject of continued debate. One explanation that has been offered is that rather than a "change of meaning" these findings result from a "generalized halo effect." While this would be much more convenient mathematically (e.g., one could represent the overall evaluative implications of the other information in the set as an additional term) the fact that in a number of studies the effect of the "central trait" seemed to be concentrated on adjectives linked descriptively (but not evaluatively) casts doubt on this explanation.

Despite the fact that the research introduced in opposition to the change of meaning hypothesis has generally been carried out using paradigms which appear to minimize the opportunity for changes in meaning, there remains substantial (but not unequivocal) evidence in support of Asch's contention. Though not designed with the tight experimental precision necessary to choose among alternative explanations, there is much evidence in the larger social psychological literature for substantial context effects: whether upon the perception of personal traits (e.g., Kelley, 1950), the effects of persuasive communications (e.g., Hovland and Weiss, 1951) or the meaning of a behavior itself (e.g., Aronson, Willerman and Floyd, 1966). These studies indicate that by introducing a communicator as either "rather cold" or "very warm," or by introducing one additional piece of information about the assumed source of a communication (though the communication itself is identical), or by having either a competent or incompetent job candidate spill coffee, this otherwise identical information (i.e., a speech, a written communication, an action) would be interpreted quite differently and lead to different judgments, often along strictly context-relevant dimensions.

In fact, it would be hard to find a stream of research which shows the effect of context in a more direct way than the research Anderson has introduced in support of an averaging model. The procedures used by Anderson in such studies, we submit, create a task context in which averaging becomes a reasonable heuristic for subjects to use. To begin with, Anderson typically gives his subjects clear instructions about how to treat the information provided to them. The instructions usually state that each of the stimuli is equally important and that equal attention should be paid to each. "This is intended to help ensure the assumption of equal weighting, which is necessary for the parallelism prediction under the averaging model" (Anderson, 1974b, p. 246). For the person perception task, each of the traits is attributed to a different acquaintance of the person being described. Subjects are also told that apparent inconsistencies are due to the fact that each acquaintance may be reporting about a different aspect of the person's personality.

Wyer (1974, p. 275) argues that, "Such procedures may also predispose subjects to interpret each adjective as an independent indication of the characteristic that is most representative of the person's personality, and therefore of his likeableness; this may increase the tendency to use an averaging rule..." It is, of course, reasonable for a subject to conclude that the "truth" lies somewhere between an H and an M+ trait supplied by the two judges.


The easiest way that summative models can be made to accommodate context effects is to assume that the weighting parameters are context specific. This suggests that the common specification for the adding and averaging models should be:

Rn = Σ(i=1 to n) wi/n si    (3)
where wi/n indicates a weighting parameter for si conditional on the n-stimuli context. An averaging model carries the side condition that:

Σ(i=1 to n) wi/n = 1    (4)
There is experimental evidence that the weight parameter should be regarded as context specific, at least in the following circumstances:

(a) When information is totally discounted in the context of other more convincing and conflicting information,

(b) When information is completely or partially redundant, and

(c) When cognitive simplification results in attentional discounting (Wyer, 1974; Anderson, 1974).
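A minimal numerical sketch may help fix the difference between the two rules the models above formalize. This is our own illustration, not Anderson's procedure; the scale values (H = 8.0, M+ = 5.0 on a 0-10 scale) are hypothetical.

```python
# Adding vs. averaging with equal weights: the classic "crossover" test.
# Scale values are hypothetical: H = 8.0, M+ = 5.0 on a 0-10 scale.

def adding(values, weights=None):
    """Adding model: response is the weighted sum of scale values."""
    weights = weights or [1.0] * len(values)
    return sum(w * s for w, s in zip(weights, values))

def averaging(values, weights=None):
    """Averaging model: weights are normalized to sum to 1 (the side
    condition), so each effective weight depends on the whole set."""
    weights = weights or [1.0] * len(values)
    total = sum(weights)
    return sum((w / total) * s for w, s in zip(weights, values))

H, M_PLUS = 8.0, 5.0
print(adding([H]), adding([H, M_PLUS]))        # 8.0 13.0 -> adding predicts H,M+ > H
print(averaging([H]), averaging([H, M_PLUS]))  # 8.0 6.5  -> averaging predicts H,M+ < H
```

The opposite orderings of the H,M+ profile under the two rules are exactly what the crossover test exploits.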

In such cases the weight parameter approaches zero. Anderson explicitly accepts that context affects the weight parameter in his averaging model when he introduces the side condition and the mechanism of his zero-valued initial impression. His context-specific weighting parameter can be expressed as:

wi/n = wi / (w0 + Σ(j=1 to n) wj)    (5)
The in-context weight is exactly specified in terms of the weight given to the initial impression (the initial context) and the context-free (absolute) weights of all of the stimuli. As he puts it:

The effective weights are the relative weights, and they depend on the weights of all the stimuli in the combination. This reflects the gestalt character of the averaging rule, in which the role of each part depends on the whole. (Anderson 1974, p. 296).
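The relative-weight mechanism Anderson describes can be sketched as follows. The absolute weights and the initial-impression weight w0 here are arbitrary illustrative numbers, not estimates from any study.

```python
# Effective (relative) weights under the averaging side condition:
# each in-context weight is the absolute weight divided by the sum of
# all weights, including w0 for the initial impression.

def effective_weights(abs_weights, w0=1.0):
    denom = w0 + sum(abs_weights)
    return [w / denom for w in abs_weights]

# Adding a third stimulus changes EVERY effective weight -- the
# "gestalt character" of the averaging rule described in the quote.
print(effective_weights([2.0, 2.0]))       # [0.4, 0.4]
print(effective_weights([2.0, 2.0, 2.0]))  # each drops to about 0.286
```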

While the weights in the averaging model by definition account for what Anderson believes to be the most important configural integration process (averaging), it is doubtful whether the same function of absolute weights can account for all of the gestalt, context, or configural effects that have been observed in information integration experiments. Further assumptions have to be introduced. An example is that the absolute weights are a function of scale value. The justification is that extreme scale values have greater diagnosticity and therefore contribute more to the overall impression. The conjunctive and disjunctive decision models (Einhorn, 1970, 1971) are extensions of this assumption. Anderson (1972) specified that w = 1 + as + bs², which resulted in the testing of the following integration model:

Rn = Σ(i=1 to n) (1 + a si + b si²) si / Σ(i=1 to n) (1 + a si + b si²)    (6)
This differentially weighted averaging model fitted the configural responses quite well. As a matter of convenience (and because it probably would not have added much), the initial impression term was dropped; it is problematic just how its weight could have been determined in such circumstances. The above equation does seem to tend toward the model-fitting equivalent of poetic license. Recognizing that si is itself a statistically fitted estimate of the connotative value the person attaches to the ith adjective, it can hardly be asserted that the relationship between the context-free scale values of adjectives and the overall person impression is as straightforward as the standard averaging model indicates. The supporters of the averaging model may be in danger of painting themselves into a corner. They must either explain configural effects by ever more subtle juggling of absolute weight ratios or admit that in-context adjective weights cannot behave according to their basic specification and account for all configural effects. This leaves them in the same position as supporters of the adding model, who simply assert that weights are affected by context and then proceed to estimate them endogenously (as statistical parameters or as functions of other statistically estimated variables).
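The differentially weighted rule can be sketched numerically as below. The coefficients a and b are arbitrary illustrative values, not Anderson's fitted estimates.

```python
# Differentially weighted averaging with w_i = 1 + a*s_i + b*s_i**2,
# so extreme scale values receive extra weight.

def diff_weighted_average(values, a=0.0, b=0.02):
    weights = [1 + a * s + b * s ** 2 for s in values]
    return sum(w * s for w, s in zip(weights, values)) / sum(weights)

vals = [8.0, 5.0]
print(diff_weighted_average(vals, a=0.0, b=0.0))   # 6.5: equal weights, plain average
print(diff_weighted_average(vals, a=0.0, b=0.02))  # above 6.5: the extreme value pulls harder
```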

As has been suggested, both linear, summative information integration models can accommodate context or configural effects by making the scale value (si) context specific (si/n). Anderson (1974) has admitted that the most attractive way of explaining stimulus interactions or context effects is to assume that scale values change, but he has dismissed this proposition. He doubts that there is a change-of-meaning context effect: "instead it appears that the adjectives are integrated into the impression at their context-free value. Once integrated they lose their separate existence and become part of the whole" (p. 258). In the process, the connotative value of the nth adjective in the context of n-1 other adjectives becomes a composite of its context-free value and an overall person impression or generalized halo effect. Mathematically, si/n would then be a weighted average of si and Rn.

There is a very good reason why Anderson and other cognitive psychologists seek to avoid introducing context-specific values (si/n's) into the summative models. It would be a contradiction in terms: if the molecular parts are dependent on each other, then a summative linear model is by definition misspecified and inappropriate. Much of the operational appeal of both the adding and averaging information integration models rests on the proposition that context-free connotative values can be used to construct the overall impression. Models formulated as if this proposition were correct appear to provide a useful enough approximation of at least the outcome of judgmental processes that we can expect their continued use, despite the likelihood that gestalt or configural information integration is the true state of the world.


There is one implicit but critical context-based assumption underlying the traditional paradigm for comparing the models, and its failure to be met seriously threatens the validity of existing data on this issue. It is assumed that, in making their judgments, subjects rely solely upon the information provided. That is, they take this information at its face value and do not consider any other implications it might have with respect to other attributes or traits. As will be shown, failure to meet this assumption can produce data which appear to support an averaging model when in fact the summation model represents an equally viable explanation for the data.

To gain insight into the tenability of this assumption, let us place ourselves in the role of the typical subject in a typical integration task. A subject in the Troutman and Shanteau (1976) task, for example, is aware that he will receive information on a brand's performance on either one or two dimensions. When the subject encounters brands rated on a single dimension, he may do one of two things. First, he may, as is assumed, make his judgment solely on the information provided. Alternatively, he might "infer" what the brand rating on the missing dimension might have been and thus base his judgment on both the presented and inferred information.

By "infer", we refer to a process known as inferential belief formation (Fishbein and Ajzen, 1975) in which the person constructs or deduces a belief on the basis of his other beliefs (probabilistic consistency) or his attitude (evaluative consistency). Such inferential processes are not uncommon and can be easily uncovered in consumer settings (e.g., inferring a product's quality on the basis of its price). The "true" scale value of a piece of information probably is some function of the scale values of the inferred traits. Thus, the addition of a moderately favorable trait or attribute to a very favorable attribute should not only increase the overall evaluation by the value of that single attribute, but should, at the same time, decrease the overall evaluation by the value of the unspecified but more moderately evaluated inferred traits. Fishbein and Ajzen's earlier-cited criticism of Anderson's lack of measurement of subjects' evaluations of all salient attributes is relevant here and may be an advantage of Fishbein's approach.

Such inferential processing is likely to be encouraged in studies in which the judgmental dimensions are highly correlated. In Troutman and Shanteau (1976), a strong relationship would appear to exist between durability and absorbency such that diapers which are highly absorbent are likely to be highly durable. Some preliminary research by the present authors had subjects rate the likelihood that a brand rated high on absorbency (durability) would be either high, above average, below average, or low on durability (absorbency). The results strongly confirmed that a positive relationship exists between the two traits in this context.

To illustrate the problems which occur when subjects fail to base their judgments solely on the information provided, let us assume that a subject infers the missing rating when judging single dimension brands. Thus, when faced with a brand rated high on durability, the subject infers that the brand is fairly close to high on absorbency. Similarly, when the brand is rated low on one dimension, the subject believes that the remaining dimension is probably low. If this occurs, then it should not be too surprising to find that a rating for a HM+ brand is lower than the rating for the H only brand since judgments for the H only brand should incorporate the belief that the brand is better than M+ on the unspecified dimension.
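The arithmetic of this argument is easy to make concrete. The numbers below are hypothetical and serve only to show that a purely summative rule, plus an inference about the missing dimension, reproduces the apparent averaging result.

```python
# Summation plus inference on the missing dimension. All scale values
# here are hypothetical illustrations.

H, M_PLUS = 8.0, 6.0

# H-only brand: the subject infers the unstated dimension is near H.
inferred_missing = 7.5                 # hypothetical inferred value
rating_H_only = H + inferred_missing   # summative rule: 15.5

# H,M+ brand: both dimensions are stated, so nothing is inferred.
rating_H_Mplus = H + M_PLUS            # summative rule: 14.0

# The "crossover" appears even though nothing was averaged.
print(rating_H_only > rating_H_Mplus)  # True
```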

It is also easy to see how this contamination, in which subjects evoke inferential processes, should be present in the person perception task. It is quite common for us to assume that a person's characteristics tend to be quite consistent: a kind person is likely to be honest, a cruel person is likely to be dishonest. Ratings for an HH profile are therefore likely to include the inference that the unspecified dimensions are very close to being highly favorable. When the subject encounters the HHM+M+ profile, he perceives a slight inconsistency between the traits such that the previously unspecified traits which were believed to be close to H are, in fact, M+. The rating is accordingly lower.

Therefore, if subjects act in a manner similar to that described above (i.e., inferential processing is occurring) one cannot conclude that the "crossover" test, as currently operationalized, is indeed a valid discriminator between summation and averaging integration rules. Several approaches/procedures could be used to study the existence and magnitude of inferential processing in an information integration task. One easily implemented procedure would be an instructional manipulation designed to influence inferential processing. Certain subjects would receive instructions that they should infer how the brand would be rated on the unspecified dimension and that this inference should be incorporated into their judgments. Other subjects might, in addition, be instructed as to whether the dimensions were correlated positively, negatively, or essentially zero. The remaining subjects would not receive any such instructions. Comparisons among the treatment groups might produce a fair amount of insight into both the impact of inferential processes and the nature of the inferences made based only upon preexisting knowledge.

Alternatively, one could attempt to minimize subjects' ability to evoke inferential processing. For the product perception task, one could select attributes which are not correlated, though evaluative consistency may still be a problem. This approach may, however, create additional problems for subjects which they may attempt to solve through the use of discounting or other heuristics.

Finally, one might attempt to assess or model such inferential beliefs. Following completion of the information integration task, each subject's conditional beliefs (regarding the unspecified dimension) could be measured and incorporated into the analysis. Modeling such a process, however, becomes cumbersome even with a few attributes and levels. To illustrate, consider the Troutman and Shanteau study, where subjects rated a High Absorbency only paper towel higher in quality than one possessing High Absorbency-Above Average Durability, a result lending support to the averaging model. However, to the extent that subjects infer that a High Absorbency only towel is likely to have better than Above Average Durability, then neither model can be disproven. To test for this possibility, subjects could be asked how much they like each brand represented in the profiles. To predict, for example, a subject's affect toward a High Absorbency only brand, consider the following formulation:

AHA = (1)VHA + (PHD/HA)VHD + (PAAD/HA)VAAD + (PBAD/HA)VBAD + (PLD/HA)VLD    (7)

where VHA is the value associated with buying a towel which is high in absorbency; VHD, VAAD, VBAD, and VLD are the values associated with buying a towel which is, respectively, high, above average, below average, and low in durability; and PHD/HA, PAAD/HA, PBAD/HA, and PLD/HA are the conditional probabilities that a brand high in absorbency would also be, respectively, high, above average, below average, and low in durability. Technically, the "1" represents the simple probability that the brand is high in absorbency; since the brand is explicitly described as high in absorbency in the profile, this probability is assumed to be 1. For a subject who prefers the High Absorbency only towel to the High Absorbency-Above Average Durability towel, we would then expect:

(1)VHA + (PHD/HA)VHD + (PAAD/HA)VAAD + (PBAD/HA)VBAD + (PLD/HA)VLD > (1)VHA + (1)VAAD    (8)
This inequality can be further simplified by subtracting (1)VHA from each side, which leads to the following formulation:

(PHD/HA)VHD + (PAAD/HA)VAAD + (PBAD/HA)VBAD + (PLD/HA)VLD > VAAD    (9)
Should the inequality expressed in Equation 9 hold, this would suggest that subjects might be incorporating inferential beliefs into their judgments. In addition, one should not expect Equation 9 to hold for subjects who do not prefer the High Absorbency only towel to the High Absorbency-Above Average Durability towel. Since it is difficult to believe that people typically engage in so complex a process, some other, more descriptively accurate means of representing customary inference processes should be considered.
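The test implied by Equation 9 can be sketched directly. All values and conditional probabilities below are hypothetical, chosen only to show the mechanics; nothing here is fitted data.

```python
# Inferential-belief formulation for the High Absorbency (HA) only brand.
# V: hypothetical values of the absorbency/durability levels;
# P_given_HA: hypothetical subjective probabilities of each durability
# level given that the brand is high in absorbency.

V = {"HA": 8.0, "HD": 8.0, "AAD": 6.0, "BAD": 4.0, "LD": 2.0}
P_given_HA = {"HD": 0.6, "AAD": 0.3, "BAD": 0.08, "LD": 0.02}

# Affect toward HA-only: stated dimension plus inferred durability term.
inferred_term = sum(P_given_HA[d] * V[d] for d in P_given_HA)
affect_HA_only = 1.0 * V["HA"] + inferred_term

# Affect toward the HA / Above Average Durability brand: both stated.
affect_HA_AAD = 1.0 * V["HA"] + 1.0 * V["AAD"]

# Equation 9's inequality: inferred durability term vs. V_AAD alone.
print(inferred_term, V["AAD"])         # about 6.96 vs. 6.0
print(affect_HA_only > affect_HA_AAD)  # True: HA-only preferred, summatively
```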


Perhaps we need to return to some basic questions. What is information integration? What functions does it serve? We're exposed to a piece of information. We perceive it and presumably do some preliminary work on it to determine what it is. This may be akin to deciding where to place the information in memory. But why place it anywhere if not to make sense out of it, to see if the information is relevant to our needs and goals? Making sense out of information pretty clearly involves combining it with other information, both information we continue to receive and information already stored.

Research based on existing information integration models tends to treat the person in a rather passive manner: discrete information items (usually adjectives) are received, and a large number of judgments are called for. One can characterize this approach as stimulus-oriented in the sense that meaningfulness, personal relevance and problem solving are minimized. More complex stimulus materials (e.g., paragraphs describing presidents, pictures of possible dates) have sometimes been used, though these are the exception, and together with the multiple-judgment factorial designs and accompanying stimulus training (e.g., learning trials) and instructions, the setting is able to exert a dominant influence on behavior.

Contrast this with a setting in which a person is trying to make a sound judgment, say, regarding the purchase of a particular product, and is exposed to information about the product. In the latter case, certain appropriate concepts (perhaps important product attributes) are retrieved from memory and used as benchmarks to help make an accept/reject judgment. It might even be said that this consumer either has or is in the process of developing an overriding concept of a "suitable" or perhaps an "acceptable" product within this product class. In this sense one might look upon the consumer as "testing the hypothesis" that product X fits (or perhaps is a member of the set of products described by) this concept. Now, the situation may be, and almost certainly is, more complicated than this: if we're expressing it in terms of hypotheses, there are probably a set of competing hypotheses which correspond to discriminable and meaningful levels of evaluation (e.g., superior, good, okay, unacceptable); for many decisions a single overriding evaluative concept may not be practical, or it may be too imprecise; perhaps the concepts are less abstract and more script-like (Abelson, 1976). Be that as it may, any such conceptually oriented approach will look upon the information integration process in a very different way than the existing stimulus-oriented approaches.

To illustrate this in a very rough fashion, let us consider one of our consumer researcher ancestors who one day steps out of his cave, looks across a waving field of grain and spies a tawny coat about a quarter of a mile away. This information, unfortunately, is somewhat ambiguous. Is the animal likely to be his dinner, or is he likely to be the animal's dinner? Which of these concepts does the information support? Our caveman plays scientist as he advances cautiously, hoping to gather new information. He spies a twitching tail. What concept now fits the combined information? Unfortunately, it's still not clear. He remembers having a tasty meal of an animal with a tawny coat and twitching tail. He also has a disconcerting picture in his head of a friend being dragged off by an animal with these attributes. So, up to this point, the "story" he has put together from the pieces of information and his stored recollections accommodates two very different concepts. He takes a few more steps and spots a great shaggy mane. The exemplar in one of his stories now becomes implausible. Back to the cave! Our caveman has concluded that the predator concept is much more likely.

Most important, whereas existing models of information integration give primary attention to a combinatorial rule for putting incoming information together, a conceptually oriented approach would focus on concept identification and formation and thus the validity of the information for the concept (or concepts) under consideration. The information is not simply combined, it is (as we suggested earlier) reformulated through an iterative concept identification and testing process. The person is assumed to begin any sequence with some degree of focus (usually with at least a vague sense that some cognitive categories are most likely to be useful or relevant) and possibly an already formed concept.

Depending on whether the choice is his or is limited by the environment, the consumer may first process information believed to have the highest likelihood of discriminating among the most salient alternative concepts. Thus, a housewife interested in judging whether or not to buy a particular paper towel probably has some concept -- if only a mental picture of herself using the towel -- of an acceptable, let's say particularly absorbent, paper towel. This concept may already include the "information" that thicker paper towels are stronger, more absorbent and last longer. In such a case, either firsthand information (e.g., trying to judge its thickness by feel) or information communicated by the package (e.g., "extra-thick") is used to test the validity of the concept (i.e., that the brand is particularly absorbent). In the absence of such information, the consumer will attempt to validate the concept by extracting meaning from other product cues, applying various inferential beliefs in the process (e.g., things that are thicker weigh more; if it says "extra soft" it probably isn't very strong). Thus, the information is combined, but almost in a hypothesis-testing fashion. If such information is presented in an experimental context, particular instructions, a lack of incentive, or unrealistic task elements (e.g., requiring a great many judgments) may diminish this hypothesis-testing behavior. Nevertheless, learning that a paper towel is only M+ in thickness after first learning it was H in strength should still strike the consumer as more consistent with a "less than superior" concept, while the first piece of information by itself may tend to validate the concept of a "superior" paper towel.

The essence of this argument is that combining information to make some judgment about an object or person may not be very different from concept-learning. Much might be gained by relating this phase of information processing to the larger psychological literature on concept formation, though the research in this area is still pretty much focused on discrimination learning involving stimuli such as geometric shapes and colors (see Dominowski, 1974 and Bourne, 1974 for useful overviews and references). The direct appeal of a conceptually-oriented approach is that it emphasizes the search for meaning in combining elements of information rather than the predictive power of any particular cognitive algebra. Accordingly, research having this orientation cannot help but pay more attention to key aspects of the larger information processing context (conceptual system, goals, information characteristics, task environment, etc.).

There have been several attempts to directly apply concept identification principles to information integration. While these have been less ambitious in scope than the broad approach discussed above, they offer an alternative interpretation of the psychological process underlying more traditional information integration research which would be quite consistent with that approach.

Ostrom (1967) provided a concept-identification explanation of the process underlying changes in the meaning of an adjective when paired with other adjectives. He argued essentially that an adjective (e.g., intelligent) by itself is likely to be consistent with a range of concepts, and therefore, since its meaning may be somewhat ambiguous, its evaluative rating might well reflect some mid value of the evaluative implications of this set of concepts. When the adjective is paired with another adjective (e.g., warm rather than cold), some of the concepts no longer seem viable (e.g., unsociable, greedy), so these evaluative implications are no longer considered. In this case the evaluation should be more favorable than would be predicted by considering the scale values of the attributes (intelligent and warm) assessed individually. Research bearing on Ostrom's interpretation is discussed in Wyer (1974, pp. 243-251). In addition, Wyer (1974, pp. 306-321) has operationalized a concept-identification formulation based on similar principles, utilizing independent and conjunctive probability distributions to represent the subjective distribution of evaluations associated with objects or people described by each adjective. Let's say, for example, that you are asked to evaluate the adjective "friendly." People you have previously labeled "friendly" fall along some distribution of evaluation, though almost all these people have been evaluated positively. The evaluative implications of other adjectives likewise have subjective probability distributions. When any two are paired, we narrow our identification of the concept partially described by these adjectives. The expected value of a conjunctive probability distribution of their evaluative implications thus provides an approximation of this process.
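Wyer's expected-value idea can be caricatured with discrete distributions. The distributions below are hypothetical, and the conjunction rule assumes independence, which, as discussed next, is often violated.

```python
# Subjective evaluative distributions for two adjectives; pairing them
# narrows the plausible concepts. Under independence, the conjunction
# is the normalized elementwise product of the two distributions.

CATEGORIES = [1, 2, 3, 4, 5]  # evaluative scale positions (low to high)

def expected_value(dist):
    return sum(p * c for p, c in zip(dist, CATEGORIES))

def conjunction(dist_a, dist_b):
    """Valid only if the component distributions are independent."""
    joint = [pa * pb for pa, pb in zip(dist_a, dist_b)]
    z = sum(joint)
    return [p / z for p in joint]

intelligent = [0.05, 0.15, 0.30, 0.30, 0.20]  # hypothetical
warm        = [0.00, 0.05, 0.15, 0.40, 0.40]  # hypothetical

pair = conjunction(intelligent, warm)
# Pairing with "warm" raises the expected evaluation above that of
# "intelligent" alone, as in Ostrom's account.
print(expected_value(intelligent), expected_value(pair))
```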

Wyer's efforts to operationalize a concept identification formulation make it painfully obvious that this approach, while theoretically appealing, is a nightmare from a modeling standpoint. First, the conjunction of several probability distributions cannot be determined solely from the expected values of the component distributions themselves. In addition, to the extent that the component distributions are not independent of one another, the conjunction of these cannot be estimated from the individual components. Finally, the theoretically attractive properties of a conceptually oriented approach, which expands our perspective on the information integration process, at the same time vastly complicates the modeling problem, and this is especially true when estimation of a conjunctive probability distribution is involved. Hypothesis testing strategies used to identify concepts virtually guarantee selection of interdependent information dimensions in some patterned sequence. This is likely to be an iterative process. Descriptively accurate modeling approaches would, therefore, have to reflect the multistage nature of this process. One such approach might involve the use of serial modeling, representing a process in which each new item of information is interpreted by, and then combined with, the belief structure existing immediately prior to the receipt of the new information.
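One possible sketch of such a serial model follows. The interpretation step (shading each new value toward the current belief) and all of its constants are our own illustrative assumptions, not a model proposed in the text or by Wyer.

```python
# Serial integration: each item is interpreted against, then combined
# with, the belief held immediately before it arrives.

def serial_integrate(items, prior=5.0, prior_weight=1.0):
    belief, weight = prior, prior_weight
    for value, item_weight in items:
        # Interpretation: shade the item's value toward the current
        # belief (a crude stand-in for context-dependent meaning).
        interpreted = 0.8 * value + 0.2 * belief
        # Combination: running weighted average with the prior belief.
        belief = (weight * belief + item_weight * interpreted) / (weight + item_weight)
        weight += item_weight
    return belief

# Order matters: the same items in a different order yield different
# final beliefs, something no static combination rule exhibits.
print(serial_integrate([(8.0, 1.0), (2.0, 1.0)]))  # about 5.08
print(serial_integrate([(2.0, 1.0), (8.0, 1.0)]))  # about 4.92
```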

Now it may be that we're simply too ambitious when it comes to modeling psychological processes anyhow, and that we too often trade insightful description for illusory precision. It may be that at this stage in the game we should be a little less carried away by our successes in bringing ever more powerful methodologies to bear on carefully circumscribed problems. The alternative is to fit our models to our theories rather than the other way around, and then to expect no more than directionality and perhaps order-of-magnitude predictions.


Progress toward a significant understanding of information integration processes would appear to require a more ambitious and interrelated treatment of perception, memory and judgment, and less emphasis on mathematical models of combinatorial rules. Such combinatorial rules might best be treated as heuristics whose development and application in specific informational and situational contexts might be examined further. More theoretically significant insights into how information is brought together to achieve a particular goal may be generated by taking a concept identification/formation approach. Much of the research comparing summative and averaging models must be considered equivocal because the procedures used to test such models are not only restrictive on theoretical grounds but may also bias the results in favor of an averaging formulation. The lack of measurement of inferential beliefs is another major factor contributing to this problem. Since there is a continuing need to specify a combinatorial rule in various models of human judgment, the best advice may be to employ whatever heuristic seems best suited to the particular context. The logic of a "the more the better" rule with regard to a product's possession of positively evaluated attributes would appear to support the use of a summative model such as Fishbein's in the product attitude area.


Abelson, R. H. (1976), "Script Processing in Attitude Formation and Decision Making," in Cognition and Social Behavior, Carroll, J., and Payne, J. (eds.), Hillsdale, N. J.: Erlbaum Associates, Inc.

Anderson, N. H. (1962), "Application of an Additive Model to Impression Formation," Science, 138, 817-818.

Anderson, N. H. (1965), "Averaging Versus Adding As A Stimulus-Combination Rule in Impression Formation," Journal of Experimental Psychology, 70, 394-400.

Anderson, N. H. (1967), "Averaging Model Analysis of Set Size Effect in Impression Formation," Journal of Experimental Psychology, 75, 158-165.

Anderson, N. H. (1968), "Likeableness Ratings of 555 Personality-Trait Words," Journal of Personality and Social Psychology, 9, 272-279.

Anderson, N. H. (1973), "Information Integration Theory Applied to Attitudes About U. S. Presidents," Journal of Educational Psychology, 64, 1-8.

Anderson, N. H. (1974a), "Algebraic Models in Perception," in Handbook of Perception, Carterette, E. C., and Friedman, M. P. (eds.), Vol. 2, New York: Academic Press.

Anderson, N. H. (1974b), "Information Integration Theory: A Brief Survey," in Contemporary Developments in Mathematical Psychology, Krantz, D. H., Atkinson, R. C., Luce, R. D., and Suppes, P. (eds.), Vol. 2, San Francisco: W. H. Freeman and Company.

Anderson, N. H. (1976), "How Functional Measurement Can Yield Validated Interval Scales of Mental Quantities," Journal of Applied Psychology, 61, 677-692.

Anderson, N. H. and Alexander, G. R. (1971), "Choice Test of the Averaging Hypothesis for Information Integration," Cognitive Psychology, 2, 313-324.

Anderson, N. H., and Shanteau, J. (1977), "Weak Inference with Linear Models," Psychological Bulletin, 84, 1155-1170.

Anderson, T., and Birnbaum, M. H. (1976), "Test of an Additive Model in Social Inference," Journal of Personality and Social Psychology, 33, 655-662.

Aronson, E., Willerman, B. and Floyd, J. (1966), "The Effect of a Pratfall on Increasing Interpersonal Attractiveness," Psychonomic Science, 4, 227-228.

Asch, S. E. (1946), "Forming Impressions of Personality," Journal of Abnormal and Social Psychology, 41, 258-290.

Bettman, J. R., Capon, N. and Lutz, R. J. (1975a), "Multiattribute Measurement Models and Multiattribute Attitude Theory: A Test of Construct Validity," Journal of Consumer Research, 1 (March), 1-14.

Bettman, J. R., Capon, N. and Lutz, R. J. (1975b), "Cognitive Algebra in Multiattribute Attitude Models," Journal of Marketing Research, 12, 151-164.

Birnbaum, M. H. (1973), "The Devil Rides Again: Correlation as an Index of Fit," Psychological Bulletin, 79, 239-242.

Bourne, L. E. (1974), "An Inference Model for Conceptual Rule Learning," in Theories in Cognitive Psychology, Solso, R. L. (ed.), Potomac, Maryland: Lawrence Erlbaum Associates.

Cantor, N. and Mischel, W. (1977), "Traits as Prototypes: Effects on Recognition Memory," Journal of Personality and Social Psychology, 35, 38-48.

Dominowski, R. L. (1974), "How Do People Discover Concepts?" in Theories in Cognitive Psychology, Solso, R. L. (ed.), Potomac, Maryland: Lawrence Erlbaum Associates.

Einhorn, H. J. (1970), "Use of Non-Linear, Non-Compensatory Models in Decision Making," Psychological Bulletin, 73, 211-230.

Einhorn, H.J. (1971) "Use of Non-Linear, Non-Compensatory Models as a function of Task and Amount of Information," Organizational Behavior and Human Performance, 6, 1-27.

Fishbein, M. (1963), "An Investigation of the Relationships Between Beliefs About an Object and Attitude Toward That Object," Human Relations, 16, 233-239.

Fishbein, M. and Ajzen, I. (1975), Belief, Attitude, Intention and Behavior, Reading, Mass.: Addison-Wesley.

Hamilton, D. L. and Huffman, L. J. (1971), "Generality of Impression-Formation Processes for Evaluative and Nonevaluative Judgments," Journal of Personality and Social Psychology, 20, 200-207.

Hastie, R., Ebbesen, E. B., Ostrom, T. M., Wyer, R. S., Hamilton, D. L. and Carlston, D. E. (eds.) (1980), Person Memory and Encoding Processes, Hillsdale, N. J.: Lawrence Erlbaum Associates.

Hendrick, C. (1968), "Averaging vs. Summation in Impression Formation," Perceptual and Motor Skills, 27, 1295-1302.

Higgins, E. T., Rholes, W. J. and Jones, C. R. (1977), "Category Accessibility and Impression Formation," Journal of Experimental Social Psychology, 13, 141-154.

Hovland, C. I. and Weiss, W. (1951), "The Influence of Source Credibility on Communication Effectiveness," Public Opinion Quarterly, 15, 635-650.
Jenkins, J. J. (1974), "Can We Have a Theory of Meaningful Memory?" in Theories in Cognitive Psychology, Solso, R. L. (ed.), Potomac, Maryland: Lawrence Erlbaum Associates.

Kaplan, M. F. (1975), "Information Integration in Social Judgment: Interaction of Judge and Informational Components," in Human Judgment and Decision Processes, Kaplan, M. F. and Schwartz, S. (eds.), New York: Academic Press.

Kelley, H. H. (1950), "The Warm-Cold Variable in First Impressions of Persons," Journal of Personality, 18, 431-439.

Krech, D. and Crutchfield, R. S. (1948), Theory and Problems of Social Psychology, New York: McGraw-Hill.

Lampel, A. K., and Anderson, N. H. (1968), "Combining Visual and Verbal Information in an Impression-Formation Task," Journal of Personality and Social Psychology, 9, 1-6.

Oden, G. C., and Anderson, N. H. (1971), "Differential Weighting in Integration Theory," Journal of Experimental Psychology, 89, 152-161.

Osgood, C. E., Suci, G. J. and Tannenbaum, P. H. (1957), The Measurement of Meaning, Urbana: University of Illinois Press.

Ostrom, T. M. (1967), "Meaning Shift in the Judgment of Compound Stimuli," Unpublished Manuscript, Ohio State University.

Schank, R. and Abelson, R. (1977), Scripts, Plans, Goals and Understanding, Hillsdale, N. J.: Lawrence Erlbaum Associates.

Srull, T. K. and Wyer, R. S. (1980), "The Processing of Social Stimulus Information: A Conceptual Integration," in Person Memory and Encoding Processes, Hastie, R., Ebbesen, E. B., Ostrom, T. M., Wyer, R. S., Hamilton, D. L. and Carlston, D. E. (eds.), Hillsdale, N. J.: Lawrence Erlbaum Associates.

Takahashi, S. (1970), "Analysis of Weighted Averaging Model on Integration of Information in Personality Impression Formation," Japanese Psychological Research, 12, 154-162.

Troutman, C. M., and Shanteau, J. (1976), "Do Consumers Evaluate Products by Adding or Averaging Attribute Information?," Journal of Consumer Research, 3, 101-106.

Wyer, R. S. Jr., (1974), Cognitive Organization and Change, Potomac, Maryland: Lawrence Erlbaum Associates.


