Jagdish N. Sheth, University of Illinois
[ to cite ]:
Jagdish N. Sheth (1981), "Discussion," in NA - Advances in Consumer Research Volume 08, ed. Kent B. Monroe, Ann Arbor, MI: Association for Consumer Research, Pages: 355-356.

Advances in Consumer Research Volume 8, 1981      Pages 355-356



The three research papers in this session have very little in common except that they all address aspects of consumer information, as opposed to consumer choice behavior. Therefore, I will first discuss each paper separately and then offer some observations about consumer information as an area of research and theory. To conserve space, I will not summarize the findings and conclusions of each paper; instead, I will simply discuss its strengths and weaknesses.


The first paper, by Deshpande and Krishnan, on establishing public policy priorities for consumer information programs from market research findings, is a worthwhile and useful contribution to the discipline for at least three reasons. First, the study adds to our very limited substantive knowledge about the information needs of elderly consumers. Second, the authors provide a very neat conceptualization of information deficiency as a function of perceived need for information and perceived difficulty in obtaining it. In fact, the development of the Information Deficiency Index may be the single most important contribution of this paper. Finally, it represents a good application of Thurstone scaling procedures, which have been neglected in past consumer research.

The paper, however, suffers from the "analysis overkill" syndrome. First of all, do policy makers really need, or worse yet care about, the relative magnitudes of information deficiency across the nine buying experiences? Would they not be just as happy and content to know the ranking of the nine buying experiences? I seriously doubt that policy makers can even use the Thurstone interval scale points for budget allocation purposes, since they are arbitrary, lack an absolute anchor point, and are certain to shift if other buying experiences are added to the list.

Ironically, the authors already have a more meaningful and useful quantitative scale in their Information Deficiency Index. It is an absolute score ranging in value from one to three. Average or median scores of the nine buying experiences on the index represent aggregate market scale points that are at least interval scaled and, therefore, quantitative measures. In short, I fail to see any need for applying the Thurstone Case V model to these data.

Finally, there are many other, more meaningful ways to generate relative magnitudes of information deficiency from this data bank. For example, calculating the proportion of consumers who answered yes to both perceived need and access difficulty would provide a ratio scale that remains invariant to the addition of other buying experiences. Alternatively, one can calculate the mean deficiency score and perform a normal deviate analysis to generate standard scores of information deficiency for the nine buying experiences. This distribution will, of course, be subject to sampling problems, but with a large number of buying experiences it will stabilize as a normal distribution.
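To make these two alternatives concrete, the following is a minimal sketch with hypothetical toy data; the buying-experience labels, response values, and the assumption that the index equals one plus the two yes/no answers are my own illustration, not figures from the Deshpande and Krishnan paper.

```python
# Illustration (hypothetical data) of the two alternatives suggested above.
# Each respondent answers two yes/no questions per buying experience:
# perceived need for information and perceived difficulty obtaining it.
# Assumed deficiency index per respondent: 1 (no-no), 2 (one yes), 3 (yes-yes).

from statistics import mean, pstdev

# Toy data: {experience: [(need, difficulty), ...]} with 1 = yes, 0 = no
responses = {
    "auto repair": [(1, 1), (1, 0), (1, 1), (0, 0), (1, 1)],
    "health care": [(1, 0), (0, 0), (1, 1), (0, 1), (0, 0)],
    "housing":     [(0, 0), (1, 0), (0, 0), (0, 1), (1, 0)],
}

def deficiency_index(need, difficulty):
    """Score from 1 (no need, no difficulty) to 3 (both)."""
    return 1 + need + difficulty

# Alternative 1: proportion answering yes to BOTH questions -- a ratio
# scale, invariant to adding further buying experiences to the list.
yes_yes = {exp: mean(1 if (n and d) else 0 for n, d in resp)
           for exp, resp in responses.items()}

# Alternative 2: mean deficiency score per experience, standardized
# across experiences (normal-deviate / z-score analysis).
means = {exp: mean(deficiency_index(n, d) for n, d in resp)
         for exp, resp in responses.items()}
grand = mean(means.values())
sd = pstdev(means.values())
z_scores = {exp: (m - grand) / sd for exp, m in means.items()}

for exp in responses:
    print(f"{exp}: yes-yes={yes_yes[exp]:.2f}, "
          f"mean={means[exp]:.2f}, z={z_scores[exp]:+.2f}")
```

Note that the yes-yes proportion for any one experience is unaffected by what other experiences appear in the list, whereas the z-scores, like the Thurstone scale points, are relative to the set of experiences studied.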

A more interesting and useful analysis of the data would be to identify those elderly consumers who really need information. For example, it would be very interesting to profile the "yes-yes" group (score three) in terms of demographic and socioeconomic variables: do they come from certain ethnic minorities, or are they concentrated in some regions of the country? If our past research is any guide, it is very likely that there is a hardcore segment of elderly consumers that is most information deficient across all nine buying experiences. Public policy makers and society as a whole will be better off if consumer information programs are targeted toward those who really need them. Otherwise, we will keep repeating past program mistakes such as unit pricing, nutrition labeling, and truth-in-packaging. As I have recently stated, one unfortunate law of consumer behavior is that those consumers who need information do not use it (Sheth 1979); public policy efforts should therefore be targeted toward understanding these consumers and motivating them to use information.


The second research paper, by David Finn, is an interesting study of how inferential beliefs enable consumers to add to and subtract from what is communicated by the experimenter or the marketer. The study reminds me of the neglected concept of stimulus-as-coded (s-a-c) proposed in the Howard-Sheth theory of buyer behavior (Howard and Sheth 1969). According to that theory, prior attitudes and beliefs control the overt search, attention, and perceptual bias mechanisms through which the stimulus-as-presented is transformed into the stimulus-as-coded. Building on the stimulus-as-coded concept, I have also discussed how advertising affects the consumer, including many incidental, unintended, and negative effects (Sheth 1974). I am, therefore, not as surprised as David Finn that consumers use phantom information.

There are also several methodological and conceptual problems with this study. First of all, a post-test-only control group design simply cannot measure change, as the author asserts. It only measures differences between two groups, which is not the same thing as pre-post change. Second, although the control and test groups are randomly assigned and drawn from a demographically homogeneous population, this does not ensure that the two groups are psychologically homogeneous in terms of general toothpaste beliefs. Since the post-test control group is used as a benchmark, it is critical to validate that the test group has a comparable belief profile. One way to do this is to obtain population parameters for toothpaste beliefs and compare them with the sample statistics of the control group. Otherwise, one could easily obtain a very different set of results with another control group.

Finally, a closer examination of the mean differences among beliefs reveals that the greatest differences occurred for the most probable beliefs (seal of approval, about the same price) and the least probable belief (made with a new type of fluoride). Furthermore, lack of confirmation reduced the probability of the most probable beliefs (-2.391 and -1.153), whereas the highly assertive message increased the probability of the least probable belief (1.300). I regard this as evidence of what one would expect from all the cognitive consistency theories, including the cognitive dissonance, congruence, and assimilation-contrast models.

My own view is that projective techniques, rumor-transfer, story-telling and word association types of research may be more relevant if we want to know the information content of non-information.


It is amusing to critique a critique when you are not the author being critiqued! I am sure Peter Wright does not need me to respond to the criticisms leveled by Munch and Swazy. Nor am I eager to defend this research tradition, simply because it is statistically uninteresting and frustrating: it is loaded with correlations that gain significance from large samples rather than from any evidence of true relationships between traits and information processing.

I do agree with Munch and Swazy about three methodological and conceptual problems in Wright's study. First, there is a very definite social desirability bias in the scales designed to measure General Social Confidence (GSC) and Information Processing Confidence (IPC). This is easy to detect from the skewed distributions of subject responses. In fact, I will go one step further and assert that most correlations based on these scales are likely to be biased and inflated, thereby encouraging the experimenter to commit a Type I error (rejecting the null hypothesis when it is true).

Second, there is strong conceptual support for the multidimensionality of constructs as complex as GSC and IPC. If they are multidimensional, as Munch and Swazy claim from their own research findings, it is very difficult to make causal inferences from univariate analysis of variance techniques. One needs path analysis or structural equation modeling procedures to test such hypotheses.

Third, Munch and Swazy correctly point out the problem of intercorrelation between GSC and IPC. One can even argue that General Social Confidence (GSC) cannot be achieved by, or attributed to, someone who lacks Information Processing Confidence (IPC). In our society, individuals who think slowly, cannot concentrate, are at a loss for words, or are not quick-witted do not emerge as either self-confident or socially confident. I am, therefore, quite surprised that Wright chose to treat the two as separate and orthogonal factors in his research study.

The Wright study and its critique by Munch and Swazy once again point up the age-old admonition: measure independent constructs such as GSC and IPC at a level specific to the criterion situation. We must develop statements for both GSC and IPC that relate directly to advertising, or at least to consumer information, to ensure that both sides of the equation are at the same level of correspondence or specificity. Both personality researchers (Cohen 1967) and attitude researchers (Fishbein and Ajzen 1975) have belabored this point for many years; nevertheless, we seem to ignore their admonition.


Consumer information as an area of scientific research and theory is in its infancy in consumer behavior. Howard and Sheth (1969) pointed out, quite some time ago, that we know at least something about the learning constructs (attitudes, intentions, beliefs, and motives) but very little about the perceptual constructs (attention, ambiguity, perceptual bias, overt search). It is, therefore, premature to conduct deductive research in this area, either by developing theories or by borrowing constructs from other disciplines. Instead, we must do a considerable amount of empirical, inductive research. In short, we must learn how to crawl before we start walking or, worse yet, running.

Accordingly, we must lean toward more exploratory and qualitative research tools and tactics, such as focused group interviews, projective techniques, clinical methodologies, and unstructured surveys, rather than experimental designs, construct development and measurement, or laboratory studies.

In my opinion, consumer behavior as a discipline has already begun the shift from learning constructs such as attitudes and beliefs to the perceptual constructs such as consumer information's content, storage, retrieval and processing aspects. Contrast this year's ACR Proceedings with the proceedings of 1976 or 1977, for example.


Cohen, Joel B. (1967), "An Interpersonal Orientation to the Study of Consumer Behavior," Journal of Marketing Research, 4, 270-80.

Fishbein, M., and Ajzen, I. (1975), Belief, Attitude, Intention and Behavior, Reading, Mass: Addison-Wesley.

Howard, J. A., and Sheth, J. N. (1969), The Theory of Buyer Behavior, New York: John Wiley & Sons.

Sheth, J. N. (1974), "Measuring Advertising Effectiveness: Some Theoretical Considerations," Journal of Advertising, 3, 6-11.

Sheth, J. N. (1979), "Surpluses and Shortages in Consumer Behavior Theory and Research," Journal of the Academy of Marketing Science, 7, 414-27.