Methodology and Meta-Methodology in Consumer Research: a Commentary

Peter D. Bennett, The Pennsylvania State University
Peter D. Bennett (1981), "Methodology and Meta-Methodology in Consumer Research: A Commentary", in NA - Advances in Consumer Research Volume 08, eds. Kent B. Monroe, Ann Arbor, MI: Association for Consumer Research, Pages: 568-569.




When one is asked to serve as a discussant in a session such as this one, there is a tendency to search for a common thread that holds it all together. That thread is there, though it is weak. In these brief comments, I will try not to stretch it to its breaking point.

All three papers plead for creative insight into new or revised methodologies, or in one case make an even more fundamental plea that we open our attention to new or different approaches to consumer research--our philosophies of science. All argue, more or less persuasively, that consumer researchers should not only look hard at our methodological biases and proclivities, but also develop new methodologies to fit unique problems that lie beyond the evaluation of individual choice behavior.

But there the similarities end, so it seems the better part of wisdom to look at the three papers separately.

Hutton and McNeill

As a case study in the almost overwhelming difficulties of the evaluation of the impact of social programs, the paper is enlightening on the one hand, but frightening on the other.

While the paper really adds little in a methodological sense, it demonstrates a well conceived application of appropriate methods to the special problems of evaluation of a public domain program of some complexity. The contribution, then, lies in its treatment of the special problems of such public program evaluation. By that, I mean, were this research associated with a test market for a new brand of some existing product class, or even a new product, the "end-result" (how much did we sell?) as well as what Hutton and McNeill call "diagnostic" and "formative" issues would surely have been included by most sophisticated industrial researchers.

The authors point to a few of the special problems of the research domain. For one, they say that there are special problems of researcher/decision maker interaction because, "The policy maker does not have adequate training to judge research methodology and policy makers are, at the very least, hesitantly supportive of evaluation." How serious a problem this is, and how much it differs from the interaction between researchers and, say, brand managers, probably varies. In any case, it's not really a methodological issue, but a political one. In short, while it may be a problem, it doesn't really need to be one.

They also point to a timing problem: evaluation comes late in the program design, and decisions must be taken before the results are available. This, too, is not a methodological problem, but one of planning. Third, the problem they point to of the "orientation" of evaluators toward survey and single-measure research is not methodological; it is really a problem of education.

Having been involved myself in research where the users were in the public sector, I sympathize with the authors' problems, and am grateful for their strong argument that these kinds of problems need to be overcome. We should also be grateful for their public report of an example of a real program evaluation study which tries to overcome some of them. Their research, insofar as the "end-results" portion is concerned, is well designed and executed. The "diagnostic" part of the research is done perhaps as well as could feasibly be done in the field setting of such a quasi-experimental design. All alternative explanations, e.g., some inherent differences between New Englanders and New Yorkers, might have been more thoroughly explored, given, of course, adequate time and resources.

Dardis and Stremel

The paper on risk/benefit analysis in relation to product safety is, on the one hand, both extremely interesting and methodologically "tight." On the other, it is sufficiently marred with unsupported (perhaps unsupportable?) assumptions that it would be frightening if it were actually used for making public policy decisions. The authors argue irrefutably both for the need for a risk assessment methodology, and for such a methodology to consider benefits as well as risks.

The methodology suggested in the paper is intriguing, and I hope it will stimulate considerable comment and debate, because the topic is one of such critical importance. Dardis and Stremel conclude that, "the proposed method for assessing risk appears feasible." Whether or not it will eventually emerge as meaningful and operational requires answers to a number of questions. Let's take as given that:

Risk/Benefit = (Pf Cf + Pn Cn) / V, and

Nf Cf = Nn Cn


where each term is as defined in their paper.

Since each Pi is really Ni/Qi, we need to examine the assumptions underlying this relationship. The value of Q is rooted in dollars spent on product i (not actual quantity of items) during one year. This assumes each product has a life of one year (or that the durability of the products is equal). Our own experience leads us to question this assumption--nearly always my pants outlast my shirts, and my wife's robes seem always to outlast her gowns. I have no solid empirical evidence on the differences in clothing turnover rates, but the assumption of equal life seems questionable.
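To make the objection concrete, a minimal sketch follows. All figures are hypothetical (they are not from Dardis and Stremel's data), and the function name and the durability adjustment are my own illustration, not the authors' method: the point is only that when Q is measured as one year's dollar spending, two products with identical injury counts and spending look equally risky, even though the more durable product has a larger stock in use and thus more true exposure per dollar of annual sales.

```python
# Hypothetical illustration of the Pi = Ni/Qi assumption discussed above.
# All numbers are invented; the durability adjustment is my own device
# for showing what the equal-life assumption hides.

def injuries_per_exposure(n_injuries, dollars_spent_per_year, product_life_years):
    # Q in the paper is one year's dollar spending. If a product actually
    # lasts `product_life_years`, the stock in use (the true exposure base)
    # is roughly dollars_spent_per_year * product_life_years.
    true_exposure = dollars_spent_per_year * product_life_years
    return n_injuries / true_exposure

# Same injury counts and the same annual spending, different durability:
shirts = injuries_per_exposure(n_injuries=50, dollars_spent_per_year=1_000_000,
                               product_life_years=1)
pants = injuries_per_exposure(n_injuries=50, dollars_spent_per_year=1_000_000,
                              product_life_years=3)

# Under the equal-life assumption both rates would be identical; adjusting
# for durability, the more durable product's rate per unit of exposure
# falls to one third.
print(shirts / pants)  # roughly 3.0
```

The arithmetic is trivial, but it shows why the choice of exposure base is not: the ranking of products by risk can change merely by substituting stock-in-use for annual sales.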

Since Ni is the incidence of consumers being burned, it would appear to include burns received in a fiery auto accident or a home fire, neither of which says anything at all about the safety of the garments the victim is wearing; yet these are lumped together here with burns that are so related.

The serious problem with V is that it treats benefits as value in exchange rather than value in use, and the footnote doesn't adequately deal with the problem. It is the use of a product, not its ownership, that exposes one to risk. One may own a large number of shirts, each one of which is used, say, once every two weeks, while one's robe is used almost daily. Again, we are without data, but the assumption built into the operationalization is open to serious question.

Finally, and perhaps of greatest concern, is the proposed operational treatment of the cost constructs. Such an over-simplistic, purely economic approach is flawed on both logical and political grounds. The openly stated assumption is that the ranking of products is not invalidated by ignoring "pain and suffering" costs if those costs are proportionate to the sum of direct and indirect costs. Since one major component of economic costs is the present value of future earnings, this view assumes that the loss suffered by the widow of a $100,000-per-year ad agency executive is greater than that experienced by the widow of a $25,000-per-year consumer researcher, and infinitely greater than that of the widower of an unpaid housewife. It is an assumption that is hard to swallow on logical grounds, and one that the NAACP and NOW are likely to see as untenable.

Perhaps what we have in this proposed methodology is a way of comparing the relative risks/benefits across products to society, or more coldly, to the economy. However, the whole purpose behind product safety intervention is to protect individuals, not the economy. It seems to me that a major contribution of this paper is to make clear that the methodology that should eventually be used is not likely to rest on readily available secondary data sources. The contribution lies in identifying those constructs for which we will need data in order to do risk/benefit analyses. The challenge will be to develop the appropriate operationalizations, which will no doubt be an extremely complex and arduous task. That is why I said earlier that I am pleased this paper is here, and why I hope it will stimulate comment and debate leading to those badly needed research efforts.

Uusitalo and Uusitalo

The final paper calls us to task at a much more fundamental level. The authors argue cogently for researchers in our young discipline to slip the bonds of a philosophy of science with roots in the physical sciences. I can remember when it was next to impossible to study the philosophy of science without using texts written by physicists, and for someone who knew nothing about physics, that was no easy task.

The authors have prepared some truly challenging suggestions with which I find myself largely in agreement. To the extent that researchers in our discipline are wedded to the tenets of logical positivism, we are losing opportunities to explore approaches to research questions which might be very fruitful.

Uusitalo and Uusitalo also suggest that we have continued to be wedded to research approaches that are "traditional" to either economics or psychology. If they were saying that we are guilty of borrowing theories and concepts from those mother disciplines and applying them blindly to consumer behavior phenomena, our reaction could legitimately be "ho-hum." But that is not what I hear them saying. What I hear is something much more fundamental. I have a friend who is a very fine consumer psychologist. When faced with a research issue of interest to him, he will study it, analyze it, and conceptualize the research in terms of the results as they would appear in an ANOVA table. What I hear the authors saying is that there are other, some radically different, ways to conceptualize both the problem and the entire approach to the research the problem calls for. Let us open ourselves up to those alternative approaches.

I found the paper very difficult to read, and I think the reason is that it says so much in so few words. Had the authors had the opportunity to expand their arguments and illustrations to three or four times the paper's current length, it would no doubt be much clearer. I hope this does not stand in the way of lively debate and response to these proddings. For instance, would following this advice lead us to a research tradition in consumer behavior even more eclectic than the one we have now? And would that be functional or dysfunctional? Many of us could take either side of that question and have a debate that would shed more light than even the "debates" between presidential candidates. While some may find these authors poking at their sacred cash cows, I think we should thank them for doing the poking.