Improving Consumer Research Measurement: An Overview

Michael J. Ryan, Columbia University
ABSTRACT - This overview has three purposes. First, to explain the rationale for placing the following three papers in this Proceedings; second, to show how they fit together; and third, to elaborate on some details and provide a few additional sources of information.
Michael J. Ryan (1977), "Improving Consumer Research Measurement: An Overview," in NA - Advances in Consumer Research Volume 04, eds. William D. Perreault, Jr., Atlanta, GA: Association for Consumer Research, Pages: 392-393.



It is not unusual to find exhortations regarding the faulty measurement procedures prevalent in consumer research (cf. Jacoby, 1976; Kassarjian, 1971). In fact, this Conference has a competitive paper session entitled Issues in Measurement. Yet, much like the weather, there is more talk than action. Evidence of sound measurement procedures, which are basic to sound methodology, is usually lacking in consumer research. The 'why' of this situation is interesting, but important only in the sense that it provides guidance for 'what can be done about it,' since the obvious payoff is better quality research. I am reminded of a journal reviewer's comments to some colleagues that they should remove such worthless jargon as 'heterotrait-monomethod triangle' from a manuscript. This terminology, of course, is basic to anyone remotely concerned with construct validity. Yet the reviewer and editor were unfamiliar with the term. At the risk of overgeneralizing and oversimplifying arguments put forth in more detail in the following three papers, it may be that the Zeitgeist does not demand evidence of sound measurement procedures. In a publish-or-perish world, then, we may conclude there is simply no payoff. A possible reason for this situation is the preponderance of marketing people engaged in consumer research who have traditionally had more rigorous training in survey than in psychometric methods. In other words, there may be a general lack of appreciation for the increase in quality that can be obtained with minimal effort focused on tried and proven measurement procedures. The purpose of the following papers is to put these procedures on the record. That is, no attempt is made to further the field of psychometrics or to put forth the normative models found in most texts treating the subject. Rather, some suggestions are made as to what should realistically be expected of measurement procedures and what is likely to result.


A useful way of integrating the papers is to consider them in view of the portion of variance arising from a measuring instrument with which each deals. The classic psychometric model views these components as: [A more detailed but basic treatment including underlying assumptions is provided by Gulliksen (1950) whereas a more advanced treatment is provided by Lord and Novick (1968).]

σ²O = σ²T + σ²B + σ²E   (1)

where

σ²O = observed variance

σ²T = true score variance, or variance arising from the variable of interest

σ²B = systematic variance arising from variables other than those of interest

σ²E = random variance
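Although the papers themselves contain no computation, the decomposition in Equation 1 is easy to illustrate with a small simulation; the variance figures below are arbitrary choices for illustration, not values from any of the papers. When the three components are independent, the observed variance is (up to sampling fluctuation) simply their sum:

```python
import random
import statistics

random.seed(7)
n = 20000

# Independent components of Equation 1 (variances chosen arbitrarily):
true_score = [random.gauss(0, 2) for _ in range(n)]  # sigma^2_T = 4
bias       = [random.gauss(0, 1) for _ in range(n)]  # sigma^2_B = 1
error      = [random.gauss(0, 1) for _ in range(n)]  # sigma^2_E = 1

# Each observed score is the sum of its three components:
observed = [t + b + e for t, b, e in zip(true_score, bias, error)]

var_obs = statistics.pvariance(observed)
var_sum = (statistics.pvariance(true_score)
           + statistics.pvariance(bias)
           + statistics.pvariance(error))

print(round(var_obs, 2), round(var_sum, 2))  # both close to 4 + 1 + 1 = 6
```

If the components were correlated, cross-covariance terms would also enter the observed variance, which is precisely what makes bias difficult to separate from true score in practice.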

Peter's paper deals with random variance, which is defined as instrument unreliability. It is important to note that this source of random error arises from the instrument, not the sampling method, although both sources enter the random component of any statistical model based on correlation, regression, or analysis of variance. Thus, a complete lack of sampling error does not guarantee precise estimation unless random measurement error is also eliminated. As with sampling error, it is not likely that an instrument can be made completely reliable. However, Peter provides basic descriptions of, and numerous sources for, methods of reducing this error, which should lead to more accurate estimation. In those cases where excessive measurement error cannot be reduced, the researcher, by recognizing its existence, may have an explanation for poor results. For example, the degree of unreliability places an upper bound on correlation coefficients. Hence, the unreliability of the best available instrument suggests the magnitude of correlations that serve as adequate evidence for theoretical relationships. A more immediate payoff may be had by correcting correlations for unreliability, that is, by determining what the relationship between variables would be if the instruments were perfectly reliable (Gulliksen, 1950). In terms of Equation 1, an increase in reliability (lowering the error component) brings us closer to the true score, or at least guarantees a greater proportion of systematic variance (σ²T + σ²B).
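The two payoffs just mentioned, the reliability ceiling on correlations and the correction for attenuation, follow from classical formulas of the kind treated by Gulliksen (1950): the maximum observable correlation between two measures is the square root of the product of their reliabilities, and dividing the observed correlation by that bound estimates the correlation between perfectly reliable instruments. The numerical figures below are hypothetical:

```python
import math

def max_observable_r(r_xx, r_yy):
    """Upper bound that unreliability places on an observed correlation."""
    return math.sqrt(r_xx * r_yy)

def disattenuate(r_xy, r_xx, r_yy):
    """Classical correction for attenuation: estimated correlation between
    the variables if both instruments were perfectly reliable."""
    return r_xy / max_observable_r(r_xx, r_yy)

# Hypothetical figures: observed r = .40 between two scales whose
# reliabilities are .70 and .80.
print(round(max_observable_r(0.70, 0.80), 3))       # ≈ 0.748
print(round(disattenuate(0.40, 0.70, 0.80), 3))     # ≈ 0.535
```

An observed correlation of .40 thus looks far more impressive once we note that .748 is the best these two instruments could ever do.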

Leavitt's paper deals with that part of systematic measurement variance arising from bias. Whereas reliability explores the question of whether or not anything was measured, that is, whether the measure contains systematic variance, validity issues deal with the sources of systematic variance. More specifically, given that something has been measured, how can that something be used, and what is it? Referring again to Equation 1, it is obvious that isolating true score variance necessitates the removal of both random error and systematic variance due to variables other than those of interest. It is important to recognize that bias in an instrument can produce results more or less favorable than they should be. For example, response-set bias across instruments representing independent constructs may cause multicollinearity as a result of the method of measurement rather than theoretical relationships among the variables, thereby resulting in understated multiple Rs. Leavitt treats sources of bias as constructs in their own right, since this allows their identification, which in turn leads to methods for removing or isolating them.

Given that random error and bias have been removed or reduced, Shocker and Zaltman turn our attention to the true score. Validity issues traditionally deal with the purposes for which the instrument is appropriate (concurrent and predictive validity) and the identification of the variable causing the true score variance (face and construct validity) (Nunnally, 1967). In regard to the first issue, the failure to ask 'for what purpose is this instrument valid?' has led to considerable confusion in consumer research. For example, one frequently hears 'will the real multi-attribute model stand up?' This frenzied search for a 'correct' model is quixotic until we recognize that there are models within this general class that have different purposes. Excellent treatments of validity from the viewpoint of test purposes are provided by Ebel (1961) and Gulliksen (1950a).


Shocker and Zaltman summarize the flavor of all three papers by arguing, and providing examples suggesting, that it is realistic to expect explicit treatment of the issues raised in this session. They also predict that this will result in less quantity and more quality in research, a prophecy that depends upon the climate fostered by the Zeitgeist. No doubt there will be more reports of measurement statistics, especially reliability coefficients, as they become available in canned computer programs (Specht, undated). However, this does not guarantee that these methods will be properly applied, since both excellent and terrible examples of measure construction appear in consumer research. Bagozzi (1976a, b, 1977), for example, provides lucid philosophical discussions of these issues and demonstrates how covariance structure analysis, a recent and complex psychometric method, can be employed to examine the issues discussed in this session by partitioning the variance due to constructs, methods, and error. On the other hand, there are published consumer research papers reporting internal consistency estimates for one-item measures and direct operationalizations of hypothetical variables. An understanding of the nature of measurement (Torgerson, 1958, Chap. 1; Jones, 1971) would have prevented these misapplications. This session, by pointing to the importance and availability of measurement methodology, will hopefully result in more rigorous consumer research through more widespread interest in the nature of measurement.

REFERENCES

Richard P. Bagozzi, "Construct Validity in Consumer Research," Unpublished working paper, University of California, Berkeley, June, 1976a.

Richard P. Bagozzi, "Convergent and Discriminant Validity by Analysis of Covariance Structures: The Case of the Affective, Behavioral, and Cognitive Components of Attitude," in Advances in Consumer Research, Volume IV, ed. by William D. Perreault, Jr. (Atlanta: The Association for Consumer Research, 1977).

Richard P. Bagozzi, "Science, Politics, and the Social Construction of Marketing," in Marketing: 1776-1976 and Beyond, ed. by Kenneth L. Bernhardt (Chicago: American Marketing Association, 1976b), 586-92.

Robert L. Ebel, "Must All Tests Be Valid?" American Psychologist, 16(1961), 640-7.

Harold O. Gulliksen, "Intrinsic Validity," American Psychologist, 5(1950a), 511-17.

Harold O. Gulliksen, Theory of Mental Tests (New York: John Wiley and Sons, 1950).

Jacob Jacoby, "Consumer Research: Telling It Like It Is," in Advances in Consumer Research, Volume III, ed. by Beverlee B. Anderson (Cincinnati: The Association for Consumer Research, 1976), 1-11.

Lyle V. Jones, "The Nature of Measurement," in Educational Measurement, Second Edition, ed. by Robert L. Thorndike (Washington: American Council on Education, 1971), 335-55.

Harold R. Kassarjian, "Personality and Consumer Behavior: A Review," Journal of Marketing Research, 8 (November, 1971), 409-18.

Frederic M. Lord and Melvin R. Novick, Statistical Theories of Mental Test Scores (Reading: Addison-Wesley Publishing, 1968).

Jum C. Nunnally, Psychometric Theory (New York: McGraw-Hill Book Company, 1967).

David A. Specht, SPSS: Statistical Package for the Social Sciences Version 6 Users Guide to Subprogram Reliability and Repeated Measurements Analysis of Variance (Ames: Department of Sociology, Iowa State University, undated).

Warren S. Torgerson, Theory and Methods of Scaling (New York: John Wiley and Sons, 1958).