Validity



Citation:

Larry Light and Fran Kahn (1975), "Validity", in NA - Advances in Consumer Research Volume 02, eds. Mary Jane Schlinger, Ann Arbor, MI: Association for Consumer Research, Pages: 751-754.

Advances in Consumer Research Volume 2, 1975, Pages 751-754

VALIDITY

Larry Light, Batten, Barton, Durstine & Osborn, Inc.

Fran Kahn, Batten, Barton, Durstine & Osborn, Inc.

"That's interesting, but is it valid"? A statement often expressed by marketers. What does the marketer mean by "Valid"? he means, "is it right"? Will this technique produce information that leads to correct decisions? The key to the marketer's concept of validity depends on the decision the marketer plans to make. How will the technique be used? Let us note that the marketer may be concerned with predictive validity. That is, how good is this research technique in predicting the probable outcome of some decision? He may be concerned with concurrent validity. Does this measurement correlate with what is going on at this time? In this paper, we will examine predictive validity. There are several important points that are worth noting if we researchers are to succeed in improving the validity (predictive) of our research procedures.

We cannot avoid the fact that we are trying to predict the effect of alternative decisions. This is an important assumption. First, if there are no alternatives, who needs research to help make a decision? Second, the effects to be predicted must be specified in measurable terms. Third, the effects of the decisions may be masked by a complex mixture of contaminating variables. Careful research design is a must.

Trying to predict outcomes is the same as answering the question, "What will happen if ______?"

This is a reasonable test of validity, but we had better be careful. There is a lot of momentum in the marketplace: most decisions about individual components of the marketing mix may have a relatively small effect, and the effects of marketing decisions may take a long time to show up. We can easily be deceived. All we have to do is predict no effect. If we do not take care to obtain precise measurements (which may be expensive), we can be misled into believing our techniques are valid. To test the predictive validity of a research technique, we must predict changes in the criterion measure. The measure of predictive validity is the correlation of predicted changes with observed changes in the field criterion.
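To make that criterion concrete, here is a minimal sketch (in Python) of how such a correlation might be computed. The decisions and all numbers are hypothetical, invented purely for illustration; nothing here is taken from an actual validation study.

# Minimal sketch: predictive validity measured as the correlation between
# predicted and observed changes in a field criterion (e.g., brand share).
# All numbers are hypothetical illustrations.

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Change in the criterion predicted by the testing technique for each decision.
predicted_change = [0.8, -0.2, 1.5, 0.1, -0.6]
# Change actually observed later in the field for the same decisions.
observed_change = [0.6, 0.1, 1.2, -0.3, -0.4]

print(f"predictive validity (r) = {pearson_r(predicted_change, observed_change):.2f}")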

This kind of research is difficult, takes time, and costs money. This is unfortunate. But the fact is that we need to do validation of this kind, or we may be misleading a lot of people.

Since answering "What would happen if _______?" is difficult, many researchers have adopted another procedure which they call validation. They ask, "Does this technique correlate with some other technique?" One of the two techniques is commonly accepted, or has face validity. This test may, of course, be misleading, since two techniques may correlate, yet both may be invalid predictors.

Assume, however, that we have properly validated a research test. Now, we want to validate another.

Let's look at a common experiment. Suppose we were trying to determine the validity of a new commercial testing system. A frequently adopted procedure is to take a set of "high-low" pairs (obtained from testing on the accepted criterion system). These pairs of commercials are tested on the new system. The results look like this.

TABLE 1

Let us compare the results from system I and system II.

The new technique does seem valid. In five out of five cases the better commercial in a pair (as determined by system I) was correctly identified by system II. But is system II really valid?

It turns out that past experience with system I and system II permits us to classify the observed scores as "good," "average," or "poor." So we can re-examine the data in Table 1.

TABLE 2

Now we see a completely different picture. If a reasonable purpose of a commercial testing system is to identify potentially effective advertising, how good is the new system? We have four "good" commercials (according to the criterion system). The new system correctly identified only one. Furthermore, the new system misidentified two "average" commercials as good ones. This does not seem to be very valid, does it?
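A small numerical sketch (in Python) makes the contrast concrete. All scores, cutoffs, and category boundaries below are invented for illustration; they are not the data from the tables above.

# Hypothetical illustration of why a test can look valid for comparing
# alternatives (pairwise agreement) yet weak for evaluation (absolute
# classification). All scores and cutoffs are invented.

def classify(score, good_cut, poor_cut):
    """Label a score 'good', 'average', or 'poor' against its system's norms."""
    if score >= good_cut:
        return "good"
    if score <= poor_cut:
        return "poor"
    return "average"

# (system I score, system II score) for the "high" and "low" commercial in each pair.
pairs = [
    {"high": (78, 62), "low": (45, 51)},
    {"high": (74, 66), "low": (40, 48)},
    {"high": (71, 59), "low": (38, 44)},
    {"high": (69, 57), "low": (42, 46)},
    {"high": (52, 64), "low": (36, 41)},
]

# Screening test: does system II pick the same winner as system I in each pair?
pairwise = sum(p["high"][1] > p["low"][1] for p in pairs)
print(f"pairwise agreement: {pairwise} of {len(pairs)}")

# Evaluation test: does system II assign the same category, commercial by commercial?
matches, total = 0, 0
for p in pairs:
    for which in ("high", "low"):
        s1, s2 = p[which]
        c1 = classify(s1, good_cut=70, poor_cut=45)  # norms assumed for system I
        c2 = classify(s2, good_cut=65, poor_cut=45)  # norms assumed for system II
        matches += (c1 == c2)
        total += 1
print(f"category agreement: {matches} of {total}")

In this made-up example the new system picks the same winner as the criterion in every pair, yet agrees with the criterion's good/average/poor labels on fewer than half of the individual commercials.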

Yet, this is the same technique that previously seemed so valid. Is the test valid or not? The answer, unfortunately, is both yes and no.

The new test is valid if the purpose of the test is to compare alternatives. Assume that a marketer has a pair of commercials and must run one of the two. The new system is valid for this purpose: it is acceptable for comparing alternatives.

But, what if the marketer wants to evaluate a given commercial? He wants to know if it is good. The new system does not seem to be acceptable for this purpose.

So, the researcher is faced with a dilemma. Should he recommend the use of the new system? The answer is that the new testing system may be recommended as a technique for screening down a set of alternatives, but there is no basis for recommending the new system for purposes of evaluation.

Valid? For what purpose? Screening? Or evaluation? The proper answer to the validity question depends on a proper answer to these questions.

This example of a validation study assumed that we had some knowledge of the predictive validity of a criterion system. Unfortunately, this is not generally the case. The reason usually given for the lack of predictive validity data is that it costs money.

The burden of this cost has traditionally been placed on the shoulders of the "buyers" of testing techniques. They have had little choice. Purveyors of new techniques have shown little or no willingness to invest in properly designed validation studies. So the buyer, the user, must either do his own validation study or have lots of faith. The "burden of proof" that a testing system is valid seems reasonably to rest with the seller, not with the buyer.

Sure, it costs money to validate. But this cost can be amortized over future uses of the testing system. More importantly, we should not ask, "What does validation cost?" We should ask, "What is validation worth?"

It's worth a lot to companies like Du Pont, Scott Paper, General Foods, Pillsbury, and others.

It's worth a great deal to know whether a technique is valid. It may be worth even more to know that a technique you may wish to use is invalid. Validity: what's it worth to you?
