# When Do Advertisements Mislead the Consumer: an Answer From Experimental Psychology

Citation:

J. Edward Russo (1976), "When Do Advertisements Mislead the Consumer: an Answer From Experimental Psychology," in NA - Advances in Consumer Research Volume 03, eds. Beverlee B. Anderson, Cincinnati, OH: Association for Consumer Research, Pages: 273-275.

URL:

http://acrwebsite.org/volumes/9276/volumes/v03/NA-03

The work of Jacoby and Small (1975) on misleading drug advertising is extended. The recommended procedure includes an objective measure of misleadingness (misprescriptions), an amended, non-misleading control advertisement, and a statistical test. Several more sophisticated procedures are offered. The analysis separates the information-providing role of the social scientist from the value-judgment role of the policy maker.

Many public policy makers must answer the question: is a given product harmful to the public? For example, judgments must be made about the safety of products from toys to automobiles. If harm is judged to exist, the policy maker must select an action: a warning, withdrawal of the product, or restitution for damages.

The problem facing the policy maker may be partitioned into two parts. First, he must establish that the product causes harm. Second, corrective action must be based on the seriousness of the harm. It is essential to distinguish these two questions. Social scientists, serving as consultants to policy makers, are qualified to address only the first issue, the existence of harm to the consumer. The value judgments required to answer the second question, seriousness and corrective action, are the proper role of the elected or appointed policy maker. In keeping with this distinction the present paper is concerned only with deciding the existence of harm. One specific situation is considered, but policy makers should be able to generalize the proposed technique to their own policy setting.

MISLEADING ADVERTISING

The problem to be analyzed here is the identification of misleading advertisements. Jacoby and Small (1975) discuss this problem for the specific case of the advertising of drugs, especially to physicians in professional journals. This paper is an extension of their analysis of the problem.

Jacoby and Small point out a fundamental difference between the approaches of the Federal Trade Commission and of the Food and Drug Administration to the problem of misleading advertising. The former attempts to demonstrate the intention to deceive, often relying on the testimony of experts and the adversary procedure that characterizes the judicial process. In contrast, the FDA focuses on the effects of improper drug advertising, i.e., whether a physician reading the ad is misled. This latter approach has two primary characteristics. It is __consumer oriented__, and it utilizes __empirical findings__. It is consumer oriented because the central concern is whether harm was done to the consumer, such as by a physician's improperly prescribing the advertised drug. In contrast, the FTC's strategy of demonstrating the intent to deceive ignores the effect on the consumer. The FDA's approach can utilize empirical knowledge because the question of misleadingness can be answered empirically. In contrast, the question of deceit is determined by traditional legal procedures.

Jacoby and Small (1975) proposed the following empirical procedure for determining misleadingness. First, devise a measure of misleadingness that is __minimally susceptible to bias__. For example, rather than ask physicians to judge directly whether an advertisement is misleading, ask whether they would be likely to prescribe the drug for some inappropriate ailment. A positive reply would indicate that the advertisement has misled the physician. Second, because prior attitudes toward advertising or drug manufacturers may influence the tendency to perceive misleadingness (Haefner, 1972; Kottman, 1964), select a __representative panel of physicians__. That is, let misleadingness be estimated from a group representative of those likely to read and use the advertisement in question. Third, employ a __control advertisement__, which would be shown to a second representative panel of physicians. The control advertisement would be modified to eliminate the tendency to mislead. For example, the questionable advertisement could be amended by excising a claim or inserting a warning.

Based on such a procedure, Jacoby and Small propose the __n% criterion__. If more than n% of the physicians are misled, then the advertisement is judged to be misleading. As the authors acknowledge, this is only a partial solution since a criterion value of n% must still be chosen. The real problem is that a single value of n may not be equally suitable in all situations. For example, if the advertised drug were very powerful the rate of misprescriptions might be considerably lower than that of a less potent drug. Similarly, the chance of being misled may depend upon how much is known either about this class of drugs (e.g., side effects), or about the ailment in question. For all these reasons, variation in the n% criterion must be expected. The task is, in the face of this variation, to find a procedure for determining that percentage of misled physicians which implies that the advertisement is truly misleading.

In the next three sections, three different procedures for establishing misleadingness are discussed. Each new procedure increases the level of sophistication, at some cost in data collection and in transparency of the results to the policy maker.

PROCEDURE 1

This procedure relies on a control advertisement and a yes/no determination of whether a physician has been misled. The control advertisement would be as similar as possible to the purportedly misleading advertisement, except for the removal of the misleading aspects. To determine whether each physician tested was misled by exposure to either the real or control advertisement, a simple question could be asked. For example, "Would you prescribe this drug for Ailment X?" (Ailment X would be selected as that ailment for which misprescription would be most harmful and most likely.) Responses to such a question would yield the following data: P_{M} and P_{C}, the proportions of physicians misled by the real and control advertisements, respectively. Note that P_{M} is the n% of Jacoby and Small.

Given both P_{M} and P_{C}, standard statistical techniques for hypothesis testing can be employed. P_{C} is an estimate of the misleadingness that is due to chance alone, i.e., to all factors other than the misleadingness supposedly in the real advertisement. Any excess of P_{M} over P_{C} should be caused only by the misleadingness in the questionable advertisement. Using a standard test for the equality of two proportions, we can test whether the target advertisement is misleading. For example, let P_{M} = .15 and P_{C} = .02, where both proportions are based on the responses of 200 physicians. It can be shown that if we assume that there is no misleadingness (the null hypothesis), then the probability (p) of observing as large (or larger) a difference between P_{M} and P_{C} is .00029. Using the standard procedures of statistical hypothesis testing, we reject the null hypothesis and conclude that the target advertisement must have misled the physicians who read it.
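The test for the equality of two proportions described above can be sketched as follows. This is one standard variant (the pooled z-test); the exact p-value obtained depends on which variant is used. The figures .15, .02, and 200 are the paper's example; the function name is illustrative.

```python
from math import sqrt, erf

def two_proportion_z(p_m, p_c, n_m, n_c):
    """Pooled two-proportion z-test: is P_M significantly greater than P_C?"""
    # Pooled proportion under the null hypothesis of no misleadingness
    pooled = (p_m * n_m + p_c * n_c) / (n_m + n_c)
    se = sqrt(pooled * (1 - pooled) * (1 / n_m + 1 / n_c))
    z = (p_m - p_c) / se
    # One-tailed p-value from the standard normal survival function
    p_value = 0.5 * (1 - erf(z / sqrt(2)))
    return z, p_value

# The paper's example: 15% vs. 2% misled, 200 physicians per panel
z, p = two_proportion_z(0.15, 0.02, 200, 200)
```

With these inputs the difference is far beyond conventional significance levels, so the null hypothesis of no misleadingness would be rejected.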

Of course, it is the nature of statistical inference that one admits to less than absolute certainty in one's conclusion. Thus, there is a preset level of significance (α), such that if the observed value of p is less than α, one concludes that misleadingness exists. The α-level must still be set by judgment (that of the policy maker, not of the social scientist). Unlike the n% criterion, however, choosing an α-level is a well understood aspect of statistical practice.

The essence of this first procedure is the use of a control group to estimate a base rate. The reasoning involved is relatively simple and should be comprehensible to both policy makers and their constituents. For a similar application of the same techniques to a problem in criminal justice, see Russo (1975). Finally, note that the data, P_{M} and P_{C}, are also relatively easy to collect. One need only devise an instrument for a Yes/No determination of misleadingness.

PROCEDURE 2

Procedure 2 differs from its predecessor in one respect only. It uses a numerical measure of misleadingness. I will not attempt to describe in detail an instrument for generating such a numerical rating. It might be based on a total misprescription score over several ailments. Even simpler, instead of asking each physician for a Yes/No prescription decision, one might ask for the percentage of cases of Ailment X for which he believes the advertised drug should be prescribed. The same basic question is being used, but the response requires more information, a numerical estimate of prescription rate rather than only a Yes or No.

The availability of a numerical measure permits the use of more powerful statistical tests, notably the t-test. Whether or not this advantage outweighs the increased effort needed to collect the numerical estimates of misleadingness is a question best answered by experience. Only by pretesting an instrument on a sample of physicians can we determine whether the more valid responses will be given to the Yes/No or to the numerical versions of the question.
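As a sketch of Procedure 2, a two-sample t-test could be applied to the numerical prescription-rate estimates from the two panels. Welch's variant, shown here, does not assume equal variances; the sample values below are hypothetical, not data from the paper.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(sample_m, sample_c):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    n1, n2 = len(sample_m), len(sample_c)
    v1, v2 = variance(sample_m), variance(sample_c)  # sample variances
    se2 = v1 / n1 + v2 / n2
    t = (mean(sample_m) - mean(sample_c)) / sqrt(se2)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical estimates: % of Ailment X cases for which each physician
# would prescribe the drug, after seeing the real vs. the control ad
real_ad = [20, 35, 15, 40, 25, 30]
control_ad = [5, 10, 0, 15, 5, 10]
t, df = welch_t(real_ad, control_ad)
```

The resulting t would be compared against the critical value for the chosen α-level at df degrees of freedom.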

PROCEDURE 3

One of the most bothersome aspects of the empirical determination of misleadingness is the existence of a prior tendency toward judging an advertisement as misleading. As noted earlier, attitudes toward advertising, drug companies, and so forth may contribute significantly toward the tendency to judge any advertisement as misleading. Jacoby and Small (1975) handle this problem in two ways. First, misleadingness is determined from questions that are as objective as possible (e.g., specific prescription behavior) and not from questions that make it easy for the bias to enter. Second, both experimental and control groups are selected such that any prior bias toward misleadingness will occur equally in both groups. The combination of these procedures should be adequate to ensure the validity of either Procedure 1 or 2. Nonetheless, if additional protection against the effects of this bias is desired, a more sophisticated procedure can be employed.

The proposed procedure relies on a measure of the prior bias for each physician questioned. For example, one might ask, "What percentage of drug advertisements (in medical journals) do you consider to be misleading?" The responses to such a question can be used in several ways. First, one could check that the experimental and control groups were really balanced for prior attitude by performing a t-test on these data. Second, one could check the objectivity of the question that is used to determine the misleadingness of the target advertisement by testing for a relation between the measure of prior bias and the measure of misleadingness. If there is no relation, then the misleadingness question is truly objective. If there is a relation, then a matched t-test could be performed, with the physicians in the two groups matched according to their prior bias levels. Such a matching procedure removes the effect of prior bias on the misleadingness scores (similar to a repeated measures or within subjects design in the analysis of variance). This also enables a more sensitive detection of the existence of misleadingness.
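The matched comparison described above amounts to a paired t-test on the differences within bias-matched pairs. A minimal sketch, with hypothetical misleadingness scores (each index represents a pair of physicians matched on prior-bias level):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(scores_m, scores_c):
    """Paired t statistic on pairwise differences between matched physicians."""
    diffs = [m - c for m, c in zip(scores_m, scores_c)]
    # t = mean difference / standard error of the mean difference
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical misleadingness scores for bias-matched pairs
misled_scores = [30, 25, 40, 20, 35]    # saw the real advertisement
control_scores = [10, 15, 20, 5, 25]    # saw the control advertisement
t = paired_t(misled_scores, control_scores)
```

Because each pair shares the same prior-bias level, that source of variance drops out of the differences, which is what makes the matched test more sensitive.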

OTHER PROCEDURES

The preceding procedures are not exhaustive. The theory of signal detectability (Green & Swets, 1966; Coombs, Dawes & Tversky, 1970, Chapter 6) might be applied to this problem. The decision criterion would be stated in terms of d' rather than Z (or p). Because the signal detectability paradigm provides a systematic method for deciding if the specific questions being asked are biased for or against misleadingness, the prior disposition toward perceiving misleadingness could be investigated in more detail. One possibility is a multiple regression analysis for identifying physician characteristics (age, income, specialty) that contribute to a prior bias. The point of mentioning these techniques is not to recommend them at the present time, but rather to illustrate the range of empirical information that can be made available to the policy maker.
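For the signal-detection alternative, the sensitivity index d' is computed from the hit rate (target advertisement judged misleading) and the false-alarm rate (control advertisement judged misleading) via the inverse normal transform. A minimal sketch with hypothetical rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates: 80% judge the target ad misleading,
# 20% judge the control ad misleading
d = d_prime(0.80, 0.20)
```

Unlike a raw proportion, d' separates the physicians' sensitivity to misleadingness from their overall willingness to call any advertisement misleading, which is precisely the prior bias Procedure 3 tries to control.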

IMPLICATION FOR POLICY FORMATION

Two aspects of the procedures that have been presented are critical for the policy maker. First, all final judgments remain in the control of the policy maker. All he gets from the social scientist consultant is information that will help him make those judgments. For example, it can be determined (at specified levels of certainty) that __some__ misleadingness is or is not present in the target advertisement. The remedy adopted, however, is still up to the policy maker. If he feels that the harm caused is slight, simple withdrawal of the offending advertisement may be sufficient. If the harm is major, however, then corrective advertising could be required. The available options should not be reduced by the use of a sound, empirically based analysis of misleadingness.

The second important aspect of the proposed procedure is that the policy maker has several levels of sophistication to choose from. This advantage is often undervalued. The policy maker's freedom to choose any one of several techniques may be restricted by factors beyond his control. For example, in some situations it may not be possible to obtain a numerical measure of misleadingness. In such cases, only Procedure 1 is available. Alternatively, the policy maker's constituency, such as a higher policy body or the drug companies and their advertisers, may not comprehend and accept a more sophisticated procedure. The point here is that more than analytical constraints may affect the acceptability of the empirically based analysis that a policy maker can use. By offering several levels of sophistication, the present proposal does not confront the policy maker with a "take it or leave it" situation. Rather he can choose the level of information that will best serve a variety of policy making considerations.

REFERENCES

Clyde H. Coombs, Robyn M. Dawes and Amos Tversky, __Mathematical Psychology: An Elementary Introduction__ (Englewood Cliffs, New Jersey: Prentice-Hall, 1970).

David M. Green and John A. Swets, __Signal Detection Theory and Psychophysics__ (New York: Wiley, 1966).

James E. Haefner, "The Legal Versus the Behavioral Meaning of Deception," in M. Venkatesan (Ed.), __Proceedings__, Third Annual Conference for Consumer Research, 1972, 356-360.

Jacob Jacoby and Constance Small, "Deceptive and Misleading Advertising: The Contrasting Approaches of the FTC and the FDA," __Journal of Marketing__, 1975, in press.

E. John Kottman, "A Semantic Evaluation of Misleading Advertising," __Journal of Communication__, 14(September, 1964), 151-156.

J. Edward Russo, "A Statistical Analysis of Wiretap Evidence," unpublished manuscript, University of California, San Diego, 1975.
