Recent Controversy in Washington: An FTC Case

ABSTRACT - A recent FTC case provided the opportunity to review three important aspects of survey research employed in legal proceedings -- the role of confidentiality of survey responses in determining the admissibility of surveys as evidence, a definition of the independence of surveys, and the definition of survey response rates.


Irving Roshwalb (1981), "Recent Controversy in Washington: An FTC Case", in NA - Advances in Consumer Research Volume 08, eds. Kent B. Monroe, Ann Arbor, MI: Association for Consumer Research, Pages: 276-280.



Irving Roshwalb, Audits & Surveys, Inc., New York, N.Y.




The "recent controversy" of the title involved the FTC's complaint that the survey findings used by a division of Litton Industries in its advertising were based on surveys which "...did not provide a reasonable basis for or prove the claims of the advertisements" (USA-FTC, p.5).

Later, the complaint goes on to say, "The Litton surveys had a very high rate of non-response. However, Litton failed to determine whether there was a bias of non-response, that is, whether the answers of non-respondents would have differed significantly from those of respondents" (USA-FTC, p.6).

This is, for those interested in the use of survey findings in legal proceedings, a very lively case. It presents several interesting aspects of the place of survey research in such proceedings. One of these is the problem of non-response and how it should be reported. Another deals with the terms under which a study should be deemed admissible in legal and/or regulatory proceedings. Finally, a third deals with the definition of an "independent" survey. The issues of survey "admissibility" and "independence" are discussed briefly before turning to the non-response problem.


At one point in the hearings, the judge was asked to deny the admissibility of an FTC-sponsored survey on the grounds that there was no way for Litton counsel to identify the individual respondents and connect them with their questionnaires. The ruling of Judge John Mathias was quite to the point:

"...I find that the admissibility of surveys depends on their relevancy and trustworthiness and not upon whether respondents gain access to the codes identifying survey-respondents with their respective questionnaires" (USA-FTC Order, p.1). [To keep the language straight, "respondent" in this context refers to those answering the FTC charges; "survey-respondent" refers to those interviewed as part of the studies.]

This statement parallels the "Federal Rules of Evidence", which state, in part, "Attention is directed to the validity of the techniques employed rather than the relatively fruitless inquiries whether hearsay is involved" (West Publ. Co. 1977).

What is of greater interest is that the order then proceeds to give the reasons for this finding, stressing the confidentiality of the relationship between a survey agency and its respondents. It is worth quoting in full:

"... Both (...survey agencies...) claim not only that they have given pledges of confidentiality to the interviewees, but that to break such confidentiality would be against the ethics of their profession and would be detrimental to the efficacy of future surveys to be conducted by themselves and others of their profession...there is some legal support for the surveyors' claims in this regard. Although there is certainly no pollster-client privilege, it is recognized that such claims of confidentiality cannot be lightly brushed aside. Where they can be recognized without depriving a litigant of discovery adequate to fairly meet his opponent's case, due deference should be given to legitimate claims of confidentiality" (USA-FTC, pp. 6-7).

The concluding argument was:

"Since we are so dependent on surveys for much of the data upon which daily decisions are made, every attempt should be made to protect their accuracy. Yet the pollsters depend, to a great extent, on assurances of confidentiality to promote the efficacy of their product. If the survey-respondents believe they are going to be routinely re-interviewed and cross-examined in connection with any poll in which they participate, they will either refuse to participate or be guarded in their answers. In either event the survey method of gathering information is impaired. If enough people become so disenchanted with surveys that they refuse to participate, the ability to obtain a meaningful universe is seriously affected. If their answers are not free and open, the results of the poll are severely twisted" (USA-FTC, p.8).


A second charge leveled against the advertising was that some of the ads carried the misleading notation that the surveys were independently conducted. The studies were actually designed by Litton and then turned over to an outside agency to execute the field work and prepare the tabulations. The judge's comments are again worth quoting:

"In either case (whether or not the ad carried the 'independent' notation) I find that the reader was not likely to believe that the Litton surveys were totally independent. It is difficult to perceive how any reader of the advertisements in question could possibly believe that the surveys were conceived, designed and conducted without any input by Litton, in view of their narrow focus. Further, the contact part of the surveys --which might be thought of as the 'conduct' of the surveys -- was, in fact, conducted independently by (... the survey agency ...)" (USA-FTC, p.6).

This represents a judicial attempt to deal with the concept of an "independent survey" by pointing out that independence is not an absolute judgment but one that must be dealt with on some relative scale and reasonably.


The discussion of non-response in the affected studies was part of a much broader discussion of whether the studies were conducted according to the standards of the industry. The use of surveys in legal proceedings is generally governed by this concern. The U.S. Judicial Conference has recommended that the party offering the survey in evidence has the burden of establishing "...that the survey was conducted in accordance with accepted principles of survey research" (McCarthy 1973, p.508). One of the areas on which evidence would be required is that, "the sample, the questionnaire, and the interviewing were in accordance with generally accepted standards of objective procedure and statistics in the field of such surveys" (McCarthy 1973, p.509). It would appear, however, that defining "generally accepted standards" or "accepted principles" of survey research can be a difficult task.

Before turning to the discussion of the non-response rates in the studies directly involved in this case, a brief description of the two studies would be useful. One called for interviewing technicians working for independent companies that service consumer microwave ovens; the other called for interviewing technicians working for independent firms that specialize in servicing commercial microwave ovens. Lists of the service agencies were provided by Litton Industries to the survey agency that conducted the field work. The survey agency's role in the study was to execute the field assignment by telephone interview and to provide tabulations of the findings.

In the "Statement of Issues" section of the complaint filed by the FTC, the following charge was made:

"II. The surveys do not provide a reasonable basis for or prove the claims of the advertisements.

2. The Litton surveys suffer from basic deficiencies in survey execution.

B. The Litton surveys had a very high rate of non-response. However, Litton failed to determine whether there was a bias of non-response, that is, whether the answers of non-respondents would have differed significantly from those of respondents" (USA-FTC, p.6).

The basis for this charge was obtained from the report summaries prepared by Litton Industries. In one instance, the report stated that the response rate for the Consumer Microwave Oven Technician Survey was approximately 47% (234 divided by 500) of the sample. In the second survey, the finding was that among commercial microwave oven technicians the response rate was 42.2% (211 out of 500).

The available data on sample disposition for the two surveys are contained in the following table. The first column of data refers to the Consumer Microwave Oven Technician Study while the second refers to the Commercial Microwave Oven Technician Study.

                                      Consumer    Commercial
  Completed interviews                   234          211
  Contacted, not qualified                80           95
  Refused                                 15           25
  Could not be reached                   171          139
  Disconnected, out of business           --           30
  Total sample                           500          500
It is clear from a review of this table and from the charges made by the FTC that the calculation of the response rate contained in the charges was more like that of a "hit rate", as much a measure of the quality of the list, perhaps, as of the quality of the effort expended to complete the interviews.

There are at least two entries in the table that should not be ignored in the calculation of response rates. The first is the listing of 'not qualified' persons. As far as evaluating the effort put into completing the interviewing assignment is concerned, these 'not qualified' respondents represent successful conclusions of interview attempts, at least from the point of view of the interviewer. If we consider those as successful attempts in the Consumer Technicians Survey, then the calculation of the response rate for this group, R(C1), is

R(C1) = (234+80)/500 = 314/500 = 62.8%.

Before we attempt a similar calculation for the commercial technicians, we have to consider what to do with the 30 telephone numbers which were found to be 'disconnected, out of business'. These 30 numbers are not part of the universe. Eliminating them from the base yields a response rate for the commercial technicians, R(CM1), of

R(CM1) = (211+95)/(500-30) = 306/470 = 65.1%.

A second approach is to measure response only among those who qualify. The rationale here is, of course, that it is only those who qualify who are the object of the research and only a measure of success among this group is reasonable and appropriate. Using this definition of non-response, a second response rate among the consumer technicians, R(C2), may be calculated --

R(C2) = 234/(234+15+171) = 234/420 = 55.7%.

The comparable calculation among the commercial technicians, R(CM2), is

R(CM2) = 211/(211+25+139) = 211/375 = 56.3%.
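These calculations are simple enough to reproduce directly. A minimal sketch in Python, using the disposition counts quoted above (the dictionary keys, including the 'refused' label, are my own naming, not the survey agency's categories):

```python
def response_rate(successes, base):
    """Response rate as a percentage, rounded to one decimal place."""
    return round(100 * successes / base, 1)

# Sample dispositions reported in the text
consumer = {"completed": 234, "not_qualified": 80, "refused": 15, "not_reached": 171}
commercial = {"completed": 211, "not_qualified": 95, "refused": 25,
              "not_reached": 139, "disconnected": 30}

# R1: count 'not qualified' contacts as successful attempts; for the commercial
# survey, drop the 30 disconnected numbers from the base (outside the universe).
r_c1 = response_rate(consumer["completed"] + consumer["not_qualified"], 500)
r_cm1 = response_rate(commercial["completed"] + commercial["not_qualified"], 500 - 30)

# R2: measure response only among those who qualify (or are presumed to).
r_c2 = response_rate(consumer["completed"],
                     consumer["completed"] + consumer["refused"] + consumer["not_reached"])
r_cm2 = response_rate(commercial["completed"],
                      commercial["completed"] + commercial["refused"] + commercial["not_reached"])

print(r_c1, r_cm1)  # 62.8 65.1
print(r_c2, r_cm2)  # 55.7 56.3
```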

A third definition of response rate would now argue that the 171 consumer technician telephone numbers that were classified as 'nonreachable' and the 139 commercial technician numbers that 'could not be reached' should not all be judged as non-respondents. Some portion of them, could they be reached, would end up as unqualified. One estimate of how many should be so classified is obtained by taking the ratio of non-qualified among those who were reached and applying this ratio to those who could not be reached. First, among consumer technicians:

The proportion estimated to be unqualified among the 171 who could not be reached is the unqualified proportion observed among those who were reached,

80/(80+15+234) = 80/329 = 24.3%.

Therefore, the estimated number of the 171 who could not be reached who should be counted as non-respondents is (1.000-.243)171 = (.757)(171) = 129. The calculation of the response rate among consumer technicians, R(C3), then becomes

R(C3) = (234+80)/(234+80+15+129) = 314/458 = 68.6%.
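The two-step estimate can be sketched the same way, using the consumer-survey counts from the text (the variable names and the intermediate rounding are my own choices, made to match the figures above):

```python
completed, not_qualified, refused, not_reached = 234, 80, 15, 171

# Proportion unqualified among those actually reached
p_unqualified = not_qualified / (not_qualified + refused + completed)  # 80/329

# Estimated non-respondents among the 171 never reached,
# rounding the proportion to three places as in the text: (.757)(171) = 129
est_nonrespondents = round((1 - round(p_unqualified, 3)) * not_reached)

# R3: unqualified contacts count as successes; the not-reached are discounted
r_c3 = 100 * (completed + not_qualified) / (
    completed + not_qualified + refused + est_nonrespondents)

print(est_nonrespondents, round(r_c3, 1))  # 129 68.6
```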

Among the commercial technicians, in a similar manner, the 139 who could not be reached are reduced to an estimated 99 non-respondents. The calculation of the response rate incorporating this estimate is

R(CM3) = (211+95)/(211+95+25+99) = 306/430 = 71.2%.

The following table summarizes these calculations.

                  Consumer    Commercial
  'Hit rate'        46.8%        42.2%
  R1                62.8%        65.1%
  R2                55.7%        56.3%
  R3                68.6%        71.2%
Clearly, the lowest value, that corresponding to the 'hit rate', is the least appropriate measure of response rate. In this study, either R1 or R3 is more appropriate. In both of these instances the respondents who are contacted and who prove to be unqualified for the purposes of the study are given fair representation in the calculation of the response rate. R1, the estimate that counts as successful interviews those contacts that establish the respondent as 'not qualified', is probably the better of the two, since it involves fewer assumptions. R2, by contrast, simply assumes that all who are not reached are non-respondents, even though some of them would -- if ultimately contacted -- prove to be unqualified.

The need to find a qualified respondent via some form of preliminary screening doesn't appear in all telephone surveys. In many studies, any adult member of the family is eligible for interview. As the screening procedure becomes finer, the incidence of unqualified (i.e., ineligible) respondents increases, and the problem of coping with them in the calculation of non-response becomes more acute.


One of the difficulties facing an expert witness in a case such as this is to testify on industry standards. Wiseman and McDonald comment in their study of response rates that "there is a lack of agreement among industry leaders regarding terminology and reporting procedures" (1978). The exercise we have gone through is a reasonable display of the variety of response rate calculations that are available, all of them based on reasonable definitions of what it is we are trying to measure. In many instances, the major problem is whether response rates are reported at all, and, if reported, whether they are clearly defined.

There is one more example of response rate reporting that should be considered. Suppose the interviewing assignment requires that a given number of interviews be completed in each of several strata. Suppose further that the response rate varies among strata. In a simple example, the following table summarizes response rates and the required initial sample sizes needed to achieve the fixed number of completed interviews (in this case 10) in each stratum. Assume that the strata sizes are equal.

  Stratum    Response Rate    Completed Interviews    Required Initial Sample
     1            10%                  10                      100.0
     2            20%                  10                       50.0
     3            50%                  10                       20.0
     4            80%                  10                       12.5
  Total                                40                      182.5
The simplest reading is that the response rate for this survey is 40/182.5 = 21.9%. If the assignment were stated a bit differently - as, for example, to estimate the response rate in the universe from which the sample was drawn, the solution would be to assign equal numbers of telephone numbers to call in each stratum, observe the response rates in each, and then construct the appropriate estimate of the universe parameter, as in the following table.

  Stratum    Assigned Numbers    Response Rate    Expected Completed Interviews
     1             25                 10%                    2.5
     2             25                 20%                    5.0
     3             25                 50%                   12.5
     4             25                 80%                   20.0
  Total           100                                       40.0
The response rate calculation for this set of data is 40/100 = 40.0%. In fact, the first procedure produces the harmonic mean of the strata response rates, while the latter calculation yields the arithmetic mean. The harmonic mean is never greater than the arithmetic mean, so whenever the strata rates differ the first procedure will understate the response rate.

This example lays bare another source of confusion in the calculation of response rates. Is the purpose of the calculation a descriptive, mechanical exercise to run the observed data through, or is it to estimate the response rate in the universe from which the sample data have been drawn? The first is characterized by a (perhaps apocryphal) researcher's comment made in the wake of the 1948 election poll experience that "we have no non-response problem. If we promise 100 interviews, we keep on going until we have 100 interviews." That response rates are governed by some probability mechanism has long been recognized, particularly in the procedures proposed by Hartley and by Politz and Simmons to weight for not-at-homes by the inverse of the probability of finding respondents at home (Hartley 1946; Politz and Simmons 1949).

In this simple example, which calls for a fixed number of interviews per stratum where response rates may vary among strata, the mechanical calculation of response rates results in an underestimate of the response rate for the universe of the study. However, the simple arithmetic mean of the strata response rates yields an unbiased estimate of the universe response rate. So, not only must the elements of the response rate be properly designated, but the estimation procedure itself must be properly selected.
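The contrast between the two estimation procedures can be checked numerically. The strata response rates below are illustrative values chosen to be consistent with the totals in the text (40 completed interviews requiring 182.5 initial numbers); any set of unequal rates makes the same point:

```python
rates = [0.10, 0.20, 0.50, 0.80]  # illustrative strata response rates
quota = 10                        # completed interviews required per stratum

# Fixed-quota procedure: keep assigning numbers until each stratum has its
# 10 completes; the 'mechanical' rate is total completes over total assigned.
assigned = sum(quota / r for r in rates)           # 100 + 50 + 20 + 12.5 = 182.5
mechanical_rate = (quota * len(rates)) / assigned  # this is the harmonic mean

# Equal-allocation procedure: same number of calls per (equal-sized) stratum;
# the universe estimate is the simple arithmetic mean of the strata rates.
arithmetic_rate = sum(rates) / len(rates)

print(round(100 * mechanical_rate, 1))  # 21.9
print(round(100 * arithmetic_rate, 1))  # 40.0
```

The fixed-quota calculation weights each stratum by the effort it takes, which is the inverse of its rate; that is exactly the harmonic mean, and why it understates the universe response rate.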


A number of things are quite clear as one considers the experience in this FTC case. This is an example of the ever-growing use of survey data by the parties to legal proceedings. As survey evidence finds wider usage in such matters as trademark cases, advertising substantiation and the support of product claims, the reviews of these documents can be expected to become more critical.

Creating and adhering to standards of survey research are going to become increasingly important to the users of survey research and, therefore, to the practitioners of the research art. And I use the word 'art' advisedly. It was not without careful thought that Stanley Payne named his book, "The Art Of Asking Questions" (1951).

The difficulty in setting standards for response rates is the temptation to be dogmatic, that is, to establish by fiat that response rates below some arbitrary level are inadequate or unacceptable. That is the easy way out. The FTC charge in this case states explicitly what concerns us when we discuss response rates. To repeat the relevant sentences from the charge, "The Litton surveys had a very high rate of non-response. (Forget for the moment that the calculation actually used was inappropriate. The next sentence is what counts.) However, Litton failed to determine whether there was a bias of non-response, that is, whether the answers of the non-respondents would have differed significantly from those of respondents" (USA-FTC, p.6).

Of course, that's where the answer to the problem lies, regardless of the rate of non-response. In most surveys, while this solution is implicitly recognized, it rarely is investigated thoroughly. For one thing, the nature of the non-response beast is that it doesn't make itself readily available for measurement. In any case, time and budget constraints usually preclude much effort in this direction. But there are other things that can and should be done. The first of these is to report the non-response rate, along with a precise definition of how it was calculated. The second is to state what the effect of the non-response could be, given some reasonable assumptions about the range of possible differences between respondents and non-respondents. One such solution is contained in the nomogram reproduced below (Roshwalb 1970, p.17).


For a stated response rate and the observed binomial proportion in the respondent sample, the nomogram yields directly the lower limit of the total-sample estimate (i.e., respondents plus non-respondents), obtained when the binomial proportion among the non-respondents is taken as 0.0. It also indicates the value for the total sample when the proportion among the non-respondents is taken as 1.0. The possible effect of non-response on the total-sample estimate is thus available under these extreme conditions. With more information about the universe and the subject matter under study it is possible to place more reasonable limits on the values for the non-respondents and thus obtain more reasonable estimates for the total sample.
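What the nomogram reads off graphically can also be computed directly. A sketch under the extreme assumptions just described, with illustrative input values of my own choosing:

```python
def total_sample_bounds(response_rate, p_respondents):
    """Bounds on the total-sample proportion when the non-respondents are
    assumed, in the extreme, to answer uniformly 0.0 or uniformly 1.0."""
    lower = response_rate * p_respondents                        # non-respondents at 0.0
    upper = response_rate * p_respondents + (1 - response_rate)  # non-respondents at 1.0
    return lower, upper

# e.g., a 60% response rate with 70% answering 'yes' among respondents:
lo, hi = total_sample_bounds(0.60, 0.70)
print(round(lo, 2), round(hi, 2))  # 0.42 0.82
```

The width of the interval, (1 - response_rate), shows directly how the potential bias shrinks as the response rate rises, whatever the observed proportion.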


Hartley, H. O. (1946), "Discussion of paper by F. Yates," Journal of the Royal Statistical Society, 109, 37.

Manual for Complex Litigation (1977), St. Paul, Minn.: West Publishing Company.

McCarthy, J. Thomas (1973), Trademark and Unfair Competition, Rochester, N.Y.: The Lawyers Co-operative Publishing Company.

Payne, Stanley L. (1951), The Art of Asking Questions, Princeton, NJ: Princeton University Press.

Politz, Alfred N. and Simmons, Willard R. (1949), "An Attempt To Get The 'Not-at-Homes' into the Sample Without Callbacks," Journal of the American Statistical Association, 44, 9-31.

Roshwalb, Irving (1970), Nomograms For Marketing Research, New York: Audits & Surveys, Inc.

United States of America, before Federal Trade Commission. In the matter of Litton Industries, Inc., a corporation, and Litton Systems, Inc., a corporation. Docket No. 9123.

United States of America, before Federal Trade Commission. In the matter of Litton Industries, Inc. Docket No. 9123. Order concerning the identification of individual survey respondents with their questionnaires. 6-19-79.

Wiseman, F. and McDonald, P. (1978), The Non-Response Problem in Consumer Telephone Surveys, Cambridge, Mass.: Marketing Science Institute.


