Nonresponse in Consumer Surveys

ABSTRACT - Response rates in consumer surveys declined during the 1970s due to a number of controllable and uncontrollable factors. This decline has been a source of concern to many professionals in government and private industry. As a consequence, consumer researchers now find their results and inferences challenged when they are based upon survey data that are susceptible to substantial nonresponse bias. This paper reviews the findings of recent nonresponse research investigations and discusses their importance and implications for consumer researchers.


Fred Wiseman (1981), "Nonresponse in Consumer Surveys", in NA - Advances in Consumer Research Volume 08, eds. Kent B. Monroe, Ann Arbor, MI: Association for Consumer Research, Pages: 267-269.

Advances in Consumer Research Volume 8, 1981      Pages 267-269


Fred Wiseman, Northeastern University




During the 1970s there was growing concern among many social scientists and statisticians within the Federal government, private industry, and academia about the negative impact that certain uncontrollable environmental factors were having on the survey research process.

Changing lifestyles, increased female participation in the labor force, flexible working hours, and more leisure time made it increasingly difficult for an interviewer to contact a potential respondent. Further, if and when contact was made, numerous other factors such as privacy-related concerns, fear and suspicion of strangers, questions regarding survey legitimacy, and excessive interviewing led a substantial percentage of contacted respondents to refuse participation.

In 1973, the American Statistical Association brought together a diverse group of social scientists and survey methodologists in order to discuss the problems of conducting present-day surveys of human populations. From the discussions that took place, a number of conclusions were reached. Among these was that "Survey research was in some difficulty, and, to an undetermined scale, that difficulty was increasing." For example, some participants noted that completion rates on general population surveys averaged 60-65% compared to 80-85% in the decade of the 1960's (The American Statistician, 1974).

What are the consequences of increased nonresponse on data quality? This is a question that cannot easily be answered in a survey because nonresponse bias depends not only on the magnitude of nonresponse, but also on the degree to which nonrespondents differ from respondents on the variables of interest. However, in discussing the effects of nonresponse, Platek (1977) points out that since the sampling variance of an estimate is inversely proportional to the response rate, estimates based on a simple random sample with an 80% response rate will have a sampling variance that is 12.5% higher than that of corresponding estimates with a 90% response rate (since 0.90/0.80 = 1.125).
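Platek's arithmetic can be checked directly. A minimal sketch, assuming a simple random sample where variance is proportional to one over the number of completed interviews (the sample size and proportionality treatment here are illustrative, not from the paper):

```python
# Illustrative check of the variance arithmetic discussed above.
# For a simple random sample, the sampling variance of an estimate is
# roughly proportional to 1 / (completed interviews) = 1 / (n * response_rate).

def relative_variance(n, response_rate):
    """Variance of an estimate, up to the constant population-variance factor."""
    completed = n * response_rate
    return 1.0 / completed

n = 1000  # planned sample size (arbitrary, for illustration only)
var_80 = relative_variance(n, 0.80)
var_90 = relative_variance(n, 0.90)

increase = (var_80 / var_90 - 1.0) * 100
print(f"Variance with 80% response is {increase:.1f}% higher than with 90%")
# prints: Variance with 80% response is 12.5% higher than with 90%
```

The planned sample size cancels out of the ratio, which is why the 12.5% figure holds regardless of n.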

This paper will briefly review some of the key findings of recent studies and investigations that have focused on nonresponse and data quality related issues. In addition, it will discuss efforts that are now underway that will hopefully result in an improved understanding of the nature and extent of nonresponse in consumer surveys. Finally, implications for the academic research community will be explored.


As an outgrowth of the 1973 ASA conference, the National Science Foundation agreed to support a pilot study in which 36 surveys were examined in detail in order to determine the extent to which stated survey objectives were achieved (Bailar and Lanphier 1978). Of the 36 surveys, 26 were conducted by or for the Federal government and the remaining ten by state governments, academic institutions or professional associations. The major finding of this investigation was that:

Twenty-two of the 36 surveys did not meet their objectives because of technical flaws such as a low response rate, the inclusion of inferences in the final report that could not have been substantiated by the survey results, no validation of survey interviews and no data "cleaning."

With respect to response rates, Bailar and Lanphier stated that they were difficult to collect and compare as they were found to have different names and different definitions in different places and circumstances. Further, when response rates were reported, they were often inflated.

A second major investigation was recently completed for the Comptroller General of the United States upon request of three members of the U.S. House of Representatives (Comptroller General 1978). The study examined the potential for incorrect or unreliable information being generated by public opinion polls and attitude surveys sponsored or conducted by Federal agencies. The Congressmen were concerned that poor quality data from surveys were being used as the basis for making Federal management decisions that affected national programs and policies.

In carrying out the study, the General Accounting Office (GAO) identified a number of recently conducted surveys at various agencies and decided to review five in detail. Their overall conclusion was:

Although there were no indications that survey results were intentionally misused, use of the results of all five should have been limited because each contained serious technical flaws.

Numerous examples of these technical flaws were cited by the GAO in their report including those in a survey designed to provide an information base for use in a model that compared potential national carpool incentive policies. The GAO concluded that findings could not be used for national projections because of the type of sampling used and the extremely low response rate.

As a result of that study, it was recommended that Federal agencies be given more explicit guidance as to what constitutes a good attitude survey or opinion poll and that the use of surveys which contain extensive technical flaws should be discouraged.

Given the concern within the government about the widespread use of faulty surveys, it was not surprising to see that the Federal Trade Commission, in a precedent-setting case, recently ruled that Litton Industries advertisements for its microwave oven violated federal law because the survey research data used to justify the ad claims were defective (Marketing News 1980). Irving Roshwalb, of Audits & Surveys, Inc. was asked to testify in that case and he'll later describe in more detail the specific nature of the issues raised.


In general, it appears that both users and producers of survey research in private industry have not paid as much attention to nonresponse-related issues as have their counterparts within the Federal government. Among the reasons for this are (1) time pressures, (2) budget limitations, (3) client specifications, and (4) less stringent precision requirements. However, there is a growing awareness of the problem within the industry and a realization that improvements must be made in research methodology in order to ensure that accurate and reliable data are obtained.

During the past four years, Professor Philip McDonald and I have conducted a major nonresponse research investigation. This investigation, carried out under the sponsorship of the Marketing Science Institute, has had the cooperation of their member firms as well as those in CASRO, the Council of American Survey Research Organizations. In 1978, 32 MSI and CASRO firms supplied data from 182 consumer telephone surveys that were conducted over a six-week period. Collectively, for these surveys, there were over one million unique sample members selected to be interviewed. We found (Wiseman and McDonald 1978):

A relatively large percentage of potential respondents/households were never contacted. The median non-contact rate was 40%.

Of those individuals contacted, slightly more than one in four refused participation. The median refusal rate was 28%.

Overall, response rates were low, with a median rate of 30% for surveys in the data base.
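The relationship among the three median rates quoted above can be sketched with hypothetical counts; the counts below are invented for illustration, and the definitions are the simple ones implied by the text, not the standardized formulas whose development the paper later calls for:

```python
# A minimal sketch of how the three rates above relate, using assumed
# (hypothetical) counts and these simple definitions implied by the text:
#   non-contact rate = never contacted / selected
#   refusal rate     = refusals / contacted
#   response rate    = completed interviews / selected

selected = 1000   # sample members selected for interviewing (hypothetical)
contacted = 600   # implies a non-contact rate of 40%
refusals = 168    # implies a refusal rate of 28% of those contacted
completed = 300   # implies an overall response rate of 30%

non_contact_rate = (selected - contacted) / selected
refusal_rate = refusals / contacted
response_rate = completed / selected

print(f"non-contact {non_contact_rate:.0%}, refusal {refusal_rate:.0%}, "
      f"response {response_rate:.0%}")
# prints: non-contact 40%, refusal 28%, response 30%
```

Note that contacted non-refusers (432 here) exceed completed interviews (300); the gap would be made up of other non-interviews such as terminations or ineligibles, which is one reason uniform definitions matter.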

The analysis of the data received from these commercial companies suggested that low response rates were due more to controllable than to uncontrollable factors. For example, in almost 40% of the surveys only one attempt was made to contact a potential respondent, and rarely did a research firm make a concerted effort to convert reluctant respondents.


The seriousness of low response rates depends upon the extent to which respondents differ from nonrespondents on the variables of interest within a survey. It may be that those individuals who are difficult to reach or who are unwilling to be interviewed share the same general attitudes, opinions, preferences, etc., as individuals who are readily accessible and willing to be interviewed. If that is the case, then the potential consequences of a low response rate are substantially reduced.

If, however, significant differences do exist between respondents and nonrespondents then survey results, no matter how large the sample size, are likely to be of little value to decision or policy-makers.

Little is known about the characteristics of nonrespondents. However, a number of recent studies have found differences on several dimensions among those who readily respond in a survey, those who initially refuse, and those hard-to-reach people who respond only after a large number of callback attempts. For example, Table 1 presents the results of four such studies in which various categories of nonrespondents were found to differ from their readily accessible and cooperative counterparts.



As can be seen, the results of the four studies with diverse populations show a measure of similarity with respect to the characteristics of individuals who are difficult to reach and those who are reluctant to grant an interview. However, it appears that on a number of dimensions, the characteristics of refusers are opposite those of hard-to-reach individuals. Thus, a strategy that involves a large number of callbacks without any extra effort to convert initial refusers is unlikely to improve the representativeness of the sample. In addition, to fully assess the nature of nonresponse bias, we also need information on how the various response and nonresponse segments differ on the key variables of interest in a survey, not just on their socioeconomic and demographic compositions.


As a result of the research that has taken place during the last few years, two Nonresponse Task Forces have been recently established involving members of MSI and CASRO.

The first Task Force, chaired by Lester Frankel (former President of the American Statistical Association), has been given the charge "to develop a uniform formula for measuring completion rates in the survey research industry for all modes of data collection -- mail, telephone and personal interview." This Task Force includes not only MSI and CASRO representatives, but also representatives from the Bureau of the Census and the Office of Federal Statistical Policy and Standards.

It is anticipated that the Task Force will make its recommendations within the next year and, hopefully, after discussion within the survey research community, a set of standardized definitions and reporting procedures will be adopted. Such standards are long overdue.

A second Task Force is working on the development of a research design for a large scale study that will examine, in detail, the characteristics of nonrespondents and the impact that their exclusion has on survey results. At the present time we envision a national sample of 4,000 heads-of-household with up to 25 callback attempts made to locate as many hard-to-reach and "nonresponders" as possible.


The current thrust and interest in nonresponse related issues has important implications for academic researchers in three different areas: (1) teaching, (2) research, and (3) consulting.

Those in academia have a responsibility to stress, more than has been done in the past, the importance of data collection techniques and the need for quality data. We now have sophisticated sampling and experimental designs, along with methods of analysis. The element that links these two areas together is data collection, and too little attention has been given to this subject. Researchers should be concerned about low response rates and the potential impact that nonresponse and other forms of nonsampling error can have on data quality. In a 1977 panel discussion sponsored by Advertising Age, leading research professionals indicated that one of the major problems facing the industry was the lack of competent research professionals (Advertising Age 1977).

In consulting activities and in testimony, as Keith Hunt emphasized in his presidential address last year to ACR members in San Francisco, it is incumbent on the research professional to stress the need for and importance of sound methodological procedures, especially if the results of consumer research will be used externally as well as internally. The Litton case should serve as a warning that the FTC and other government agencies will be looking very closely at claims and inferences that are made on the basis of attitude surveys and opinion polls.

Finally, in conducting our own research, attention again must be given to data quality and improved response rates, even if this means working with smaller samples. Lipstein (1975) made an eloquent plea for this in his excellent article entitled "In Defense of Small Samples." It appears now that government-sponsored studies will be more closely scrutinized than they have been in the past. This is also likely to be true for articles submitted to scholarly journals. For example, in the current Journal of Marketing and Journal of Marketing Research style sheet, there is the following statement: "Do not ignore the nonrespondents. They might have different characteristics than the respondents." Bob Mittelstaedt and then Bob Fetter can tell you a little more about the quality of our own data reporting and implications for journal acceptance.


Bailar, Barbara A. and Lanphier, C. Michael (1978), Development of Survey Methods to Assess Survey Practices, Washington: American Statistical Association.

Comptroller General (1978), "Better Guidance and Controls Are Needed to Improve Federal Surveys of Attitudes and Opinions," Washington: General Accounting Office.

Dunkelberg, William C. and Day, George S. (1973), "Nonresponse Bias and Callbacks in Sample Surveys," Journal of Marketing Research, 10, 160-168.

"FTC: Litton Used Defective Research for Microwave Ads," (1980), Marketing News, (July) 23, 12.

"Is Business Hurting Research? Users Differ," (1977), Advertising Age, 1.

O'Neil, Michael J. (1979), "Estimating the Nonresponse Bias Due to Refusals in Telephone Surveys," Public Opinion Quarterly, #3, 218-232.

Platek, Richard (1977), "Some Factors Affecting Non-Response," presented at the International Statistical Institute Meetings, New Delhi.

"Report on the ASA Conference on Surveys of Human Populations" (1974), The American Statistician.

The Data Group (1977), "A Study of Nonrespondents," Unpublished paper, Philadelphia: The Data Group, Inc.

van Westerhoven, Emile (1978), "Covering Nonresponse. Does It Pay?" presented at the Congress of the European Society of Opinion and Market Research.

Wiseman, Frederick and McDonald, Philip (1978), The Nonresponse Problem in Consumer Telephone Surveys, Cambridge: Marketing Science Institute.

Wiseman, Frederick (1980), Toward the Development of Industry Standards for Response and Nonresponse Rates, Cambridge: Marketing Science Institute.


