The Role of Response Rates in Evaluating Manuscripts For Publication

Robert Ferber, University of Illinois
[ to cite ]:
Robert Ferber (1981), "The Role of Response Rates in Evaluating Manuscripts For Publication", in NA - Advances in Consumer Research Volume 08, eds. Kent B. Monroe, Ann Arbor, MI: Association for Consumer Research, Pages: 274-275.


The sad news I have to report is that in the manuscripts I have seen that involve surveys, a very large portion, at least half, are either rejected or sent back to the author for revision on account of inadequate attention to response rates and their effects. Let me cite some recent examples, mostly based on composites of past studies we have seen.

1.  A mail survey is conducted of the patrons of a particular service to ascertain people's preferences for different attributes of this service. The response rate after one follow-up mailing is 40%. No further attempts are made to find out about the characteristics of the nonrespondents. However, the data are subjected to an elegant multivariate analysis, the results of which are used to recommend to management how their future services should be structured, and also to suggest to this and other worlds how all such services should be structured. Management was very happy with the recommendations; the JCR reviewers did not go into similar ecstasies.

2.  A survey of the members of a consumer mail panel yields a 70% rate of response. A multivariate model is then developed to differentiate between the users and the nonusers of particular products, projecting to the population based on a finding of nonsignificance between the responders and the nonresponders to the mail questionnaire by various demographic characteristics. In this instance, two things were overlooked. First, virtually none of the demographic characteristics included in these comparisons for nonresponse bias were significant in the later analysis. Second, the mail panel by definition is already self-selected, a good estimate for that panel being that approximately 20% of those originally contacted were in the panel. Hence, the effective response rate was not 70%, but 14%.

3.  A telephone survey is made of a random sample of users of a hot line assistance service provided by a state agency. No information is provided on the rate of response obtained, though the later analysis proceeded to generalize on the characteristics of the users of this service.
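The arithmetic behind the second example's "effective" response rate is worth making explicit: when the sample is already self-selected, the stages of selection multiply. A minimal sketch, using the 70% and 20% figures from that example:

```python
# Effective response rate for a self-selected sample: the overall rate
# is the product of the rate at each stage of selection, not the rate
# at the final stage alone.

def effective_response_rate(panel_recruitment_rate, survey_response_rate):
    """Multiply the selection stages to get the overall response rate."""
    return panel_recruitment_rate * survey_response_rate

# From the example: roughly 20% of those originally contacted joined
# the panel, and 70% of panel members answered the mail questionnaire.
rate = effective_response_rate(0.20, 0.70)
print(f"Effective response rate: {rate:.0%}")  # 14%, not 70%
```

The same multiplication applies to any chain of selection stages (list coverage, contact, cooperation), which is why a high rate at the last stage can be so misleading.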

In giving these examples, I do not mean to imply that a high response rate necessarily means a good survey, or even that special attention should be given to non-response in every survey. For example, if the purpose of a mail survey is to ascertain how many and what type of people are likely to be interested in a new service, the non-respondents do not need special study, since by not replying these people have indicated they are not interested. (In a few instances, nonresponse may not indicate lack of interest, but these are so few as to be inconsequential.) The fact remains, however, that in the large majority of cases, one of the best ways to assure rejection of a manuscript is to have a high rate of nonresponse, pay no attention to it, and then generalize to this and other populations.

Needed Information

For manuscript evaluation, what do we need to know about response rates? The answer is essentially the same as what we need to know in evaluating a particular study. First and foremost, we need to know the magnitudes of the response rates both for the population as a whole and for key strata. If, for example, the purpose of a study is to compare certain differences in behavior between blacks and whites, it is pertinent to give the response rates for each group.

Important also is a clear definition of response rates, since without such a definition it is difficult to evaluate the significance of particular rates. Thus, a response rate could refer to the total sample list as a base, only eligible sample members, or only those who were contacted. Since the difference between these quantities can sometimes be very great, the term must be defined before it can be interpreted.
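To see how far apart these definitions can sit for the same survey, here is a small sketch; the counts are hypothetical, chosen only to illustrate the three bases named above:

```python
# Three common definitions of "response rate" differ only in the base.
# All counts below are hypothetical.

total_sample_list = 1000   # everyone on the original sample list
eligible = 800             # after removing ineligibles (bad addresses, etc.)
contacted = 500            # eligible members actually reached
completed = 400            # completed questionnaires

rate_of_list = completed / total_sample_list   # base: total sample list
rate_of_eligible = completed / eligible        # base: eligible members only
rate_of_contacted = completed / contacted      # base: those contacted only

for name, rate in [("of total list", rate_of_list),
                   ("of eligibles", rate_of_eligible),
                   ("of those contacted", rate_of_contacted)]:
    print(f"Response rate {name}: {rate:.0%}")
```

The same 400 completed questionnaires yield a "response rate" of 40%, 50%, or 80% depending on the base, which is why the definition must be stated before the rate can be interpreted.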

In addition to information on the size of the response rates, it is useful to have some interpretive comments from the author on the reasons why particular response rates were encountered, especially if they are very low. Such insights can be very useful for interpreting the results of the later analysis--and also indicate whether the author is cognizant of the sort of analytical problems that may be engendered by low response.

The other key item about response rates, something that is stressed everywhere but is frequently ignored, is the effect of the response rates on the results. Some people feel this is only likely to be a problem if the response rate is low, by some suitable definition, but this is not a very useful rule and can be very misleading. Even if the response rate in a mail survey is, say, as high as 80%, if the survey deals with something like people's willingness to undertake certain energy conservation programs, the chances are that the nonresponding 20% will have very different (probably much more negative) attitudes on the subject than the responding 80%. To leave out this group of nonresponders and to generalize from the 80% could lead to very biased results.
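The size of that bias follows directly from the fact that the population mean is a weighted average of the respondent and nonrespondent means. A minimal sketch; the 60% and 20% attitude figures are hypothetical, invented to echo the 80% mail-survey example:

```python
# Bias of a respondent-only estimate under nonresponse:
#   bias = (1 - response_rate) * (mean_respondents - mean_nonrespondents)
# since the true population mean is the weighted average of the two groups.

def nonresponse_bias(response_rate, mean_respondents, mean_nonrespondents):
    return (1 - response_rate) * (mean_respondents - mean_nonrespondents)

# Hypothetical figures: 60% of respondents but only 20% of nonrespondents
# would undertake a conservation program; the survey reaches 80% of the sample.
bias = nonresponse_bias(0.80, 0.60, 0.20)
true_mean = 0.80 * 0.60 + 0.20 * 0.20
print(f"Respondent-only estimate: 60%; "
      f"true proportion: {true_mean:.0%}; bias: {bias:+.0%}")
```

Even with an 80% response rate, the respondent-only estimate overstates the true proportion by eight percentage points in this illustration; the bias shrinks only as the two groups' means converge or the response rate approaches 100%.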

In fact, what is meant by a low response rate? Analytically, there does not seem to be any meaningful definition. Indeed, most authors seem to try not to recognize that a response rate is low, and often try to play down the response rate by referring to lower response rates obtained in similar studies, or to special circumstances that militated against getting a better rate of response. If they do mention that the response rate is low, they tend to point out that it is still within the wide range of response rates obtained in this type of study, and is therefore acceptable--this was the exact argument used in a manuscript that was recently processed by us. Of course, by this argument it is hard to imagine what sort of response rate would be unacceptable.

In the final analysis, it is really immaterial whether a response rate is 'good' or 'bad'. The key question is whether the response rate is such that the results of the study are likely to be affected by some sort of nonresponse bias. Clearly, much depends on the subject of the study and the structure of the questionnaire. If willingness to undertake conservation programs is embedded in a questionnaire covering many other topics, the effect of nonresponse on this question is likely to be much lower than if that topic is the sole subject of the questionnaire. In either event, some consideration is needed in virtually all instances of the possible effect of nonresponse on the results obtained in a survey study. One could generalize and say that such an evaluation is even more essential in the case of mail surveys, since nonresponse rates there can be very high, but the same basic questions have to be posed in almost any type of survey.

What Response Rates Tell Us

The fact of the matter is that the treatment of response rates can tell a journal editor a great deal about the quality of a particular survey and the soundness of the data on which the analysis rests. If the author does not even report any response rates, he or she has a probability of just about zero of getting that manuscript accepted by JCR. This is true even in the case of a purposive sample, where some authors argue that since the idea was to obtain only a certain number of interviews with people having particular characteristics, and since they did so, response rates are of no consequence. (This is only true if the author could show that the nonrespondents are no different from the respondents on all relevant characteristics for that study; if that were possible, the study would be unnecessary.)

More important to editors is not only the reporting of response rates, but how the topic is handled. If the author gives short shrift to response rates in a situation where nonresponse bias is highly likely (such as a one-topic mail survey with no follow-ups), there is a strong indication that the author is not at all sophisticated in the analysis of survey data, and the manuscript will be returned. The one exception would be where the objective of the study is to illustrate some new analytical technique, and the survey data are used only for illustrative purposes. Even then, however, any sort of substantive inferences would have to be hedged on account of the possible bias due to nonresponse, and are actually best not made at all.

Personally, I find it rather curious that there seems to be little correlation between sophistication in the use of analytical techniques, such as multivariate analysis, and understanding of the problems that stem from nonresponse and other types of biases in survey data. With the increasing use of causal modeling techniques in consumer research, one would expect an increasing positive correlation in these two skills. So far, however, the correlation seems to be pretty close to zero--those who are sophisticated in survey techniques do not seem equally sophisticated in data analysis, and vice versa.

Recommendations to Authors

On the basis of these comments, let me conclude this presentation with a few suggestions on how authors of articles making use of surveys as the primary data source should deal with the subject of response rates:

1.  Cite the overall response rate as well as the response rates for the subgroups.

2.  Explain why the response rates are what they are, including comparisons with the response rates in similar studies.

3.  Discuss the possibility of nonresponse bias and indicate, hopefully with some justification, why you feel such bias is or is not present.

4.  If nonresponse bias is expected, evaluate the nature of the bias as best you can. Moreover, keep this bias in mind when making generalizations about population attitudes or behavior, as well as in evaluating the significance of your findings (even though the significance will be diminished somewhat as a result).

In making these suggestions, I am not proposing that you lengthen the manuscript appreciably. As many of you know, space is at a premium in both JCR and most other journals, and authors invariably write too much anyway. The fact remains, however, that this information can be presented very briefly, often in footnotes, and the result will be a much more useful and informative manuscript. The result will also be a lesser probability that the manuscript will be rejected or returned for major revision (though even with this change I am afraid that the chances of any manuscript being returned at least for revision will remain quite high).