Measuring Miscomprehension: a Comparison of Alternate Formats

ABSTRACT - The performance of two question formats for measuring miscomprehension, i.e. true-false and multiple choice, was examined. Immediately following exposure to one of six stimuli (TV ads), subjects answered a set of questions in either true-false or multiple choice form. Results indicate a high rate of miscomprehension for both question formats. However, there was some indication that the true-false questions were more difficult for subjects to answer correctly. Differences between the two formats were most pronounced for questions dealing with inferences (as opposed to facts).


Fliece R. Gates and Wayne D. Hoyer (1986) ,"Measuring Miscomprehension: a Comparison of Alternate Formats", in NA - Advances in Consumer Research Volume 13, eds. Richard J. Lutz, Provo, UT : Association for Consumer Research, Pages: 143-146.



Fliece R. Gates, The University of Texas at Austin

Wayne D. Hoyer, The University of Texas at Austin




In recent years, the miscomprehension of communications has become an important subject of research in a variety of fields. This is evidenced by the fact that an increasing number of studies in the marketing, communication, broadcasting, and journalism disciplines have attempted to assess factors such as rates of miscomprehension and characteristics of viewing audiences that contribute to the misunderstanding of messages. Yet, in spite of this increasing interest, a number of key issues need to be resolved. One of these issues focuses on the measurement of miscomprehension. The purpose of the present study is to provide additional data related to the measurement of this important construct.

Previous Research on Miscomprehension

As mentioned, a number of different studies across a variety of fields have examined the subject of miscomprehension. In the broadcast journalism area, Katz, Adoni, and Parness (1977) found that adding a picture to audio messages did not affect comprehension either positively or negatively, though the picture did improve memory for newscasts. Edwardson, Grooms, and Proudlove (1981) obtained an overall miscomprehension rate of 37.7% using newscasts as stimuli and a multiple choice quiz for measurement. They also found that interesting video (in contrast to talking heads) resulted in a greater proportion of correct answers, and that comprehension levels were higher for persons under 40 years of age. Robinson (1982), using newscasts and an unaided recall measure, obtained a similar finding in that individuals between the ages of 25 and 54 evidenced greater comprehension. Robinson also found that comprehension of the news increases as education and income levels increase, and that men tend to comprehend more than women. Finally, a 1981 study by Sahin, Davis, and Robinson revealed that viewers who know more to begin with (i.e. have a higher level of general knowledge) comprehend the televised news to a greater extent, and vice versa.

In the marketing area, a large-scale study conducted by Jacoby, Hoyer, and Sheluga (1980) (also reported in Jacoby and Hoyer, 1982) revealed an average miscomprehension rate of 29.6%. The stimuli used in this study included 30-second advertisements, brief program segments, and public affairs announcements. A nation-wide sample was employed and miscomprehension was assessed using a 6-item true-false quiz administered immediately after viewing. Among Jacoby, Hoyer, and Sheluga's findings were that: miscomprehension occurred for all types of communications to a significant degree, both younger and older viewers had a slightly greater tendency to miscomprehend, and miscomprehension appears to be inversely related to amount of formal education to a slight degree. These findings are in general agreement with those found by the researchers in the broadcasting and journalism areas. In a related study, Hoyer and Jacoby (1985) obtained an average miscomprehension rate of 33.7% for public affairs programs.

Using a set of communications from Jacoby and Hoyer (1982), Jacoby, Hoyer, and Zimmer (1982) examined the levels of miscomprehension across three different types of mass media (TV, audio, and print). Results indicated that printed messages were least miscomprehended and audio-only messages led to the greatest degree of miscomprehension. More importantly, the overall mean miscomprehension rate was 22.5%. While this figure is somewhat lower than in the earlier investigation, the sample was composed of students and, thus, was upscale in terms of education.

An identical modality effect was evidenced by Chaiken and Eagly (1976). While the main focus of the study was examining the persuasive impact of complex legal information, miscomprehension was included as a major dependent variable. Even in this entirely different context, a normative miscomprehension rate of 38% was in evidence.

Other studies using entirely different stimuli and alternative measurement techniques have produced similar findings. In a study by Lipstein (1980), 32% of advertising content was miscomprehended. Jacoby, Nelson, and Hoyer (1982) obtained miscomprehension rates as high as 50% in the case of corrective advertising claims. Finally, Schmittlein and Morrison (1983) reanalyzed the estimates obtained from the initial study (Jacoby and Hoyer, 1982) in order to account for the effects of guessing and yea-saying with true-false items. Their major conclusion was that a more accurate estimate of the miscomprehension rate would be 46% (as opposed to 29.6%).

An important implication of the studies previously reviewed is that comprehension of televised messages should not be assumed or taken for granted by those in broadcasting, advertising, and communications research, though this has commonly occurred in practice (Sahin, Davis, and Robinson 1981). Several factors contribute to the continued importance of comprehension as a research topic. For one thing, research concerned with the effects of advertising on the formation of consumer attitudes and purchase intentions has generally assumed that these processes occur after the message has been comprehended (Jacoby, Hoyer, and Sheluga 1980). In other words, a number of advertising models are predicated on the assumption that comprehension has occurred, which, as has been demonstrated, may be a questionable assumption. In addition, public policy topics such as whether or not an ad is deceptive or misleading and/or whether corrective advertising should be required of a sponsor center on issues of comprehension (Jacoby and Hoyer 1982).

Purpose of the Present Study

One topic which has generated some controversy is that of measuring miscomprehension among viewers. It is apparent from the previous review of recent studies that researchers have used aided and unaided recall measures, true-false quizzes, and multiple choice quizzes to assess miscomprehension, and that various rates have been obtained. Due to its important public policy implications, the Jacoby, Hoyer, and Sheluga (1980) study sparked specific criticisms from Ford and Yalch (1982) and Mizerski (1982) on the issue of measurement. Ford and Yalch (1982) suggested, among other things, that "a multiple choice quiz may have been a better alternative than their true-false quiz because it can be designed to discourage guessing and would assess alternative interpretations of the message that might be considered 'acceptable'" (p. 30). Mizerski (1982), in supporting his claim that the Jacoby and Hoyer (1982) findings were "measurement bound", referred to results of an FTC commercial copy test in which measures of aided recall, unaided recall, and recognition (multiple choice) were taken. Miscomprehension rates of literal ad claims in that study alone ranged from 2% to 40%, depending on the measure used.

Again, in light of the importance these suggestions have for public policy issues, the purpose of the present study is to explore these criticisms and to investigate the performance of alternate types of recognition measures, specifically true-false item format versus multiple choice format, in assessing miscomprehension. We have chosen to focus on recognition measures for several reasons. First, Ortony (1978) and Woodall, Davis, and Sahin (1983) have pointed out that comprehension of a stimulus and memory for the stimulus may well be separate and distinct phenomena which need to be recognized and treated as such in the study of human information processing. While recognition measures probably do confound memory with comprehension to some degree, they are clearly not as dependent on storage of the stimulus in memory as are recall measures. Another reason for studying recognition measures specifically is that recall measures are quite difficult to code reliably, and need, at the very least, to be supplemented by more objective quiz items. Finally, as noted in a rejoinder by Jacoby and Hoyer (1982) to criticisms of their original study, miscomprehension rates ranging from 2% to 38% have been obtained in various studies depending on whether true-false or multiple choice questions were used as the recognition measure.


Competing hypotheses can be generated regarding the expected performance of true-false versus multiple choice questions. One perspective is that expressed by Ford and Yalch (1982), i.e., that subjects have a better chance of guessing correctly on true-false questions, which should result in lower rates of miscomprehension for true-false tests. The chance of guessing correctly on a true-false item is obviously 50%, whereas the chance percentage is reduced to 25% when a four-item multiple choice format is used. An alternative hypothesis is suggested by researchers in the educational measurement area. Frisbie (1973) noted that a multiple choice item "limits the universe of comparisons that the individual must make" (p. 303), meaning that the problem of searching for counter-examples that would falsify a statement is narrowed down for the subject when a question is asked in multiple choice form. A true-false question, in contrast, does not provide the subject with a neat set of four possibilities, with the result that a more extensive memory search is necessary in order for the subject to generate counter-examples. According to this line of reasoning, true-false items would be associated with a higher miscomprehension rate. The following study was designed to explore these hypotheses.
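The guessing arithmetic underlying the Ford and Yalch position can be sketched numerically. The function below inverts a deliberately simple knowledge-or-guess model (a subject either knows the answer or guesses at random); it is an illustration only, not the more elaborate model of Schmittlein and Morrison (1983), which also accounts for yea-saying:

```python
def corrected_miscomprehension(observed_error_rate, guess_correct_prob):
    """Invert a simple knowledge-or-guess model.

    If a fraction m of subjects truly miscomprehend an item and
    guess at random, only m * (1 - g) of them answer incorrectly,
    because the rest guess right by chance (g = 0.5 for true-false,
    g = 0.25 for a four-option multiple choice item). Solving
    observed_error = m * (1 - g) for m gives the estimate below.
    """
    return observed_error_rate / (1 - guess_correct_prob)

# The same observed error rate implies a larger underlying
# miscomprehension rate under the true-false format (g = 0.5)
# than under the four-option format (g = 0.25).
print(corrected_miscomprehension(0.296, 0.50))
print(corrected_miscomprehension(0.296, 0.25))
```

Under this model, equal observed error rates across formats would actually imply more underlying miscomprehension on true-false items, which is why the choice of format matters for interpreting absolute rates.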



Method

Subjects
One hundred sixty-eight college undergraduates participated in the study. This represents a convenience sample of sophomores and juniors enrolled in a promotion-strategy class. The authors feel justified in utilizing this type of sample since the goal of the research is to explore the internal validity of the measures employed. There will be no attempt to generalize our findings concerning miscomprehension rates to the television viewing population as a whole.


Stimuli
Six 30-second advertisements were chosen from the original Jacoby, Hoyer, and Sheluga (1980) study for use as stimuli in this study. Ads were initially chosen by Jacoby, et al. (1980) to represent the most heavily advertised product categories. The sponsors agreed to participate in the original study on the condition that they not be identified in the results, so only the products advertised in the ads chosen for this particular study can be identified. They were a laundry detergent, a brand of beer, small appliances, a skin care product, a breakfast cereal, and a cough/cold remedy. These ads were chosen to represent the spectrum of miscomprehension scores obtained by Jacoby, Hoyer, and Sheluga (1980) (i.e. from high to low).

Dependent Variables

True-False Quizzes. For purposes of comparison, the exact 6-item true-false quizzes used by Jacoby, Hoyer, and Sheluga (1980) were also used in this study. Half of the questions were based on factual material in the ad, while the other half were based on inferences which might reasonably be drawn from the ad. In each quiz, two items were accurate and four items were inaccurate. In sum, each quiz consisted of one true fact, one true inference, two false facts, and two false inferences. In order to construct the original true-false quizzes (Jacoby, Hoyer, and Sheluga 1980), each 30-second advertisement was analyzed for product or product equivalent information in both audio and visual form. This product relevant information provided the basis for quiz construction. It was determined by the authors at that time that six quiz items exhausted the range of suitable information for miscomprehension testing. The original quizzes were subjected to a series of pretests to assure that they properly sampled the information in the ads and that they were understandable to study participants.

Multiple Choice Quizzes. The multiple choice quizzes in the present study also contained six items each, which were written so as to match the true-false items in content. It was presumed that Jacoby, Hoyer, and Sheluga (1980) had adequately dealt with the issue of sampling the relevant information contained in the ads. It has been suggested (Anderson 1972) that paraphrases can be used to assess comprehension. A paraphrase captures the meaning of the original message, but differs with respect to the shape or sound of the specific words. Comprehension is presumably required for a person to answer a paraphrase question correctly (Anderson 1972). Jacoby, Hoyer, and Sheluga (1980) employed paraphrase to construct their original true-false quizzes, and that tradition was followed in writing multiple choice quizzes for the present study.

The task in composing the multiple choice items was to create an item "stem" and alternative answer categories that would correspond to Jacoby, Hoyer, and Sheluga's (1980) true-false items. It has been suggested in the educational measurement literature that the paraphrase be contained in the stem (Anderson 1972) and that the stem contain the essence of the topic being measured (Mehrens and Lehman 1975). Following these guidelines, the topic of the original true-false item was used in its original wording as the multiple choice item stem. Several options are possible for writing distractor answers for multiple choice items, including: (1) judgmental determination of plausible distractors; (2) administration of a completion test and use of the most commonly occurring errors as distractors; and (3) use of the errors (from a completion test) that best discriminate among high and low scorers (Mehrens and Lehman 1975). It has been demonstrated (Frisbie 1971; Owens, Hanna, and Coppedge 1970) that method of constructing multiple choice alternatives has no significant effect on test validity (Mehrens and Lehman 1975). Therefore, for the present study, one multiple choice option corresponded exactly to the comparable true-false item, and the three additional answer possibilities were constructed on a judgmental basis to represent reasonable alternatives which might have been stated or implied in the ad.


Procedure
Subjects were tested as a group. A 30-second ad was presented over television monitors distributed around the classroom. Immediately after viewing a particular ad, the subjects completed a true-false or multiple choice quiz pertaining to the ad. When the subjects had completed the questions for one ad, the next ad was shown. The quiz for each stimulus was contained on one page of the questionnaire, and subjects were requested not to read ahead in their questionnaires. Subjects appeared to cooperate with this request not to turn their pages until instructed to do so.


Results
Overall Miscomprehension Scores

Table 1 presents a comparison of overall miscomprehension for both the true-false and the multiple choice formats with the scores obtained in the original study (Jacoby, Hoyer, and Sheluga 1980). As would be expected, a highly educated student sample exhibited significantly lower levels of miscomprehension than the general population across all six stimuli. It is important to note, however, that there is a strong degree of consistency in terms of the order of miscomprehension scores. In other words, the stimuli exhibiting the lowest, medium, and highest levels of miscomprehension in the original study were in the same order in the present study. This order was reproduced exactly in the true-false format (r=1.00) and very closely in the multiple choice format (r=.94). Thus, while statements regarding absolute levels of miscomprehension cannot be made from present data, conclusions regarding relative levels appear justified.
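The consistency check just described is a rank-order comparison across the six stimuli. A minimal sketch of such a check, using Spearman's formula for untied ranks and purely hypothetical scores (not the actual Table 1 values):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation for samples without ties:
    rho = 1 - 6 * sum(d_i ** 2) / (n * (n ** 2 - 1))."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        out = [0] * len(vals)
        for rank, idx in enumerate(order, start=1):
            out[idx] = rank
        return out
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical miscomprehension percentages for six stimuli in an
# original study and a replication (illustrative values only):
original = [18, 22, 25, 30, 34, 40]
replication = [10, 14, 16, 21, 25, 31]
print(spearman_rho(original, replication))  # same rank order, so rho = 1.0
```

A perfect rank correlation (rho = 1.0), as obtained here for the true-false format, indicates that the ordering of stimuli is preserved even when absolute levels differ between samples.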



The most important findings from Table 1 concern the comparisons of the two measurement formats. It can be seen from the table that, for three of the stimuli, miscomprehension was higher for the true-false format. [Item total correlations were computed in order to examine the pattern of item difficulties for the true-false and multiple choice items. Results of this analysis indicated a fairly consistent pattern of item difficulties across the two formats.] In two cases, differences were non-significant, and in the last instance, miscomprehension was higher for the multiple choice format. Thus, there appears to be slight support for the hypothesis that true-false items require more cognitive elaboration and, thus, are more difficult items to answer. It should be noted, however, that the magnitude of these differences is not large, and substantial miscomprehension occurs with both formats.

Miscomprehension of Facts and Inferences

A major distinction in the Jacoby, Hoyer, and Sheluga (1980) study was the ability to comprehend factual and inferential material. Accordingly, overall miscomprehension levels were broken down into scores representing factual and inferential miscomprehension levels. Table 2 presents a summary of this analysis across the six stimuli.

It can be seen from the table that most of the differences between the two formats occur in the case of inferential items. In two cases, greater miscomprehension was evidenced for true-false items; in one instance, more miscomprehension occurred for the multiple choice format; and, in the remaining three instances, there were no differences.



For factual items, there were no significant differences between items in five of the six cases. In only one instance, the true-false format exhibited higher levels of miscomprehension than did the multiple choice format.

Thus, it may be concluded that differences in miscomprehension scores due to item format occur largely for inferential items. In this case, true-false items appear to be slightly more difficult. However, again, it must be noted that the differences are not large and a significant degree of miscomprehension occurs for both types of items in both formats.


Discussion
Results of the present study appear to support, to some degree, the hypothesis that true-false questions are more difficult for subjects to answer correctly. The reasoning behind this finding, as explained in an earlier section, is that true-false questions require more cognitive effort on the part of subjects in that subjects must generate alternatives and counter-examples on their own, rather than having them provided in the question (as they are when the multiple choice format is used). One practical implication of this general finding is that, in future studies, researchers might find it fruitful to determine at the pretest stage whether systematic misunderstandings can be identified for a stimulus and, if so, use them as multiple choice alternatives. This would permit a more precise determination of miscomprehension due to specific types of information or alternative interpretations of this information. In other words, if miscomprehension is hypothesized to be a function of certain types of beliefs, statements related to these beliefs can be included as response alternatives to more accurately pinpoint the cause of miscomprehension.

The findings of this study contradict the implication by Ford and Yalch (1982) that guessing by subjects on true-false questions resulted in an inflated measure of miscomprehension in the original Jacoby, Hoyer, and Sheluga (1980) study. If guessing on true-false items led to inflated miscomprehension scores, one would have expected miscomprehension to be significantly lower when assessed with multiple choice items. Results did not, however, indicate definitive superiority of one question format over the other.

It is important to note that substantial miscomprehension occurred with both the true-false and the multiple choice formats. From a practical standpoint, the differences between measures are not large, and the results of the present study do not invalidate the use of the true-false format in prior studies, such as Jacoby, Hoyer, and Sheluga (1980). Thus, the key conclusion that substantial amounts of miscomprehension occur across the different types of televised communication does not appear to be invalidated by the type of question format employed.

The most notable differences in miscomprehension rates using true-false versus multiple choice format occurred with inference questions. There appeared to be a slight tendency for subjects to miscomprehend true-false inferential items to a greater degree than multiple choice inferential items. This finding can be viewed as consistent with the generation of counter-examples hypothesis since questions pertaining to inferences would require the generation of a greater number of counter-examples than would factual items. Thus, true-false inferential items would be more difficult than similar items for factual material. This finding needs to be replicated in future studies. However, on a practical level, future attempts to assess miscomprehension should keep this distinction in mind.

It is important to note that the present study possesses some important limitations. The first involves the nature of the sample employed. Obviously, a college sample would be more intelligent and more highly educated than the population at large, and, thus, generalizations regarding the absolute levels of miscomprehension cannot be made. However, it is important to reemphasize that the rank ordering of miscomprehension scores (from lowest to highest) for the true-false format was exactly reproduced in the present study (as compared to Jacoby, Hoyer, and Sheluga 1980). Thus, it may be possible to make at least relative statements regarding the internal consistency of the data. Nevertheless, future research is needed to replicate findings such as these on a more representative sample.

A second limitation is that the ads were tested in one order at one point in time, and order effects may have threatened the internal validity of the findings. This problem appears not to be great, however, since, as mentioned above, there was a high correspondence between the ordering of scores in the original study and the present study.

A third limitation is that the manner of composing multiple choice questions could have influenced results. It may be preferable, when using multiple choice questions, to determine alternative answer categories through use of a completion test at the pre-test stage, as discussed in the methods section.

In conclusion, the results of this study suggest that work remains to be done on the issue of how best to measure miscomprehension. Given the importance of this construct for communication and advertising effectiveness, future studies which systematically compare other question formats would be particularly helpful. One obvious comparison would be between recognition measures, such as those employed in the present study, and recall measures, which have been collected in a number of the studies referred to previously. Exploring this type of measurement issue is one avenue that might be pursued in trying to specify the role(s) of memory in comprehension. In addition, future studies might consider possible interactions between ad characteristics and question format. The effects of question format in the present study differed depending on whether the question concerned a fact or an inference. Ad characteristics, e.g. cognitive vs. affective appeal, or one-sided vs. two-sided messages, might also reasonably be expected to affect question format performance.


References
Anderson, Richard C. (1972), "How to Construct Achievement Tests to Assess Comprehension," Review of Educational Research, 42(2), 145-170.

Chaiken, Shelly, and Alice Eagly (1976), "Communication Modality as a Determinant of Message Persuasiveness and Message Comprehensibility," Journal of Personality and Social Psychology, 34 (October), 605-614.

Edwardson, Mickie, Donald Grooms, and Susanne Proudlove (1981), "Television News Information Gain from Interesting Video vs. Talking Heads," Journal of Broadcasting, 25:1, 15-24.

Ford, Gary T. and Richard Yalch (1982), "Viewer Miscomprehension of Televised Communication - A Comment," Journal of Marketing, 46, 27-31.

Frisbie, David A. (1971), "Comparative Reliabilities and Validities of True-False and Multiple Choice Tests," Unpublished Ph.D. dissertation, Michigan State University.

Frisbie, David A. (1973), "Multiple Choice Versus True-False: A Comparison of Reliabilities and Concurrent Validities," Journal of Educational Measurement, 10:4, 297-304.

Hoyer, Wayne D. and Jacob Jacoby (1985), "The Public's Miscomprehension of Public Affairs Programming," Journal of Broadcasting and Electronic Media, in press.

Jacoby, Jacob, Wayne D. Hoyer, and David A. Sheluga (1980), The Miscomprehension of Televised Communication, New York: American Association of Advertising Agencies.

Jacoby, Jacob and Wayne D. Hoyer (1982), "Viewer Miscomprehension of Televised Communication: Selected Findings," Journal of Marketing, 46, 11-26.

Jacoby, Jacob and Wayne D. Hoyer (1982), "On Miscomprehending Televised Communication: A Rejoinder," Journal of Marketing, 46, 35-43.

Jacoby, Jacob, Wayne D. Hoyer, and Mary Zimmer (1982), "To Read, View, or Listen? A Cross-Media Comparison of Comprehension," in J. H. Leigh and C. R. Martin (eds.), Current Issues in Advertising.

Jacoby, Jacob, Margaret C. Nelson, and Wayne D. Hoyer (1982), "Corrective Advertising and Affirmative Disclosure Statements: Their Potential for Confusing and Misleading the Consumer," Journal of Marketing, 46:1, 61-72.

Katz, Elihu, Hanna Adoni, and Pnina Parness (1977), "Remembering the News: What the Picture Adds to Recall," Journalism Quarterly, Summer, 231-239.

Lipstein, Benjamin (1980), "Theories of Advertising and Measurement Systems," Attitude Research Enters the 80's, Chicago: American Marketing Association.

Mehrens, William A. and Irvin J. Lehman (1975), Measurement and Evaluation in Education and Psychology, Second Edition, New York: Holt, Rinehart and Winston.

Mizerski, Richard W. (1982), "Viewer Miscomprehension Findings Are Measurement Bound," Journal of Marketing, 46, 32-36.

Ortony, Andrew (1978), "Remembering, Understanding, and Representation," Cognitive Science, 2, 53-69.

Owens, R. E., G. S. Hanna, and F. L. Coppedge (1970), "Comparison of Multiple Choice Tests Using Different Types of Distractor Selection Techniques," Journal of Educational Measurement, 7, 87-90.

Robinson, J. P. (1982), "Comprehension of a Single Evening's Newscasts," Final Reports To The News Research Group, London: British Broadcasting Corporation.

Sahin, Haluk, Dennis Davis, and John P. Robinson (1981), "Improving the TV News," Irish Broadcasting Review, 11, 50-55.

Schmittlein, David C. and Donald G. Morrison (1983), "Measuring Miscomprehension for Televised Communications Using True-False Questions," Journal of Consumer Research, 10, 147-156.

Woodall, Gill, Dennis Davis, and Haluk Sahin (1983), "From the Boob Tube to the Black Box," Journal of Broadcasting, 27:1.


