Uninformed Response Bias in Measuring Consumers’ Brand Attitudes

ABSTRACT - This research examines the uninformed response bias associated with measuring consumers’ brand attitudes on forced opinion scales. Results suggest that the amount of uninformed response bias is negatively correlated with consumers’ level of brand familiarity. Further, the validity of forced opinion scales is reduced with decreased brand familiarity. Less brand familiarity and greater uninformed response bias can cause brand attitudes to become more moderate and reduce the standard deviations of response distributions.



Citation:

Timothy R. Graeff (1999), "Uninformed Response Bias in Measuring Consumers’ Brand Attitudes," in NA - Advances in Consumer Research Volume 26, eds. Eric J. Arnould and Linda M. Scott, Provo, UT: Association for Consumer Research, Pages: 632-639.

Timothy R. Graeff, Middle Tennessee State University

The uninformed response bias has been demonstrated and documented by numerous researchers since the early years of public opinion research (Bishop, Oldendick, Tuchfarber, & Bennett, 1980; Bishop, Oldendick, & Tuchfarber, 1983; Bishop, Tuchfarber, & Oldendick, 1986; Hawkins & Coney, 1981; Schneider, 1985; Schuman & Presser, 1980; 1981). For example, Hartley (1946) found that a majority of college students expressed an opinion about fictitious nationalities (e.g., "Wallonians"). Collett and O’Shea (1976) found that people were willing to give directions to places that did not exist. Kolson and Green (1970) found that grade-school children had opinions about a fictitious political figure (Thomas Walker). And, Gill (1947) reported that more than two-thirds of people surveyed expressed an opinion about a fictitious Metallic Metals Act. In each case, those expressing opinions were by definition uninformed because the issues were fictitious, thus the name uninformed response bias.

This phenomenon is very important and relevant to market researchers as well as public policy makers. It demonstrates that decisions can often be made based on meaningless answers to survey questions. If people are willing to express an opinion about a completely fictitious issue, they are likely to express opinions on actual issues about which they are uninformed or unfamiliar. And, efforts to increase response rates (e.g., forms of pressuring subjects to respond in an effort to increase item response rates, reduce sample bias and non-response errors) might actually increase the chances for uninformed response bias. Uninformed respondents can be pressured (forced) to provide meaningless answers to survey questions (Hawkins & Coney, 1981; Kanuk & Berenson, 1975; Linsky, 1975; Yu & Cooper, 1983).

For the most part, research on the uninformed response bias has been limited to peoples’ opinions regarding fictitious public policy issues or federal legislation, such as the National Bureau of Consumer Complaint (Hawkins & Coney, 1981), and The Monetary Control Bill of 1983 (Bishop, Tuchfarber & Oldendick, 1986). Any response or opinion about these issues (either positive or negative) was by definition considered to be uninformed, because the issues were totally fictitious. The purpose of this research is to extend past research by examining the uninformed response bias associated with measuring consumers’ brand attitudes in market research. Specifically, this research examines the uninformed response bias as it relates to a common problem in measuring consumers’ attitudes toward brands: forcing consumers to indicate an attitude without offering a "Don’t Know" (DK) option. The issue of whether or not DK options should be given on consumer surveys has been widely debated. The debate centers on the need for accurate and valid data versus the desire to increase the item response rates for a survey.

Forced Opinion vs. DK Options

Those who oppose providing DK options argue that they might result in higher "unknown" rates by providing an easy way out for respondents. Not providing DK options (a forced opinion scale) forces respondents to think about an issue and express an opinion, however slight that opinion may be. Further, by eliminating DK options, researchers are not forced to draw conclusions about majorities based on the responses of only a few who happened to express their opinion (Kalton & Schuman, 1982).

On the other hand, those in favor of providing DK options argue that their use results in more accurate and valid response distributions. Often, DK is a legitimate response, and can yield very important insights into not only brand attitude, but also brand awareness. It is preferable to have respondents state that they do not have an opinion on an issue versus making them provide a meaningless guess or not answer the question altogether (Poe, Seeman, McLaughlin, & Dietz, 1988). Forcing someone to indicate an opinion when they do not have one makes it difficult for respondents to answer and might actually turn them off to the entire survey (Churchill, 1996). Therefore, not providing DK options can result in uninformed response bias if consumers are forced to express their opinions of products about which they are uninformed or unfamiliar.

Uninformed Response Bias In Measuring Brand Attitudes

By using fictitious issues, past research has been able to very easily operationalize uninformed response bias as any response (or opinion) about the issue. However, the situation is very different when measuring consumers’ brand attitudes. In consumer research, the products about which we are asking consumers to respond are almost always actual (not fictitious) products. Because the products are not fictitious, any given response may or may not be uninformed. This problem is most evident when using a scale with an odd number of categories (e.g., 1-7). The mid-point can be used to indicate unawareness as well as neutrality or indifference (Spagna, 1984). And, respondents who have no opinion or no knowledge typically mark the middle, or neutral category on the scale. Thus, the mid-point (a "4" on a 7-point scale) has two possible meanings. One possibility is that the respondent is knowledgeable about the product and their attitude is moderate (indifferent, neither positive nor negative). The second possibility is that the respondent is uninformed and is using the mid-point as a way of not having to indicate either a positive or negative attitude. In this second case, a mid-point response is actually an uninformed response. And, these uninformed responses can reduce the validity of measures of central tendency and dispersion of response distributions. Because of the greater number of midpoint responses, uninformed response bias will cause the averages of brand attitudes to become more moderate, and standard deviations to become smaller.
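The mechanics of this distortion can be sketched numerically. The following is a hypothetical illustration (all response values are invented, not taken from this study) of how mixing uninformed mid-point responses into the distribution for a liked brand pulls the mean toward the scale mid-point and shrinks the standard deviation:

```python
# Hypothetical illustration: uninformed respondents defaulting to the
# mid-point ("4") on a 1-7 scale moderate the mean and shrink the
# standard deviation of a liked brand's attitude distribution.
import statistics

informed = [7, 5, 6, 3, 7, 6, 2, 7, 6, 5]   # informed respondents' ratings
uninformed = [4, 4, 4, 4]                   # uninformed respondents marking "4"

mean_informed = statistics.mean(informed)
mean_mixed = statistics.mean(informed + uninformed)
sd_informed = statistics.stdev(informed)
sd_mixed = statistics.stdev(informed + uninformed)

print(f"informed only:   mean={mean_informed:.2f}, sd={sd_informed:.2f}")
print(f"with uninformed: mean={mean_mixed:.2f}, sd={sd_mixed:.2f}")
```

Here the mean moves from 5.4 toward the mid-point (5.0) and the standard deviation shrinks, even though no informed respondent changed their answer.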

What determines the amount of uninformed response bias in measuring consumers’ brand attitudes? Obviously, consumers’ familiarity with a brand name should play a significant role. The more familiar the brand name, the less uninformed response bias associated with the survey results. And because brand names have varying levels of consumer familiarity, the amount of uninformed response bias should also vary by the particular brand name being evaluated. This idea of a range of uninformed response bias runs counter to the efforts of past research, in which researchers have sought to determine the percentage of uninformed response bias as a generalizable empirical result. For example, Hawkins and Coney (1981) found 23.3% uninformed response bias in their sample, and Schneider (1985) found that uninformed response bias was greater for opinion questions (58.7%) compared to factual questions (19.4%). And, this can be done when the issues being studied are all fictitious and therefore all have the same degree of familiarity to consumers. However, in the world of consumer market research, we seek to measure attitudes of consumers about brands that can vary in their degree of consumer familiarity.

TABLE 1

FORCED OPINION RESPONSES OF UNINFORMED CONSUMERS

The purpose of this research is to (1) examine the amount of uninformed response bias associated with traditional semantic differential scales designed to measure consumers’ brand attitudes, (2) examine the relationship between the amount of uninformed response bias and consumers’ brand familiarity, and (3) examine the effects of uninformed responses on the validity of forced opinion scales that do not offer a DK option.

Methodology

Two hundred twenty-nine consumers completed a survey measuring their attitudes toward 26 different "brand names." Consumers indicated their attitudes on two different scales (both anchored by "Dislike" and "Like"). The first scale was a forced opinion scale, and the second scale offered a DK option. The amount of uninformed response bias on the first scale was measured as the percentage of consumers who later chose the DK option, when it was available.

The Scales: Consumers first evaluated all 26 brand names on a 1-7 forced opinion scale. This scale had an obvious mid-point ("4") but no DK option. Following this, consumers turned the page and found the same list of 26 brand names, but this time they were instructed to evaluate them on a 1-7 scale with a DK option. They were told to check the DK box if they could not answer because they did not know anything about the brand or they had no opinion. The purpose of this was to measure the percentage of subjects who circled a "4" on the 1-7 forced opinion scale who later checked the DK option when it was available. This provides a measure of the amount of uninformed response bias in the 1-7 forced opinion scale.
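The measurement logic just described can be sketched as follows (the responses below are invented for illustration): an answer on the forced opinion scale counts as an uninformed response if the same respondent later checked DK when the option was available.

```python
# Sketch of the uninformed-response measurement: pair each respondent's
# forced-opinion answer with their answer on the scale that offered DK.
forced = [4, 6, 4, 3, 7, 4, 5, 4]                 # 1-7 forced-opinion answers
with_dk = [None, 6, None, None, 7, 4, 5, None]    # None = checked the DK box

# Uninformed responses: forced answers from respondents who later chose DK.
uninformed = [f for f, d in zip(forced, with_dk) if d is None]
pct_uninformed = 100 * len(uninformed) / len(forced)
pct_midpoint = 100 * sum(1 for f in uninformed if f == 4) / len(uninformed)

print(f"{pct_uninformed:.1f}% of forced responses were uninformed")
print(f"{pct_midpoint:.1f}% of those uninformed responses used the mid-point")
```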

After evaluating all brands on both scales, consumers indicated how familiar they were with each of the 26 brand names. A composite measure of consumers’ familiarity with each brand was created by averaging consumers’ responses to three questions: "How much do you know about the following?" "How much experience (purchase or use) have you had with the following?" and "How involved are you with (how much interest do you have with) the following?" Each of these familiarity dimensions was measured on a 7-point scale anchored by "Not Much" and "A Great Deal." These three items had high internal consistency. Coefficient alphas for the three-item index ranged from .69 to .91 (average alpha=.87).
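The composite index and its reliability check can be sketched as below. Cronbach's alpha is computed from its standard definition; the six respondents' item scores are invented, not the study's data.

```python
# Composite familiarity index: average of three 7-point items per
# respondent, with Cronbach's alpha as the internal consistency check.
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, same respondents in each list."""
    k = len(items)
    sum_item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]      # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_vars / statistics.variance(totals))

knowledge   = [7, 2, 5, 6, 1, 4]   # "How much do you know about ...?"
experience  = [6, 1, 5, 7, 2, 4]   # "How much experience have you had ...?"
involvement = [7, 2, 6, 6, 1, 5]   # "How involved are you with ...?"

# Composite familiarity: the per-respondent mean of the three items.
familiarity = [statistics.mean(t) for t in zip(knowledge, experience, involvement)]
alpha = cronbach_alpha([knowledge, experience, involvement])
print(f"alpha = {alpha:.2f}")
```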

Results

Uninformed Response Bias and Forced Opinion Scales. Uninformed response bias occurs when a respondent expresses an opinion about a brand when they would have used the DK option if it were available. Researchers have suggested that respondents unfamiliar with an issue tend to use the mid-point of a scale when not given a DK option (Spagna, 1984). Table 1 presents the results from an analysis to determine if uninformed respondents used primarily the mid-point on the forced opinion scale. Overall, there was a significant difference in the number of uninformed respondents choosing the different scale points on the forced opinion scale (F(6,175)=14.97; p<.01). As expected, the majority of uninformed respondents circled the mid-point of the forced opinion scale. The results of a Tukey’s multiple comparison analysis that controls for total experiment error revealed that significantly more uninformed respondents circled the midpoint of the scale than any other category. However, it should be noted that the uninformed response bias is not limited to only the mid-point of a scale. Whereas very few uninformed respondents circled the extreme ends of the scale (1, 2, 6, or 7), 23.2% circled "3", and 11.3% circled "5." There was no significant difference in the number of uninformed respondents choosing either of the two scale points adjacent to the midpoint.

Uninformed Response Bias And Brand Familiarity. Table 2 presents the results from the analysis examining the relationship between the amount of uninformed response bias and consumers’ brand familiarity. The first column of Table 2 presents the mean familiarity scores for the 26 different brands. These familiarity scores are very evenly distributed across the range of possible scores (from 1 to 7). Twelve of the brands have familiarity scores less than the midpoint of the scale ("4"), and 13 brands have familiarity scores greater than the midpoint. This suggests that the set of brands used in this study is adequate for testing the relationship between uninformed response bias and brand familiarity.

The second column presents the total number (and percentage) of uninformed respondents for each brand: those indicating an opinion on the forced opinion scale who would have checked DK if it were available. The third column presents the number (and percentage) of midpoint responses on the 1-7 forced opinion scale that were actually uninformed responses. For example, 76% of all expressed opinions about Hayes modems on the forced opinion scale were uninformed responses, and 87.2% of midpoint responses on the forced opinion scale were uninformed.

TABLE 2

UNINFORMED RESPONSE RATES FOR THE 26 BRANDS

TABLE 3

CORRELATIONS

FIGURE 1

PERCENT OF UNINFORMED RESPONSES BY LEVEL OF PRODUCT FAMILIARITY

The results from a correlation analysis (see Table 3) show that brand familiarity was significantly negatively correlated with both the total percent of uninformed responses as well as the percent of midpoint responses to the forced opinion scale that were actually uninformed responses. The significant negative correlations indicate that greater brand familiarity is associated with less overall uninformed response bias and less uninformed response bias in the midpoint of a forced opinion scale. Brand familiarity explained 80% of the variability in the amount of uninformed response bias and 72% of the variability in the amount of uninformed response bias in the midpoint of the forced opinion scale. These data are graphed in Figure 1 and Figure 2.
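The "variance explained" figures quoted here follow directly from the correlation coefficient: the share of variability one variable explains in a simple correlation is r squared. A minimal sketch, using invented familiarity/uninformed-rate pairs rather than the actual Table 2 data:

```python
# Pearson correlation and variance explained (r**2), computed from the
# definitional formula. The data pairs are invented for illustration.
import statistics

def pearson_r(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

familiarity    = [1.5, 2.0, 3.1, 4.2, 5.0, 6.3, 6.8]  # mean brand familiarity
pct_uninformed = [76, 60, 45, 28, 15, 5, 2]           # % uninformed responses

r = pearson_r(familiarity, pct_uninformed)
variance_explained = r ** 2   # share of variability explained by familiarity
print(f"r = {r:.2f}, r^2 = {variance_explained:.2f}")
```

With these invented numbers the correlation is strongly negative, mirroring the pattern reported above: higher familiarity, lower uninformed response rates.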

Uninformed Response Bias and Measurement Scale Validity. As previously mentioned, researchers have suggested that allowing respondents to indicate their lack of familiarity or knowledge with a brand can increase the validity of response distributions. Often, DK is a legitimate response and forcing respondents to express an opinion they do not have forces them to guess and give meaningless answers (Churchill, 1996; Poe, Seeman, McLaughlin, & Dietz ,1988). Thus, by using the second attitude scale that offered a DK option as a benchmark, the difference in the response distributions between the two scales was used as a measure of the validity of the forced opinion scale that did not offer a DK option.

FIGURE 2

PERCENT OF "MID-POINT" RESPONSES THAT ARE UNINFORMED BY PRODUCT FAMILIARITY

FIGURE 3

ABSOLUTE VALUE OF DIFFERENCE IN MEANS FOR THE FORCED OPINION SCALE AND THE SCALE THAT OFFERED A DK OPTION BY PRODUCT FAMILIARITY

The scale means for the 1-7 forced opinion scale, the 1-7 scale that offered a DK option, and the difference between these two scale means are presented in Table 2, and the absolute value of differences in the two means are graphed in Figure 3. Overall, the results suggest that the validity of a forced opinion scale is reduced with decreased brand familiarity. The significant negative correlation between brand familiarity and the absolute value of the difference in the two scale means indicates that the less familiar respondents were with the brand, the greater was the difference between the average of the forced opinion responses and what the average would have been if uninformed respondents were allowed to report that they had no opinion. For highly familiar brands, this uninformed response bias had a negligible effect on the validity of mean measurements. However, uninformed responses had significant effects on the validity of mean measurements for less familiar brands (e.g., differences of 1.0, .57, and .66 scale points for Hayes modems, Canon Elan cameras, and BASF computer disks respectively). Overall, brand familiarity explained over 30% of the variability in the differences between the means of the two scales.

Similar results were found for the standard deviations of the two scales. These results are presented in the last three columns of Table 3, and the absolute value of the differences in the standard deviations are graphed in Figure 4. Because the majority of uninformed respondents used the midpoint of the forced opinion scale, the greater number of midpoint responses for unfamiliar brands tended to reduce the standard deviation (more responses are very similar to each other). Notice that for 11 of the 13 least familiar brands, the standard deviation was smaller for the forced opinion scale. This is because these brands had the greatest number of uninformed respondents, who tended to use the midpoint of the forced opinion scale. The significant negative correlation between brand familiarity and the absolute value of the difference in the two scale standard deviations indicates that the less familiar respondents were with the brand, the greater was the difference between the standard deviation of the forced opinion responses and what the standard deviation would have been if uninformed respondents were allowed to report that they had no opinion. Brand familiarity explained over 38% of the variability in the differences in the standard deviations of the two scales. In sum, the means and standard deviations of brand attitudes about familiar (unfamiliar) brands measured on forced opinion scales were similar to (different from) the means and standard deviations of brand attitudes measured on the scale with the DK option. Using a forced opinion scale to measure attitudes of less familiar brands can lead to reduced validity in terms of the mean and standard deviation of response distributions.

FIGURE 4

ABSOLUTE VALUE OF DIFFERENCE IN STANDARD DEVIATIONS FOR THE FORCED OPINION SCALE AND THE SCALE THAT OFFERED A DK OPTION BY PRODUCT FAMILIARITY

Conclusions

The purpose of this research was to extend past research on the uninformed response bias by (1) examining the uninformed response bias associated with measuring consumers’ attitudes toward brands, (2) examining the relationship between the amount of uninformed response bias and consumers’ brand familiarity, and (3) examining the effects of uninformed responses on the validity of forced opinion scales that do not offer a DK option.

What does the midpoint of a semantic differential scale mean? If respondents are not given a DK option the midpoint on a forced opinion scale can mean either a moderate opinion, or be the result of an uninformed respondent using the midpoint as a means of not providing either a positive or negative opinion. In this research, the percentage of midpoint responses that were actually uninformed responses was significantly negatively correlated with brand familiarity, ranging from 87.2% for less familiar brands to 0% for highly familiar brands.

How much uninformed response bias is associated with consumers’ brand attitudes? Much of past research has attempted to estimate a "generalizable" result of the amount of uninformed response bias associated with survey research (Bishop, Tuchfarber, & Oldendick, 1986; Hawkins & Coney, 1981; Schneider, 1985). However, the search for a generalizable finding about the amount of uninformed response bias in consumer market research is misguided. In this research, the amount of uninformed response bias was significantly negatively correlated with brand familiarity, ranging from 76% for unfamiliar brands to less than 2% for highly familiar brands.

The effects of uninformed response bias will be greater for less familiar brands when consumers are not given a DK option. Less brand familiarity and greater uninformed response bias can cause the average of brand evaluations to become more moderate (closer to the midpoint of a scale) and reduce the standard deviation of the response distribution. Marketers of less familiar brands should seriously consider giving consumers a DK option when measuring attitudes toward their brands. Otherwise, the greater number of midpoint responses can cause marketers to mistakenly conclude that brand attitudes are less favorable than in reality (a brand that consumers like, if they are familiar with it), or that brand attitudes are more favorable than in reality (a brand that consumers dislike, if they are familiar with it). And, any tests of differences in brand attitudes of less familiar brands compared to attitudes of competing brands can lead to incorrect conclusions due to the artificial decrease in variability as a result of the greater number of midpoint responses.

Future research should continue to examine the effects of uninformed response bias in measuring consumers’ brand attitudes. This research examined a 7-point semantic differential scale. Future research could examine uninformed response bias associated with scales that have fewer or more categories, as well as scales with verbal labels (e.g., Likert scales). Also, research should examine the effects of pressuring consumers to respond to survey questions. The language used in the introductory instructions to surveys can have a significant effect on the amount of effort consumers put toward determining their attitudes, as opposed to simply responding with a "don’t know" response.

REFERENCES

Bishop, George F., Tuchfarber, Alfred J., and Oldendick, Robert W., Opinions On Fictitious Issues: The Pressure To Answer Survey Questions. Public Opinion Quarterly 50 (1986): 240-250.

Bishop, George F., Oldendick, Robert W., Tuchfarber, Alfred J., Effects Of Filter Questions In Public Opinion Surveys. Public Opinion Quarterly 47 (1983): 528-546.

Bishop, George F., Oldendick, Robert W., Tuchfarber, Alfred J., Bennett, Stephen E., Pseudo-Opinions And Public Affairs. Public Opinion Quarterly 44 (Summer 1980): 198-209.

Churchill, Gilbert A., Basic Marketing Research, The Dryden Press 1996.

Collett, Peter, and O’Shea, Gregory, Pointing The Way To A Fictional Place: A Study Of Direction Giving In Iran And England. European Journal of Social Psychology 6 (1976): 447-458.

Gill, Sam N. How Do You Stand On Sin? Tide 72 (March 14, 1947).

Hartley, Eugene L., Problems In Prejudice. New York: Octagon Press 1946.

Hawkins, Del I., and Coney, Kenneth A., Uninformed Response Error In Survey Research. Journal of Marketing Research 28 (August 1981): 370-374.

Kalton, Graham, and Schuman, Howard, The Effect Of The Question On Survey Response Rates: A Review. Journal of the Royal Statistical Society, Series A 145 (part 1, 1982): 44-45.

Yu, Julie, and Cooper, Harris, A Quantitative Review Of Research Design Effects On Response Rates To Questionnaires. Journal of Marketing Research 20 (February 1983): 36-44.

Kanuk, L., and Berenson, C., Mail Surveys And Response Rates: A Literature Review. Journal of Marketing Research 12 (1975): 440-453.

Kolson, Kenneth L., and Green, Justin J., Response Set Bias And Political Socialization Research. Social Science Quarterly 51 (1970): 527-538.

Linsky, A., Stimulating Response To Mailed Questionnaires: A Review. Public Opinion Quarterly 39 (1975): 82-101.

Poe, Gail S., Seeman, Isadore, McLaughlin, Joseph, Mehl, Eric, and Dietz, Michael, "Don’t Know" Boxes In Factual Questions In A Mail Questionnaire: Effects On Level And Quality Of Response. Public Opinion Quarterly 52 (1988): 212-222.

Schneider, Kenneth C., Uninformed Response Rates In Survey Research: New Evidence. Journal Of Business Research 13 (1985): 153-162.

Schuman, Howard, Presser, Stanley, Questions And Answers In Attitude Surveys. New York: Academic Press 1981.

Schuman, Howard, Presser, Stanley, Public Opinion And Public Ignorance: The Fine Line Between Attitudes And Nonattitudes. American Journal of Sociology 85 (March 1980): 1214-1225.

Spagna, Gregory J., Questionnaires: Which Approach Do You Use? Journal of Advertising Research 24 (February / March 1984): 67-70.
