
Advances in Consumer Research, Volume 17, 1990, Pages 212-216

THE DISRUPTIVE EFFECTS OF SELF-REFLECTION: IMPLICATIONS FOR SURVEY RESEARCH

Timothy D. Wilson, University of Virginia

Dolores Kraft, University of Virginia

Douglas J. Lisle, University of Virginia

[This work was supported by grant MH41841 from the National Institute of Mental Health.]

ABSTRACT -

Asking people to explain their attitudes has been found to lead to temporary attitude change. Further, these new attitudes have been found to be poor predictors of behavior. The implications of these findings for survey research are discussed. Asking "why" questions before attitude measures can be disruptive, especially for people with relatively inaccessible attitudes.

There has been a growing interest in how attitudes can be influenced by the techniques used to measure them. Such factors as question wording, question order, and response format have been found to influence attitude reports (e.g., Hogarth 1982; Feldman and Lynch 1988; Schuman and Presser 1981; Tourangeau and Rasinski 1988). As a result, both survey and experimental researchers need to be aware of a host of potential pitfalls when assessing attitudes, such as creating an attitude where none existed before or altering an attitude by the way it is measured.

In the past few years we have investigated a problem with attitude measurement that has not received much attention. We have been interested in what happens when people are first asked to explain why they feel the way they do about an attitude object, and then are asked to report their attitude. In several studies we have found that asking people to explain an attitude can change it, at least temporarily, and can reduce the extent to which the reported attitude predicts subsequent behavior.

We have developed the following set of arguments to explain these findings. (For a more detailed discussion, see Wilson in press and Wilson, Dunn, Kraft, and Lisle 1989.) When asked to explain an attitude, people are rarely at a loss for reasons. Of the hundreds of people we have asked for reasons, almost no one has said, "I don't know," even when they were told that no one would ever look at what they wrote. Interestingly, however, there is a fair amount of evidence that people sometimes do not know why they feel the way they do (Nisbett and Wilson 1977; Wilson and Stone 1985). There has been some controversy over the extent of this lack of awareness (e.g., Smith and Miller 1978), though few researchers would dispute the claim that, at least at times, people have difficulty knowing the exact causes of their attitudes.

Given that people are sometimes unaware of the reasons for their attitudes, what determines the reasons they will report? We suggest that people are susceptible to a variety of availability effects (Tversky and Kahneman 1973) when reporting reasons. First, rational cognitions about the attitude object are, in our culture at least, viewed as the most plausible causes of attitudes, and hence are most available in memory. For example, when explaining why we like various political candidates, we are more likely to call upon such factors as their stance on the issues than such seemingly implausible things as the number of times we have seen their ads on television or whether their party affiliation is the same as our parents', even though these latter factors have been shown to influence people's attitudes. Other factors might be available in memory because they are easy to verbalize or were encountered recently. For example, if we happened to have just seen a televised debate where a candidate looked worn and ineffective, we might exaggerate the extent to which this contributes to our overall impression of him or her.

As a result of these availability effects, the reasons people bring to mind might imply a somewhat different attitude than they previously held. For example, suppose that a person has a generally favorable attitude toward Candidate X. When asked to explain why she likes this candidate, however, what comes to mind is that he looked worn and ineffective during a recent debate, and that his position on the death penalty is at variance with her own. What will happen? We have found in several studies that people adopt the attitude implied by their reasons (see Wilson, Dunn, Kraft, and Lisle 1989). Thus, if we asked this person how she felt about Candidate X she would be likely to report a somewhat negative attitude, at least more negative than if she had not focused on why she felt the way she did.

Two important points should be noted about the attitude change that results from analyzing reasons. First, it can be difficult to predict the direction of this change. The kinds of reasons that are available in memory for one person might be primarily negative, leading to change in a negative direction. The reasons that are available to another person might be primarily positive, resulting in change in the opposite direction. We found this to be the case in a recent study of political attitudes (Wilson, Kraft, and Dunn 1989). Subjects who analyzed reasons were more likely to change their attitudes toward political candidates, but this change was not in a common direction. Subjects who listed negative reasons tended to change their attitudes in a negative direction, whereas subjects who listed positive reasons tended to change in a positive direction. (Incidentally, subjects in this study, as in most of the ones we have conducted, were asked to list their reasons privately and anonymously to "organize their thoughts." Subjects believed that no one would ever see what they wrote, which reduces the plausibility of self-presentational or demand characteristic interpretations of the results.)

Second, the attitude change we have observed does not appear to be particularly long lasting. Indeed, it would be rather surprising if a permanent change in people's attitudes could be brought about simply by asking them to explain why they felt the way they did. Instead, over time people's original attitude seems to "snap back." Even though this attitude change is temporary, however, it can be consequential, in at least two respects. First, it can be consequential to researchers who are trying to predict people's behavior from their attitudes. If people report a new attitude that is only temporary, and revert back to their original position shortly thereafter, then this reported attitude will probably be a poor predictor of their behavior. Their behavior will be driven by their original position, at least if enough time has passed to allow this attitude to snap back. Consistent with this argument, Wilson, Dunn, Kraft, and Lisle (1989) reviewed 10 studies that manipulated whether subjects explained their attitudes and assessed the correlation between people's attitudes and behavior. When people did not explain why they felt the way they did, the average attitude-behavior correlation in these studies was .54. When they did explain their attitudes, the average correlation was only .17. Averaging across the studies, this difference was highly reliable. Thus, when people analyze reasons, the attitude they report is not very predictive of their later behavior.
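As a purely illustrative sketch of the comparison being summarized, the following computes an attitude-behavior correlation within each of several hypothetical studies and then averages those correlations across studies. The data, the number of subjects per study, and the use of a Fisher-z average are our own assumptions for the example; they are not the data or procedures of the studies reviewed by Wilson, Dunn, Kraft, and Lisle (1989).

```python
# Illustration only: invented data, not the reviewed studies.
import numpy as np

def attitude_behavior_r(attitudes, behaviors):
    """Pearson correlation between attitude reports and a behavioral measure."""
    return np.corrcoef(attitudes, behaviors)[0, 1]

def average_r(rs):
    """Average correlations via Fisher's z transform (an assumed convention;
    the averaging method is not specified in the text)."""
    return np.tanh(np.mean(np.arctanh(np.asarray(rs))))

rng = np.random.default_rng(0)
rs_no_reasons, rs_reasons = [], []
for _ in range(10):                                   # ten hypothetical studies
    attitude = rng.normal(size=40)
    # Behavior tracks the reported attitude closely when no reasons were analyzed...
    rs_no_reasons.append(attitude_behavior_r(attitude, attitude + rng.normal(size=40)))
    # ...and only weakly when the reported attitude was a temporary shift.
    rs_reasons.append(attitude_behavior_r(attitude, 0.2 * attitude + rng.normal(size=40)))

print("average r, no-reasons conditions:", round(average_r(rs_no_reasons), 2))
print("average r, reasons conditions:   ", round(average_r(rs_reasons), 2))
```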

The attitude change resulting from analyzing reasons can also be of some consequence to the person doing the analyzing. Imagine that people have to make a decision, such as a choice between different consumer goods. If they think about why they feel the way they do about each alternative, they might change their minds about which item they prefer, and make a different choice than they would have had they not analyzed reasons. If their initial attitude later snaps back, however, they might come to regret the choice they made. We have obtained results consistent with this hypothesis (Wilson, Lisle, and Schooler 1989). In one study, for example, subjects evaluated five different art posters. Those asked to explain their evaluations changed their minds about which ones they preferred, and chose different ones to take home. When telephoned a few weeks later, however, they were significantly less satisfied with their choice than were subjects who had not analyzed reasons.

Our concern has primarily been with how these processes operate in everyday life, and with the effects they have on attitude change, attitude-behavior consistency, and decision making. It is fairly common to be asked to explain why we feel the way we do about something, such as, "Why do you like or dislike George Bush?" "Why do you prefer this particular brand of laundry detergent?" "Could you explain your reaction to the movie, Do the Right Thing?" Thus, there are some important implications of these processes for everyday kinds of problems.

We have also considered the implications for survey research, which is closer to the topic of this symposium. As far as we can tell, it is not all that common to include "why" questions on surveys, but they are sometimes asked. To the extent that they are asked before questions assessing people's attitudes, they might cause problems. Two studies have been conducted to test this hypothesis (Wilson and Pollack 1989). In the first, all students taking introductory psychology at the University of Virginia were given an attitude questionnaire at the beginning of the semester. This questionnaire asked subjects to rate their attitudes toward the death penalty, school busing to achieve racial integration, and national health insurance, as well as several other issues, on 7-point scales.

Several weeks later a random sample of these students was telephoned and asked to participate in an ostensibly unrelated attitude survey. Of the 116 reached by phone, 97% agreed to participate. These subjects were randomly assigned to one of two conditions: In the control condition they were asked the same attitude questions they had completed earlier about the death penalty, busing, and national health insurance. In the reasons condition, subjects were first asked to explain why they felt the way they did about an issue. The interviewer said that she would name an issue, and that the first thing she wanted the respondent to do was to "tell me why you feel the way you do about it." Subjects then answered the same attitude questions as in the control condition. The order in which the questions were asked was fully counterbalanced in both conditions.

The first question we addressed was whether the reasons manipulation changed subjects' reported attitudes in a common direction. For example, did subjects who first gave reasons become more favorable toward the death penalty, on the average? We did not expect such change to occur. As discussed earlier, the kinds of reasons that came to subjects' minds were likely to be pro for some subjects but anti for others, which would not produce a shift in attitudes in the same direction. This prediction was borne out. The mean attitudes reported in the telephone survey did not differ appreciably from the mean attitudes reported at the beginning of the semester in either condition.

We did expect attitude change to occur in the reasons condition when bidirectional shifts are considered. According to our hypothesis, some subjects who analyzed reasons brought to mind a set of reasons that was biased in the pro direction for an issue, whereas others brought to mind a set of reasons that was biased in the anti direction, causing attitude change in different directions. If so, then the absolute value of the difference between attitudes at Times 1 and 2 should be greater in the reasons condition than in the control condition. This prediction was confirmed, as seen in Table 1. Though the differences in attitude change between the reasons and control conditions were not particularly large, averaged across the three issues these differences were reliable, p < .05.

TABLE 1

ABSOLUTE VALUES OF THE DIFFERENCE BETWEEN ATTITUDE RESPONSES AT THE GROUP TESTING SESSION AND THE TELEPHONE SURVEY (FROM WILSON AND POLLACK, 1989, STUDY 1)
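To make the index in Table 1 concrete: for each respondent it is the absolute value of the Time 2 rating minus the Time 1 rating, averaged within condition. A minimal sketch of that computation on invented 7-point ratings (the numbers below are ours, not the survey data) might look like this:

```python
# Hypothetical 7-point attitude ratings at Time 1 (group testing session)
# and Time 2 (telephone survey), by condition. Only the |Time 2 - Time 1|
# index mirrors the paper; the numbers are invented.
import numpy as np

ratings = {
    "control": {"t1": np.array([2, 5, 7, 4, 6]), "t2": np.array([2, 5, 6, 4, 6])},
    "reasons": {"t1": np.array([2, 5, 7, 4, 6]), "t2": np.array([4, 3, 7, 6, 5])},
}

for condition, r in ratings.items():
    abs_change = np.abs(r["t2"] - r["t1"])     # sign ignored: bidirectional change
    print(f"{condition}: mean |Time 2 - Time 1| = {abs_change.mean():.2f}")
```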

It should be noted that the issues we used in Study 1 were probably not ones that most students had thought much about. In some of our other studies, we have found that there is a class of people who are immune to the effects of analyzing reasons: those who are especially experienced with or knowledgeable about the attitude issue (Wilson, Kraft, and Dunn 1989). It was not clear from our earlier work, however, exactly why knowledgeable people are unaffected by analyzing reasons. One reason might be that knowledgeable people are more likely to know why they feel the way they do, and thus less likely to bring to mind a biased set of reasons. Another possibility, however, is that knowledgeable people simply have stronger attitudes that are less malleable than those of unknowledgeable people.

Attitude strength is a variable that has been addressed in various guises by many attitude researchers. Such variables as attitude accessibility (Fazio 1989), affective-cognitive consistency (Rosenberg 1960), involvement (Sherif 1980), importance (Judd and Krosnick 1988), and conviction (Abelson 1988) have been discussed, but the extent to which these constructs overlap is not entirely clear. The chief purpose of Study 2 was to see which of these measures of attitude strength, if any, moderates the effects of analyzing reasons in a survey.

Study 2 was identical to Study 1, except for the following changes: subjects participated in an initial session individually and answered questions about only one attitude issue, how they felt about Ronald Reagan. (This issue was chosen because we expected more variance in attitude strength on it than on the issues used in Study 1.) In addition to giving their overall evaluation of Reagan at this session, subjects also completed a battery of measures of attitude strength, including all of those mentioned in the preceding paragraph. Attitude accessibility was measured with a procedure developed by Fazio, in which the latency of subjects' response to an attitude question about Reagan was assessed (the faster the response, the more accessible the attitude). Details of how the other variables were measured can be found in Wilson and Pollack (1989). Several weeks after the initial session, subjects participated in a phone survey that was identical to the one used in Study 1, except that only one attitude issue, Reagan, was included. As in Study 1, half of the subjects first explained why they felt the way they did about Reagan, whereas the others did not.
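As a rough sketch of the logic behind the latency measure (not Fazio's actual procedure or apparatus), one could simply time how long a respondent takes to answer an evaluative prompt, treating faster responses as indicating a more accessible attitude:

```python
# Toy illustration of latency-based attitude accessibility: time the interval
# between presenting an evaluative prompt and receiving a response. This is
# not Fazio's procedure, only the "faster response = more accessible" idea.
import time

def timed_attitude_response(prompt):
    start = time.monotonic()
    answer = input(prompt + " (good/bad): ")
    latency = time.monotonic() - start          # seconds from prompt to answer
    return answer, latency

if __name__ == "__main__":
    answer, latency = timed_attitude_response("How do you feel about Ronald Reagan?")
    print(f"response: {answer!r}, latency: {latency:.2f} s")
```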

The results of Study 1 were replicated, in that subjects who analyzed reasons showed more bidirectional attitude change than subjects who did not. Because attitudes were assessed on different scales at Times 1 and 2 in this study, they were converted to standard scores. The mean absolute value of the Time 2 - Time 1 difference in attitudes was .66 in the reasons condition and .51 in the control condition, a difference that was nearly significant (p = .06). Clearly, however, this difference is not particularly large. To see whether it was larger for people with weak attitudes, subjects were divided at the median on the various measures of attitude strength that were included at Time 1. The one that moderated the effects of analyzing reasons the most was Fazio's measure of attitude accessibility. To unconfound accessibility and attitude extremity, people were divided into high and low accessibility groups via median splits at each level of attitude response at Time 1 (see Fazio and Williams 1986). As seen in Table 2, people with relatively inaccessible attitudes (i.e., slow response times at Time 1) were susceptible to our reasons analysis effect at Time 2, p = .004. People who had relatively accessible attitudes (i.e., fast response times) at Time 1 were uninfluenced by analyzing reasons at Time 2. This pattern of findings resulted in a significant Reasons x Attitude Accessibility interaction, p = .03. None of the other measures of attitude strength interacted significantly with the reasons analysis manipulation.

TABLE 2

ABSOLUTE VALUES OF THE DIFFERENCE BETWEEN STANDARDIZED ATTITUDE RESPONSES TOWARD RONALD REAGAN AT INITIAL SESSION AND TELEPHONE SURVEY (FROM WILSON AND POLLACK, 1989, STUDY 2)
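The analysis steps described above (standardizing the Time 1 and Time 2 attitude measures, median-splitting respondents on response latency within each level of the Time 1 attitude response, and comparing absolute standardized change across the Reasons x Accessibility cells) can be sketched as follows. The data frame, column names, and values are assumptions for illustration; the actual data and significance tests are reported in Wilson and Pollack (1989).

```python
# Sketch of the Study 2 analysis steps on an invented data frame. Column
# names ("t1", "t2_raw", "latency", "reasons") and all values are assumed;
# the real data and tests appear in Wilson and Pollack (1989).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "t1": rng.integers(1, 8, n),                    # Time 1 attitude toward Reagan (7-point)
    "t2_raw": rng.integers(1, 101, n),              # Time 2 attitude on a different scale
    "latency": rng.exponential(2.0, n),             # Time 1 response latency in seconds
    "reasons": rng.integers(0, 2, n).astype(bool),  # explained reasons before Time 2?
})

# 1. Convert both attitude measures to standard scores, since the Time 1 and
#    Time 2 scales differ.
df["z1"] = (df["t1"] - df["t1"].mean()) / df["t1"].std()
df["z2"] = (df["t2_raw"] - df["t2_raw"].mean()) / df["t2_raw"].std()

# 2. Median-split on latency within each level of the Time 1 attitude response,
#    to unconfound accessibility and attitude extremity.
df["accessible"] = df.groupby("t1")["latency"].transform(lambda s: s <= s.median())

# 3. Attitude-change index: absolute standardized Time 2 - Time 1 difference.
df["abs_change"] = (df["z2"] - df["z1"]).abs()

# 4. Cell means for the Reasons x Attitude Accessibility layout (cf. Table 2).
print(df.groupby(["reasons", "accessible"])["abs_change"].mean().round(2))
```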

It is not entirely clear why attitude accessibility was the best moderator of the effects of analyzing reasons. One possibility is that if an attitude is highly accessible, it is unlikely that people will bring to mind reasons that are inconsistent with it. People with inaccessible attitudes may be more likely to consider reasons that conflict with their attitude, and as a result are more likely to change their minds after analyzing reasons.

IMPLICATIONS

In some respects, survey researchers might not find the present results very disconcerting. First, asking people why they feel the way they do about an attitude object appears not to be very common. Even when such questions are asked, our findings suggest that they are likely to influence the reported attitudes only of people with relatively inaccessible attitudes. Considered in another light, however, the results may have important implications. Sometimes a substantial proportion of the populace holds relatively weak and inaccessible attitudes, as with attitudes toward a new, unfamiliar product. Further, "why" questions are sometimes asked on surveys. If they precede questions about people's attitudes, they might well produce momentary attitude change and make the attitudes less predictive of future behavior. Thus, one clear implication of our findings is that when "why" questions are asked, they should follow rather than precede attitude measures.

Surveys are not the only place where people are asked to explain their attitudes. It is not that uncommon to be asked by a friend or an acquaintance why we feel the way we do; thus our findings have implications for attitude change and attitude-behavior consistency in everyday life. We have discussed these implications in some detail elsewhere (Wilson in press; Wilson, Dunn, Kraft, and Lisle 1989). Suffice it to say here that there may be times when it is advisable not to spend too much time introspecting about our reasons when trying to determine how we feel about something. This argument is compatible with a recent model proposed by Feldman and Lynch (1988), who suggested that "momentarily activated cognitions have disproportionate influence over judgments made about an object or on related behaviors performed shortly after their activation" (p. 421).

REFERENCES

Abelson, Robert P. (1988), "Conviction," American Psychologist, 43 (April), 267-275.

Fazio, Russell H. (1989), "On the Power and Functionality of Attitudes: The Role of Attitude Accessibility," in Attitude Structure and Function, ed. Anthony R. Pratkanis, Steven J. Breckler, and Anthony G. Greenwald, Hillsdale, NJ: Erlbaum, 153-179.

Fazio, Russell H. and Williams, Carol J. (1986), "Attitude Accessibility as a Moderator of the Attitude-Perception and Attitude-Behavior Relations: An Investigation of the 1984 Presidential Election," Journal of Personality and Social Psychology, 51 (March), 505-514.

Feldman, Jack M. and Lynch, John G. Jr. (1988), "Self-generated Validity and Other Effects of Measurement on Belief, Attitude, Intention, and Behavior," Journal of Applied Psychology, 73 (March), 421-435.

Hogarth, Robin M. (1982), Question Framing and Response Consistency, San Francisco: Jossey-Bass.

Judd, Charles M. and Krosnick, Jon A. (1988), "Attitude Importance, Political Expertise, and Attitude Structure," in Attitude Structure and Function, ed. Anthony R. Pratkanis, Steven J. Breckler, and Anthony G. Greenwald, Hillsdale, NJ: Erlbaum, 99-128.

Nisbett, Richard E. and Wilson, Timothy D. (1977), "Telling More Than We Can Know: Verbal Reports on Mental Processes," Psychological Review, 84 (May), 231-259.

Rosenberg, Milton J. (1960), "A Structural Theory of Attitude Dynamics," Public Opinion Quarterly, 24, 319-341.

Schuman, Howard and Presser, Stanley (1981), Questions and Answers in Attitude Surveys, New York: Academic Press.

Sherif, Carolyn W. (1980), "Social Values, Attitudes, and Involvement of the Self," in Nebraska Symposium on Motivation 1979: Beliefs, Attitudes, and Values, ed. M. M. Page, Lincoln, NE: University of Nebraska Press, Vol. 27, 1-64.

Smith, Eliot R. and Miller, Frederick D. (1978), "Limits on Perception of Cognitive Processes: A Reply to Nisbett and Wilson," Psychological Review, 85 (July), 355-362.

Tourangeau, Roger and Rasinski, Kenneth A. (1988), "Cognitive Processes Underlying Context Effects in Attitude Measurement," Psychological Bulletin, 103 (May), 299-314.

Tversky, Amos and Kahneman, Daniel (1973), "Availability: A Heuristic for Judging Frequency and Probability," Cognitive Psychology, 5 (September), 207-232.

Wilson, Timothy D. (in press), "Self-Persuasion via Self-Reflection," in Self-Inference Processes: The Ontario Symposium, ed. James M. Olson and Mark P. Zanna, Hillsdale, NJ: Erlbaum, Vol. 6.

Wilson, Timothy D., Dunn, Dana S., Kraft, Dolores, and Lisle, Douglas J. (1989), "Introspection, Attitude Change, and Attitude-Behavior Consistency: The Disruptive Effects of Explaining Why We Feel the Way We Do." in Advances in Experimental Social Psychology, ed. Leonard Berkowitz, Orlando, FL: Academic Press, Vol. 19, 123-205.

Wilson, Timothy D., Kraft, Dolores, and Dunn, Dana S. (1989), "The Disruptive Effects of Explaining Attitudes: The Moderating Effect of Knowledge About the Attitude Object," Journal of Experimental Social Psychology, 25 (September), 379-400.

Wilson, Timothy D., Lisle, Douglas J., and Schooler, Jonathan (1989), "Some Undesirable Effects of Self-Reflection," unpublished manuscript, Department of Psychology, University of Virginia, Charlottesville, VA 22903.

Wilson, Timothy D. and Stone, Julie I. (1985), "More on Telling More Than We Can Know," in Review of Personality and Social Psychology, ed. Phillip Shaver, Beverly Hills, CA: Sage, Vol. 6, 167-183.

Wilson, Timothy D. and Pollack, Scott (1989), "Effects of Explaining Attitudes on Survey Responses," unpublished manuscript, Department of Psychology, University of Virginia, Charlottesville, VA 22903.
