Cognitive Models For Behavioral Frequency Survey Questions



Citation:

E. Marla Felcher and Bobby Calder (1990) ,"Cognitive Models For Behavioral Frequency Survey Questions", in NA - Advances in Consumer Research Volume 17, eds. Marvin E. Goldberg, Gerald Gorn, and Richard W. Pollay, Provo, UT : Association for Consumer Research, Pages: 207-211.


E. Marla Felcher, Northwestern University

Bobby Calder, Northwestern University

Response Errors

How we ask a question affects how it is answered. In the course of normal conversation, a respondent is typically able to decipher a question's meaning, and the questioner is typically able to correctly interpret the response. For example, if a wife asks her husband if he "wants to see a movie", he can easily ask her to clarify what she means, by posing counter-questions to her initial question. By the close of their conversation, "see a movie" will have been defined in terms of location (home video versus theater), type (comedy versus drama) or perhaps even in terms of a specific movie choice.

This situation becomes more complex when we are being asked a question by an unfamiliar person within an unfamiliar context, as in the case of survey research. In this situation, unlike the husband-wife interaction described above, it is much less likely that the respondent will force the questioner to specify exactly what he/she means when asking a question. Likewise, because the questioner typically does not know anything about the respondent's history, it is less likely that "wrong" or unusual answers will be detected. Consequently, response errors are likely to occur.

Throughout the past fifty years, an extensive body of research has grown out of the effort to identify types of response effects (errors in surveys caused by factors other than sampling procedures), such as respondents misinterpreting a question, intentionally misleading the researcher, or lying (see Bradburn, 1983, for a review). Response effects research has demonstrated main effects such as: face-to-face interviews generally result in over-reporting of behaviors; respondents to telephone interviews under-report behaviors; closed-ended questions in self-administered questionnaires produce under-reporting of behaviors; respondents tend to give longer responses to longer questions; and respondents are likely to remember events as occurring more recently than they actually did (Bradburn, 1983; Schuman and Presser, 1981). Because it pervades all survey research, perhaps the most disturbing of the response effect findings is that question wording and question order can have significant effects on the distribution of responses (Tourangeau and Rasinski, 1988).

A useful question to ask after close to fifty years of research in this area is, "What have we learned?" Basically, what we have learned is that response effects do exist. However, this body of research is riddled with inconsistencies; sometimes the order of presentation of questions affects respondents' answers, while at other times order appears to be irrelevant; sometimes respondents forget events which have occurred in their lives, while at other times they seem to fabricate events which never happened. Both Bishop et al. (1985) and Schuman and Presser (1981) report situations in which they have failed to replicate apparent response effects (question wording and order).

Information Processing Approaches to Response Effects

Frustrated by these seemingly unreliable findings, survey researchers have recently turned to psychology for insights into the underlying cognitive processes involved in answering questions. Initiated by researchers conducting large-scale survey projects, the first of these studies attempted to identify the strategies respondents use in answering survey questions, and to apply basic principles of how we process information to questionnaire design, in the hope of eliciting from respondents more accurate, and overall higher quality, answers (Lessler, Mitzel, Salter, and Tourangeau, 1985; Lessler, Tourangeau, and Salter, 1986). Noting that a respondent's task is to understand a question, retrieve the relevant facts from memory, make a judgment, and respond, Tourangeau (1984) identifies the many opportunities in this process for error. The cognitive psychology literature on comprehension, encoding and retrieval of events, memory, and forgetting provides insights into what can go wrong in this process and the precautions researchers can take to guard against these errors. Most recently, Tourangeau and Rasinski (1988) apply these concepts to the measurement of attitudes, suggesting that in order to answer an attitude question, a respondent must interpret the question, retrieve the relevant beliefs and feelings from long-term memory, use these beliefs and feelings to make a judgment, and use the judgment to choose a response. Applying this information-processing-based model, Tourangeau and Rasinski are able to explain some of the seemingly unreplicable and inconsistent response effects found by previous researchers.

Given the importance of survey research to marketers, one might expect that they would have quickly seized upon this promising new area. However, this has not been the case. To date, only one empirical study addressing this topic has appeared in a major marketing journal (Blair and Burton, 1987). The focus of this study was to identify the strategies used by respondents in answering behavioral frequency questions, and to predict the task conditions affecting strategy choice. Specifically, survey participants were asked how often they had purchased gasoline and clothing, made a long-distance telephone call, attended a movie, and dined at a restaurant within either a two-week, two-month, or six-month time frame (depending upon condition). Blair and Burton found that 28% of the respondents used an episodic enumeration strategy (recalling each specific episode of the behavior), while 56% used an estimation procedure (i.e., constructing a base rate, then converting it to an estimate of overall frequency). A negative relationship was found between question time frame and reported use of episodic enumeration. Finally, an episodic enumeration strategy was used only when reported frequencies were low; no respondents used this strategy when the number of events was greater than ten.
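The distinction between these two strategies can be made concrete with a small sketch. The code below is only our illustration of how a hypothetical respondent might arrive at an answer; the function names, numbers, and dates are assumptions for exposition and are not part of Blair and Burton's procedure.

```python
# Illustrative sketch only: two ways a hypothetical respondent might answer
# "How many times did you dine at a restaurant in the past two months?"

from datetime import date

def episodic_enumeration(recalled_episodes):
    """Count each specific episode the respondent can actually recall."""
    return len(recalled_episodes)

def rate_based_estimation(typical_per_week, weeks_in_time_frame):
    """Construct a base rate, then convert it to an overall frequency estimate."""
    return round(typical_per_week * weeks_in_time_frame)

# Hypothetical data: three restaurant visits the respondent can bring to mind.
recalled = [date(1990, 1, 5), date(1990, 1, 19), date(1990, 2, 2)]

print(episodic_enumeration(recalled))   # 3 -- feasible only when episodes are few and distinct
print(rate_based_estimation(1.5, 8))    # 12 -- "about one or two a week" projected over two months
```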

Blair and Burton's work was useful in indicating to marketers that they should be interested in the process-related issues of question-answering, rather than simply in response distributions. However, they did not go far enough. We need to know why a respondent chooses to use a particular strategy, and how this choice of strategy ultimately affects response accuracy. If our ultimate goal as marketers is to increase the quality of survey research data, we must identify the factors which guide a respondent to answer in a particular way. Blair and Burton (1987) suggest that task conditions such as the time frame of interest and the number of behavioral events reported will dictate strategy choice. However, we should have learned from decades of response effects research that results such as this will simply spawn multitudes of similar studies, indicating that factors such as method of administration (telephone versus personal), length of question wording, question type (open versus closed-ended), and so on, will affect whether a respondent uses an episodic enumeration or estimation strategy in answering a behavioral frequency question. In other words, we will have simply replaced the dependent variable studied by response effect researchers, the distribution of responses, with a new dependent variable, strategy choice. And there is no reason to assume that this new research will be immune from the inconsistencies permeating the existing response effects research. Therefore, it seems that a more fruitful path to take is one briefly mentioned by Blair and Burton (1987) toward the end of their paper: "to gain an understanding, if possible, of the cognitive mechanism that controls selection of a response formulation process" (p. 288). Specifically, in order to understand how respondents answer behavioral frequency questions, we need to take a broader perspective and look at how humans encode, store, and retrieve frequency information in general.

Is the Encoding of Frequency Information an Automatic Process?

A basic assumption in Blair and Burton's research is that frequency estimation is a conscious retrieval process. While they do note that 17% of their respondents reported using "simple direct estimates" of frequency, it is not clear whether this means the respondents had frequencies automatically encoded and stored for the categories requested, or that they were unable to describe their retrieval strategies in the protocols. It seems that an important initial question to ask, therefore, is, "Is frequency information automatically encoded and stored?"

Contrary to Blair and Burton's (1987) assumptions, Hasher and Zacks (1984) strongly contend that the encoding of frequency is an automatic process. They cite a variety of studies indicating that humans are remarkably accurate at estimating the frequency of letters, words, syllables, professions, and diseases, which are "not the sorts of events whose frequency one might be expected to learn deliberately" (p. 1373). For example, after presenting subjects with lists of words on a memory drum, and not forewarning them of an upcoming frequency estimation test, Underwood et al. (1971) found a strong, positive, linear relationship between actual and judged frequencies of words. Furthermore, factors which have been shown to strongly affect recall ability have not been shown to affect the ability to judge frequencies. For instance, the accuracy of frequency estimates is not affected by providing subjects with feedback and additional practice, nor is it affected by individual differences such as intelligence and age (at least between 5 and 20) (Zacks, Hasher, and Sanft, 1982). Finally, factors known to decrease cognitive capacity, such as depression, stress, arousal, and competing tasks, have been shown to decrease problem-solving ability and temporarily suppress I.Q.; however, these factors have no effect on frequency judgments (Hasher and Zacks, 1979). Therefore, because humans are sensitive to frequency without intending to be, training and feedback do not improve frequency estimates, there are few individual differences in the ability to provide accurate frequency estimates, and frequency judgments appear to be invariant with respect to arousal, depression, and competing task demands, Hasher and Zacks (1984) have concluded that the encoding of frequency information is an automatic process.

Contrary to Hasher and Zacks' findings, there is strong evidence suggesting that we do not automatically encode frequency information. For example, while Underwood et al. (1971) demonstrated a positive correlation between actual and estimated frequencies, subjects tended to underestimate the more frequent events and overestimate the less frequent ones. Therefore, if we do automatically encode frequency information, the encoding is not consistently accurate. Johnson et al. (1977, 1979) found that simply imagining that events occur increases the judged frequency of events which actually occur. This indicates that if we do automatically encode frequency information, the encoding process is unable to distinguish between actual and imagined events. Finally, numerous studies indicate that frequency judgments are not invariant across task conditions, contrary to what Hasher and Zacks (1984) claimed. Specifically, Rowe (1974) has shown that semantic processing of words increases the accuracy of frequency judgments, Tversky and Kahneman (1973) have shown that events which are more available are judged to be more frequent, and Greene (1986) has demonstrated that frequency estimates are sensitive to the intentionality of learning. Thus, there appears to be ample evidence indicating that frequency information is not always automatically encoded, and that demands posed at the time of retrieval may significantly affect frequency judgments.

Encoding of Frequency for Autobiographical Events

At this point it is useful to interpret the evidence presented thus far in relation to memory for autobiographical events. Specifically, we may ask, "Is frequency encoding automatic for real-world and autobiographical events?" The answer to this question will have strong implications for survey researchers. For example, if it is found that the encoding is automatic, then research on retrieval strategies (i.e., estimation versus episodic enumeration) may be moot. Researchers investigating retrieval strategies are making an a priori assumption that frequencies are not automatically encoded. It is possible that the process of taking protocols to learn about subjects' strategies merely creates a demand effect in which respondents construct strategies in order to seem logical to the researcher and/or to themselves. If the process, or even a portion of the process, is automatic, this will be extremely difficult to uncover through protocols. Furthermore, if frequency information is automatically encoded, is it encoded correctly? It may be that we automatically store relative frequency information, but not absolute frequencies. And finally, in the event that frequency data are automatically encoded correctly, is it possible that these counts get "adjusted" during the retrieval process?

Marketers have not yet directly addressed the automaticity of frequency encoding; however, it may be possible to learn something about the processes involved by reassessing data from studies conducted for other purposes. For example, Burton and Blair (unpublished manuscript) asked college student subjects to estimate how many courses they had completed outside of the college of business. The correlation between actual and judged frequencies ranged from .41 to .86, depending on condition. When the researchers did not specify how much time the subjects should take to respond, the correlation between actual and judged frequency was .41. When subjects were instructed to take 10-20 seconds, the correlation was similar, at .47. However, when told to take a full 70 seconds to answer, the correlation jumped to .86. If frequency data were automatically encoded and invariant to the retrieval task, such a large increase in accuracy when subjects were told to take more time to answer would be unlikely. Therefore, these results suggest that time constraints may affect the choice of retrieval strategy. When under time pressure, it is possible that subjects respond with an automatically encoded, "rough" frequency judgment. However, when these time constraints are relaxed, they engage in more conscious, effortful strategies, perhaps adjusting the automatically encoded "rough" estimate. (While this explanation is plausible, it is not certain, as the study was not designed to test these hypotheses.)

Two additional studies indicate that the judgment of frequency of autobiographical events is not a totally automatic process. In one study, subjects were queried on their frequency of visits to the dentist (Lessler, Tourangeau, and Salter, 1986). Similar to Burton and Blair (unpublished manuscript), these researchers found that the time taken to formulate a response significantly affected accuracy. Specifically, given more time to answer, subjects over-reported visits; when given less time, they significantly under-reported visits. In a similar vein, Means et al. (1988) found that subjects consistently under-reported visits to an HMO for recurring health problems (i.e., when the patient needed to make multiple visits for the same problem). However, they discovered that prompting respondents with contextual and time-line cues (i.e., reasons they may have made the visits, times they were likely to have made the visits, etc.) significantly increased the accuracy of estimates. Again, if frequency encoding were a totally automatic process, these cues would have had no effect on frequency judgments.

The Retrieval Process

It is interesting to note that both Lessler et al. (1986) and Means et al. (1988) found that providing respondents with cues significantly increased the accuracy of their frequency judgments. Lessler et al. found that "reasons" cues were the most helpful in prompting more accurate estimates, while Means et al. found that general contextual cues worked best. In order to understand why certain cues are more beneficial than others in prompting accurate responses, it is useful to take the approach advocated by Tourangeau and Rasinski (1988) and look at the cognitive demands placed upon a respondent when asked a survey question. In this specific case, we must ask, "What cognitive processes must occur for a respondent to be able to accurately answer a behavioral frequency question?" Kolodner (1983) provides valuable insights into these processes through her theory of reconstructive memory.

Kolodner's model of memory focuses on the relationship between the structure of memory and the retrieval process. Her research is driven by the observation that while humans are experts at retrieving general autobiographical information, when they are asked for details of specific episodes, this information is typically not available. Retrieval, then, is not a simple task of direct enumeration, but rather "... a process of reconstructing what must have happened" (p. 284). The cues provided by Lessler et al. (1986) and Means et al. (1988) simply help the survey respondent reconstruct "what must have happened".

Kolodner posits that related events which occur in our lives are organized into Episodic Memory Organization Packets (E-MOPs). Each E-MOP will therefore contain multiple, related events. For example, we may have an E-MOP for "Movies I Have Seen", the component parts of which are individual instances of going to the movies. When an individual event is integrated into an existing E-MOP, it is indexed according to the features which differentiate it from other events already in the E-MOP. Differentiating features used to index individual episodes into an existing "Movies I Have Seen" E-MOP may be the people you have seen the movie with, movie types, places, times, etc. Different people may have different organizing E-MOPs, but all E-MOPs are organized according to differentiating features. Finally, events within an E-MOP will have similarities which allow for the construction of generalized episodes. For example, while the "Movies I Have Seen" E-MOP will contain different movie-watching episodes, these episodes will share many similarities.

This E-MOP structure plays an important role in the retrieval process. In answering a behavioral frequency question, the query will typically guide the respondent to an appropriate E-MOP. For example, if asked, "How many times have you gone to the movies in the past four weeks?", the memory search will be directed to the "Movies I Have Seen" E-MOP. "Going to the movies" is defined as the target event. If the respondent has seen multiple movies, there will be multiple events stored in this E-MOP. However, the question asked in this way has failed to provide cues which will help differentiate the E-MOP's component events. Therefore, the target event must be further specified; differences between the E-MOP's component events must be specified in order for the respondent to be able to answer the question. Specifically, Kolodner theorizes that the original target event must be narrowed down through the specification of features. Recall that when an event is integrated into an E-MOP, it is indexed according to those features which differentiate it from similar events. During retrieval, differentiating features are specified, and the events which are indexed by these features are retrieved.

The problem with behavioral frequency questions is that they are too general; they do not provide the respondent with enough features on which he/she can differentiate similar episodes. When asked the question, "How many times have you gone to the movies in the past four weeks?", a respondent may attempt to direct her memory retrieval process by thinking of the titles of the movies she has seen, the nights of the week she is likely to have gone to the movies, or the people she is likely to have gone to the movies with. By doing so, she is essentially elaborating on the original question by generating differentiating features. The original question posed to her did not specify the differentiating features which would allow the traversal process to occur through the E-MOP; therefore, she had to generate possible differentiating features herself. The cues which Lessler et al. (1986) and Means et al. (1988) found to be so effective were serving as these differentiating features, without which respondents were unable to construct an answer.
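The indexing and traversal mechanics just described can be sketched in code. The sketch below is only our reading of the E-MOP idea, not Kolodner's actual computer model; the class name, the features, and the example events are hypothetical.

```python
# Minimal sketch of an E-MOP-like structure (assumed illustration, not Kolodner's model):
# events are indexed by differentiating features, and retrieval proceeds by specifying
# features that traverse those indices.

from collections import defaultdict

class EMOP:
    """Toy Episodic Memory Organization Packet."""

    def __init__(self, name):
        self.name = name
        self.events = []                  # individual episodes in this E-MOP
        self.index = defaultdict(set)     # (feature, value) -> positions of matching events

    def add_event(self, event):
        """Integrate an event, indexing it by the features that differentiate it."""
        pos = len(self.events)
        self.events.append(event)
        for feature, value in event.items():
            self.index[(feature, value)].add(pos)

    def retrieve(self, **features):
        """Traverse the index using the specified features and return matching events."""
        candidates = set(range(len(self.events)))
        for feature, value in features.items():
            candidates &= self.index[(feature, value)]
        return [self.events[i] for i in sorted(candidates)]

movies = EMOP("Movies I Have Seen")
movies.add_event({"title": "Batman", "companion": "spouse", "place": "theater"})
movies.add_event({"title": "Casablanca", "companion": "spouse", "place": "home video"})

# Feature cues ("movies seen with your spouse", "movies seen in a theater") direct the
# traversal; a bare frequency question supplies no such cues, which is why, on Kolodner's
# account, respondents must generate differentiating features themselves.
print(len(movies.retrieve(companion="spouse")))   # 2
print(len(movies.retrieve(place="theater")))      # 1
```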

Summary

The pervasiveness of survey research in our society makes it an important topic, and one which is especially worthy of marketers' attention. Decades of response effects research have taught us that response distributions are affected by a multitude of factors, and that these effects are not always consistent. In an attempt to understand how survey respondents answer questions, marketers have identified typical response formulation strategies. However, it is our contention that the more fruitful path to take is that advocated by Tourangeau and Rasinski (1988) in their essay on the cognitive processes involved in answering attitude questions. Specifically, they state that an answer to an attitude question is the culmination of a four-step process. First, the respondent interprets the question; then he/she retrieves the relevant beliefs and feelings from long-term memory and applies them to make a judgment. Finally, the judgment is used to formulate a response. The processes involved in answering a behavioral frequency question are similar; the respondent must interpret the question, retrieve the relevant information from long-term memory, and formulate a response.

Instead of focusing solely on retrieval-based strategies (i.e., Blair and Burton, 1987), we need to take a step back and look at some of the assumptions inherent in this work. First is the assumption that response formulation is a retrieval, rather than an encoding, process. Related to this issue is the assumption that frequency is determined consciously, rather than automatically encoded. While this issue has yet to be resolved in the cognitive psychology literature, there is ample evidence on both sides of the argument, suggesting that at least part of the process is automatic and part is effortful. According to Jonides and Naveh-Benjamin (1987), "... it seems to us that the data concerning frequency estimation largely militate against any simple view of the coding mechanism involved. At this point, it seems a reasonable research strategy to investigate the relations among several mechanisms in the coding of frequency estimation" (p. 239). Finally, response strategy research focuses on the response formulation stage, essentially ignoring how the respondent interprets a question and determines what to retrieve from memory. Clearly, survey researchers have much to gain from understanding how these mechanisms operate as well.

REFERENCES

Bishop, G., R. Oldendick, and A. Tuchfarber (1985), "The Importance of Replicating a Failure to Replicate: Order Effects on Abortion Items," Public Opinion Quarterly, 49 (Spring), 105-114.

Blair, Edward, and Scot Burton (1987), "Cognitive Processes Used by Survey Respondents to Answer Behavioral Frequency Questions," Journal of Consumer Research, 14 (September), 280-288.

Bradburn, Norman M. (1983), "Response Effects," in Handbook of Survey Research, eds. P. Rossi, J. Wright, and A. Anderson, New York: Academic Press.

Burton, Scot and Edward Blair (1987), "Task Conditions, Response Formulation Processes, and Response Accuracy for Behavioral Frequency Questions in Surveys," unpublished manuscript.

Greene, Robert L. (1986), "Effects of Intentionality and Strategy on Memory for Frequency," Journal of Experimental Psychology: Learning, Memory and Cognition, 12 (October), 489-495.

Hasher, Lynn and Rose Zacks (1979), "Automatic and Effortful Processes in Memory," Journal of Experimental Psychology: General, 108 (September), 356-388.

Hasher, Lynn and Rose Zacks (1984), "Automatic Processing of Fundamental Information," American Psychologist, 39 (December), 1372-1388.

Johnson, Marcia K., Carol L. Raye, Alvin Y. Wang, and Thomas H. Taylor (1979), "Fact and Fantasy: The Roles of Accuracy and Variability in Confusing Imaginations with Perceptual Experiences," Journal of Experimental Psychology: Human Learning and Memory, 5 (May), 229-240.

Jonides, John and Moshe Naveh-Benjamin (1987), "Estimating Frequency of Occurrence," Journal of Experimental Psychology: Learning, Memory, and Cognition, 13 (April), 230-240.

Kolodner, Janet (1983), "Reconstructive Memory: A Computer Model," Cognitive Science, 7 (October-December), 281-328.

Lessler, Judith, Roger Tourangeau, and William Salter (1986), Cognitive Aspects of Questionnaire Design: Final Project Report, Chicago: NORC.

Means, B., D. Mingay, A. Nigam, and M. Zarrow (1988), "A Cognitive Approach to Enhancing Health Survey Reports of Medical Visits," in Practical Aspects of Memory: Current Research and Issues, Volume 1, eds. M. Gruneberg, P. Morris, and R. Sykes, Chichester: John Wiley & Sons.

Rowe, Edward (1974), "Depth of Processing in a Frequency Judgment Task," Journal of Verbal Learning and Verbal Behavior, 13 (December), 638-643.

Schuman, Howard and Stanley Presser (1981), Questions and Answers in Attitude Surveys, New York: Academic Press.

Tourangeau, Roger (1984), "Cognitive Sciences and Survey Methods," in Cognitive Aspects of Survey Methodology: Building a Bridge Between Disciplines, eds. T. Jabine, M. Straf, J. Tanur, and R. Tourangeau, Washington, D.C.: National Academy Press.

Tourangeau, Roger, Judith Lessler, and William Salter (1986), Cognitive Aspects of Questionnaire Design: Part C, Chicago: NORC.

Tourangeau, Roger and Kenneth Rasinski (1988), "Cognitive Processes Underlying Context Effects in Attitude Measurement," Psychological Bulletin, 103 (May), 299-314.

Tversky, Amos and Daniel Kahneman (1973), "Availability: A Heuristic for Judging Frequency and Probability," Cognitive Psychology, 5 (September), 207-232.

Underwood, Benton J., Joel Zimmerman, and Joel S. Freund (1971), "Retention of Frequency Information with Observations on Recognition and Recall," Journal of Experimental Psychology, 87 (February), 149-162.

Zacks, Rose, Lynn Hasher, and H. Sanft (1982), "Automatic Encoding of Event Frequency: Further Findings," Journal of Experimental Psychology: Learning, Memory, and Cognition, 8 (January), 106-116.
