Some Methodological Issues in Consumer Research

Naresh K. Malhotra, Georgia Institute of Technology
ABSTRACT - This paper offers comments on the four papers presented in the session on Methodological Issues. The four papers are considered in the order in which they were presented. Comments are made on the use of the secondary task technique, questionnaire pretesting with verbal protocol analysis, respondents' moods as a biasing factor in surveys, and quota samples versus probability samples.
Naresh K. Malhotra (1991) ,"Some Methodological Issues in Consumer Research", in NA - Advances in Consumer Research Volume 18, eds. Rebecca H. Holman and Michael R. Solomon, Provo, UT : Association for Consumer Research, Pages: 583-585.

Advances in Consumer Research Volume 18, 1991      Pages 583-585

THE SECONDARY TASK TECHNIQUE

The secondary task technique has been applied in a variety of contexts in the behavioral sciences (e.g., Inhoff and Fleming 1989). Applications in marketing have also surfaced recently (e.g., Lord and Burnkrant 1988; Lord, Burnkrant, and Owen 1989). Marketing history abounds with cases where techniques used in other disciplines were borrowed hastily and applied in marketing settings without an adequate examination of the underlying assumptions and without a critical evaluation of the applicability of those techniques. Hence, the paper by Owen (1990) addressing the assumptions of the secondary task technique is certainly in order. Beyond the assumptions identified by Owen, the following limitations of the secondary task technique are emphasized to caution readers against ill-considered applications of this technique.

The theoretical relationship between performance on the secondary task and constructs such as attention, elaboration, and effort devoted to the primary task is not well understood. Understanding the theoretical relationship of a measure to the constructs it is supposed to measure, or is related to, is an essential requirement for construct validity. Given this lack of theoretical understanding, it would be very difficult even to begin to establish the construct validity of secondary task techniques, such as the RT-probe, as measures of attention, elaboration, and effort.

The use of the secondary task technique becomes all the more problematic in a study where attention, elaboration, and effort are all being measured. In such a case, it is not clear which one of these constructs the secondary task technique is measuring. The secondary task technique is incapable of detecting anything other than apparent interference between concurrently performed tasks. Furthermore, it should be realized that attention, elaboration, and effort are multidimensional constructs. The secondary task technique measures only one dimension of these constructs, perhaps a dimension which is common to all. Hence, exclusive reliance on the secondary task technique to provide a sole measure of these constructs is not appropriate.

The secondary task technique provides only an indirect measure of attention, elaboration, and effort. Given the limitations associated with this technique, it should be used with extreme caution. Perhaps its use for manipulation checks is more defensible than its use as a measure of attention, elaboration, and effort.
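To see why a concurrent-load measure cannot separate these constructs, consider a toy simulation (a hypothetical sketch of my own, not a model from any of the papers discussed): if probe reaction time is driven by the total load on the primary task, then different mixes of attention and elaboration that impose the same total load produce indistinguishable probe RTs.

```python
import random

def probe_rt(attention: float, elaboration: float, base_ms: float = 300.0,
             noise_sd: float = 20.0) -> float:
    """Toy model: probe RT grows with the TOTAL load on the primary task,
    regardless of which construct produces that load.  The coefficients
    are arbitrary and purely illustrative."""
    load = attention + elaboration
    return base_ms + 100.0 * load + random.gauss(0.0, noise_sd)

random.seed(1)
# Two very different attentional states with the same total load...
rt_a = sum(probe_rt(0.8, 0.2) for _ in range(1000)) / 1000
rt_b = sum(probe_rt(0.2, 0.8) for _ in range(1000)) / 1000
# ...yield, on average, the same probe RT, so the measure cannot
# tell attention apart from elaboration.
print(round(rt_a), round(rt_b))
```

The point of the sketch is not the particular numbers but the confound: any latent mixture of constructs that sums to the same load is observationally equivalent under a single interference measure.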

QUESTIONNAIRE PRETESTING WITH VERBAL PROTOCOL ANALYSIS

Designing a questionnaire has many facets. The objectives and steps involved in questionnaire design may be described by the acronym QUESTIONNAIRE (Malhotra 1991, 1992):

Objectives

Q uestions that respondents can answer

U plift the respondent

E rror elimination

Steps

S pecify the information needed

T ype of interviewing method

I ndividual question content

O vercoming inability and unwillingness to answer

N onstructured versus structured questions

N onbiasing question wording

A rrange the questions in proper order

I dentify form and layout

R eproduction of the questionnaire

E liminate bugs by pretesting

Verbal protocol analysis, as advocated by Bolton (1990), tests only one aspect of questionnaire design, namely question wording. Furthermore, there are many issues involved in determining the exact wording of each question. These may be summarized by the acronym WORDING (Malhotra 1991, 1992):

W ho, where, what, when, why, and how

O rdinary words

R egularly, normally, usually, etc. should be avoided

D ual statements (positive and negative)

I mplicit alternatives and assumptions should be avoided

N onleading and nonbiasing questions

G eneralizations and estimates should be avoided

The approach by Bolton examines only limited aspects of question wording. Nevertheless, the use of protocols to test the comprehension, retrieval, judgment, and response difficulties associated with questions is interesting. It is important that this framework be applied with the context of the survey appropriately taken into account. For example, regarding retrieval, Bolton (1990) assumes that "surveys typically elicit memory-based rather than stimulus-based judgments." This assumption is grossly violated in conjoint analysis and other information-processing surveys, where respondents are asked to evaluate a given set of stimuli described in terms of the information provided. At the heart of this pretest methodology lies the coding scheme. Hence, it is important that the codes developed be consistent and complete, and that they be applied uniformly to the protocols. Given the limited aspects of the questionnaire examined by this methodology, it should not be used as the sole method of pretesting. It could, however, be useful in conjunction with other pretest procedures.
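As a concrete illustration of what a consistent, complete, and uniformly applied coding scheme might look like, here is a hypothetical sketch. The four stage names follow the comprehension/retrieval/judgment/response framework discussed above, but the keyword cues and the coding function are invented for illustration, not taken from Bolton's actual scheme.

```python
# Hypothetical coding pass for pretest protocols (illustrative only).
CODES = {
    "comprehension": ["what does", "don't understand", "means"],
    "retrieval":     ["can't remember", "recall", "last time"],
    "judgment":      ["roughly", "guess", "about"],
    "response":      ["none of these", "doesn't fit", "scale"],
}

def code_segment(segment: str) -> str:
    """Assign exactly one code per protocol segment so the scheme is
    applied uniformly; unmatched segments are flagged rather than
    silently dropped, which keeps the scheme complete."""
    text = segment.lower()
    for code, cues in CODES.items():
        if any(cue in text for cue in cues):
            return code
    return "uncoded"  # flag for coder review

print(code_segment("I can't remember my last electric bill"))  # retrieval
```

In practice the cues would be developed from the protocols themselves and checked for intercoder reliability; the sketch only shows the structural properties (one code per segment, explicit residual category) that make such a scheme auditable.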

RESPONDENTS' MOODS AS A BIASING FACTOR IN SURVEYS

There is evidence to suggest that mood states have direct and indirect effects on consumer behavior (Gardner 1985). In the context of judgment and affective reactions, a direct link may involve associations in memory between mood states and affective responses. An indirect effect may involve the effect of mood being mediated by cognitive activity such as information retrieval. Indirectly, mood states may affect evaluations by making mood-congruent items more accessible in memory and thus influencing affective responses (Isen et al. 1978). In examining the impact of moods on evaluations of Norway as a travel destination, Heide and Gronhaug (1990) do not distinguish between these direct and indirect effects.

Based on the available literature, Heide and Gronhaug (1990) hypothesized that positive mood would result in higher evaluations than neutral mood. However, they did not hypothesize any directional effect of negative mood. The literature does indicate that the effects of negative mood states seem to be more heterogeneous than the effects of positive mood states (Isen 1984). The heterogeneity in the effects of negative moods may be attributed to at least two factors. First, negative mood states may themselves be more heterogeneous than positive mood states. Second, processes that terminate negative moods may compete with automatic tendencies to engage in mood-congruent behavior (Clark and Isen 1982; Gardner 1985; Isen 1984). In their study, Heide and Gronhaug (1990) found biases in the mood-congruent direction for both positive and negative moods.

Heide and Gronhaug (1990) also hypothesized a negative correlation between mood effects and level of knowledge about Norway. However, they did not find support for this hypothesis. The reason may well be that almost all the respondents (students in the USA) lacked knowledge about Norway; hence, there was not enough variation on this variable in the sample. To examine the impact of knowledge, this variable should have been experimentally manipulated, just as the mood states were. For example, some subjects could have been provided with information about Norway to induce a high knowledge state. Another reason for the weak or inconclusive results might be that the sample size was small: there were only 65 respondents in total, with about 16 assigned to each treatment condition.
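The small-sample concern can be made concrete with a rough power calculation (an illustrative sketch using a normal approximation; the assumed "medium" effect size of d = 0.5 is my assumption, not a figure reported in the study):

```python
from math import sqrt
from statistics import NormalDist

def approx_power(n_per_cell: float, d: float, alpha: float = 0.05) -> float:
    """Normal-approximation power for a two-sample comparison of means
    with n_per_cell subjects per group and standardized effect size d."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    se = sqrt(2.0 / n_per_cell)        # SE of the standardized mean difference
    return 1 - z.cdf(z_crit - d / se)

# Roughly 16 respondents per condition, medium effect (d = 0.5):
print(round(approx_power(16, 0.5), 2))  # about 0.29
```

With about 16 subjects per cell, the approximate power to detect a medium-sized mood effect is well below the conventional 0.8 benchmark, which is consistent with the inconclusive results for the knowledge hypothesis.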

It appears that the mood states are more likely to have an impact on consumer evaluations when the stimuli are ambiguous, the perceived benefits of being precise are low, and induction and action are contiguous (Gardner 1985). However, more research investigating the effects of specific positive and negative moods on consumer behavior is certainly needed.

QUOTA SAMPLES VERSUS PROBABILITY SAMPLES

The contention of Melnick et al. (1990) that under certain conditions quota samples may be preferred to simple random samples is a reasonable one. Few would argue with it. However, reservations may be expressed about some of the comments and the two studies conducted by Melnick et al. (1990). First, it should be noted that telephone interviews, and not personal interviews, are the dominant mode of data collection in marketing research conducted in the USA (Malhotra 1990). Probability sampling is often employed in telephone surveys. Variants of random digit dialing, particularly directory-based designs, are often used to generate telephone samples. Melnick et al. (1990) are also not quite correct in assuming that the sample size in marketing research is small (200-300). In most commercial applications of marketing research, the sample size is much larger. Syndicated services using omnibus panels often employ large samples of 2,000 or more. In their discussion, the authors imply that panels always use quota samples. This is not true: probability sampling schemes can be, and have been, applied to select samples from panels.

The empirical comparison reported by the authors raises several questions. They compared quota sampling with simple random sampling. However, stratified random sampling is more similar to quota sampling and should have been the probability sampling technique selected for comparison; the population should have been stratified on the same variables and levels used to set the quotas. A second factor which makes the comparison uneven is sample size. The sample size for quota sampling was 2,400, whereas for simple random sampling it was only 600. Since the sample size for quota sampling was four times that for simple random sampling, is it surprising that quota sampling did better? A further factor biasing the results against simple random sampling is that the pattern of nonresponse was deterministic. In practice, nonresponse may not be truly random, but it is also not truly deterministic. A factor biasing the results in favor of simple random sampling is that 1,000 samples were drawn, when in practice only a single sample is drawn.
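The effect of the unequal sample sizes alone can be illustrated with a quick calculation (a sketch that, purely for illustration, applies the simple random sampling formula to both sample sizes; p = 0.5 is the worst-case proportion):

```python
from math import sqrt

def se_proportion(p: float, n: int) -> float:
    """Standard error of a sample proportion under simple random sampling."""
    return sqrt(p * (1 - p) / n)

# The two designs compared by Melnick et al. (1990) differed in size:
se_srs   = se_proportion(0.5, 600)    # simple random sample, n = 600
se_quota = se_proportion(0.5, 2400)   # quota sample, n = 2400

print(round(se_srs, 4), round(se_quota, 4), round(se_srs / se_quota, 1))
# 0.0204 0.0102 2.0
```

Quadrupling the sample size halves the sampling error, so the larger quota sample enjoys a built-in advantage quite apart from any merit of the quota design itself.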

The authors' conclusion may still be reasonable, though for reasons other than those emphasized in their paper. In certain situations, quota sampling may be preferred because the cost of probability sampling is high, sampling errors are small in comparison to nonsampling errors, the nonresponse rate may be high and nonrandom, and the sample size is small. In other instances, where the conditions are just the opposite, probability sampling might well be preferred.

CONCLUSIONS

It is indeed appropriate to have one or more sessions on methodological issues at ACR conferences. The four papers discussed here emphasize the need to examine and validate our measures, pretest questionnaires and other research instruments, take into account the effects of relevant variables which may have a direct or mediating influence on the phenomenon of interest, and adopt a suitable sampling plan. These and other methodological issues are central to the quality of consumer research. If the goal is to accumulate unequivocal findings in consumer behavior, it is imperative that the research conducted be methodologically sound.

REFERENCES

Bolton, Ruth N. (1990), "An Exploratory Investigation of Questionnaire Pretesting With Verbal Protocol Analysis", in Advances in Consumer Research, Vol. 18, eds. Rebecca H. Holman and Michael R. Solomon.

Clark, Margaret and Alice Isen (1982), "Toward Understanding the Relationship Between Feeling States and Social Behavior," in Cognitive Social Psychology, eds. Albert Hastorf and Alice Isen, New York: Elsevier/North Holland, 73-108.

Gardner, Meryl P. (1985), "Mood States and Consumer Behavior: A Critical Review," Journal of Consumer Research, 12 (December), 281-300.

Heide, Morten and Kjell Gronhaug (1990), "Respondents' Moods as a Biasing Factor in Surveys: An Experimental Study", in Advances in Consumer Research, Vol. 18, eds. Rebecca H. Holman and Michael R. Solomon.

Inhoff, Albrecht W. and Kevin Fleming (1989), "Probe-Detection Times During the Reading of Easy and Difficult Text," Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(2), 339-351.

Isen, Alice (1984), "Toward Understanding the Role of Affect in Cognition," in Handbook of Social Cognition, eds. Robert Wyer, Jr. and Thomas Srull, Hillsdale, NJ: Lawrence Erlbaum, 179-236.

Isen, Alice, Thomas Shalker, Margaret Clark, and Lynn Karp (1978), "Affect, Accessibility of Material in Memory, and Behavior: A Cognitive Loop?," Journal of Personality and Social Psychology, 36 (January), 1-12.

Lord, Kenneth R. and Robert E. Burnkrant (1988), "Television Program Elaboration Effects on Commercial Processing," in Advances in Consumer Research, Vol. 15, ed. Michael J. Houston, 213-218.

Lord, Kenneth R., Robert E. Burnkrant and Robert S. Owen (1989), "An Experimental Comparison of Self-Report and Response Time Measures of Consumer Information Processing", in Proceedings of the American Marketing Association 1989 Summer Educators' Conference.

Malhotra, Naresh K. (1990), "Administration of Questionnaires for Collecting Quantitative Data in International Marketing Research," Journal of Global Marketing, Vol. 4, No. 2, forthcoming.

Malhotra, Naresh K. (1991), "Mnemonics in Marketing: A Pedagogical Tool", Journal of the Academy of Marketing Science, forthcoming.

Malhotra, Naresh K. (1992), Marketing Research: An Applied Orientation, New York: Harper-Collins, forthcoming.

Melnick, E. L., R. Colombo, R. Tashjian, and K. R. Melnick (1990), "Sampled Survey Data: Quota Samples Versus Probability Samples", in Advances in Consumer Research, Vol. 18, eds. Rebecca H. Holman and Michael R. Solomon.

Owen, Robert S. (1990), "Clarifying the Simple Assumption of the Secondary Task Technique," in Advances in Consumer Research, Vol. 18, eds. Rebecca H. Holman and Michael R. Solomon.
