Consumer Panels: a Review of Characteristics and Use in Consumer Behavior Research

ABSTRACT - Consumer panels are now commonly used for longitudinal and cross-sectional studies of consumer behavior. This paper reports an analysis of published consumer behavior research using panels that appeared in 12 leading sources during the period 1975-1991. Specific attention is given to reliability and related methodological characteristics of such studies.


Karen F. A. Fox, Gerald Albaum, and Sujata Ramnarayan (1993) ,"Consumer Panels: a Review of Characteristics and Use in Consumer Behavior Research", in E - European Advances in Consumer Research Volume 1, eds. W. Fred Van Raaij and Gary J. Bamossy, Provo, UT : Association for Consumer Research, Pages: 133-141.



Karen F. A. Fox, Santa Clara University, U.S.A.

Gerald Albaum, University of Oregon, U.S.A.

Sujata Ramnarayan, University of Oregon, U.S.A.

[The authors thank M. Venkatesan, University of Rhode Island, for his helpful comments on earlier drafts of the paper.]



Researchers studying consumer behavior have increasingly turned to the use of consumer panels. By following the purchase and consumption behavior of selected households, particularly over time (i.e., longitudinally), researchers can understand such phenomena as brand loyalty and brand-switching, the impact of deals and advertising campaigns, and other marketing-relevant factors. More broadly, it has been stated that the most prevalent type of longitudinal data in the behavioral and social sciences is obtained from panels (Rogosa 1987, p. 2).

At the present time panel studies are conducted by more than two dozen major commercial consumer panel organizations which operate in one or more countries. In the United States, consumer panel organizations include, among others, such well-known national commercial panels as Market Research Corporation of America, National Family Opinion, and Market Facts. In addition, a number of consumer product firms maintain their own consumer panels or create short-term "ad hoc" panels to test new products and promotional techniques. Some universities also maintain consumer panels to obtain research data and to generate revenues by providing data to others. Moreover, the application of electronic and communications technology continues to encourage new types of panels, such as those involving advertising delivery via split-cable television and in-store scanner-recording of purchases.

In the midst of this growing interest in panel research, there exists considerable vagueness about how to set up and conduct a panel, what constitutes panel research, the problems and opportunities associated with panel research, and issues of data analysis and reliability.

The first issue, how to set up a panel, is covered quite well elsewhere. Several key sources, most notably Sudman and Ferber (1979) and Nicosia (1965), provide normative advice on the specifics of setting up panels. Sudman and Ferber (1979) consider the uses of consumer panels, sampling, panel recruiting and maintenance, data collection methods, data processing, and costs. In a broad context, constructing a panel is essentially an issue of sampling and of ensuring that the panel is representative of the population. In addition, when constructing a panel there must be concern for how representative samples of all types can be drawn from this larger sample known as the panel. The general principles are repeated, in greatly abbreviated form, in most marketing research textbooks. Therefore, those issues are not addressed in the present paper.

This article focuses on the reported use of the panel approach in consumer behavior research. The data in this study consist of published studies of substantive aspects of consumer behavior. The following issues are addressed:

1. What constitutes panel research?

2. What are the advantages and problems associated with panel research?

3. What decisions go into conducting panel research?

4. How is panel data analyzed?

5. What reliability issues are important in panel research?

One result of this investigation is an assessment of the state of use of the panel approach in published consumer behavior studies. As such, it represents a more-than-quarter-century follow-up to Nicosia's (1965) review of the application of panel methodology to the study of change in marketing.


The term "panel research" can be applied to a wide variety of research. This body of work can be described under three principal headings: research on the structure and operation of a panel; non-longitudinal research which happens to draw one or more samples from an existing panel; and longitudinal research using data collected from a panel. A panel itself is a sample of entities (e.g., persons, households, organizations, and so on) from which information is obtained.

The first type includes using a panel to study methodological issues of the technique itself. Such so-called "research panels" can play a vital role in investigating the reliability of the panel technique and showing how the efficiency of panel operations can be improved (Ferber and Lannom 1980). A research panel is one that may be used solely for experimenting with panel technique or to study the cost or the flexibility of a large-scale panel by means of a pilot study. Ferber and Lannom (1980) report that research panels are invariably of the latter type and may have objectives such as:

1. Ascertaining the types of problems likely to be encountered in that type of panel operation.

2. Exploring means of dealing with these problems.

3. Obtaining operating experience with such a panel over a period of time.

The second type consists of non-longitudinal research using data obtained from panels. In such studies the panel is like a river flowing by, from which the researcher has scooped up a bucketful of water: a set of observations taken at one point in time. The same data could equally well have been gathered from a one-time sample. The panel was used as the source of a preexisting sample, as a convenient sampling frame. Such studies are not really "panel research," as the fact that the data were gathered from a panel is incidental to the research.

The third type, true panel research, has as its distinguishing feature the repeated collection of data from a sample of respondents on the same topic (Sudman and Ferber 1979). Strictly speaking, a panel study is one in which there are at least two measurements (interviews) of the same things taken from the same people, although additional information can be obtained as well. Ideally, there should be at least three contacts for data collection, as the one-reinterview situation is more likely to be a pretest-posttest experiment (Ferber and Lannom 1980). In addition, research designs with two observations are usually inadequate for the study of individual growth and individual differences in growth; at best, two-wave designs permit studying individual differences in change or some type of average rate of change (Rogosa 1987, p. 9).




The repeated collection of data from consumer panels creates both opportunities and problems. Panel studies offer three major advantages over so-called "one-shot" surveys (Levenson 1968; Mosteller 1968). First, because panels yield linked data on the same individuals on more than one occasion, the researcher can analyze the data in greater depth. For example, the researcher who notes an overall group change in attitudes or purchasing behavior can determine whether the change represents a unidirectional shift for the whole sample or whether the overall change in fact reflects overlapping changes for subgroups. Second, additional measurement precision is gained from matching an individual's responses from one interview/data collection point to another. Often, having only aggregate measures obscures important changes that may be occurring within individuals. Third, panel studies offer considerable flexibility, in that the researcher who notes a particular trend or relationship can make later inquiries to the same respondents in order to obtain explanations for earlier findings. One other advantage is that the costs of doing research may be lower than with other approaches, particularly when a commercial panel organization is used; in this situation costs are spread among all clients, particularly when the research is carried out by syndicated research services. Finally, there is considerable background information available on each panel member.

Because panel data are collected at two or more times, the researcher assumes that "something" happens or can happen (i.e., changes may occur) during the time interval of interest. In fact, it is just such changes, analyzed in the form of a turnover table, that provide the heart of panel analyses (Levenson 1968; Nicosia 1965). To illustrate the use of a turnover table, assume a company changed the package in one market for a brand of paper towels called Strong, and that the company conducted a survey of 400 people purchasing the product two weeks before the change (T1) and a similar measure for the week after the change (T2). The results are shown in Table 1. Both (A) and (B) tell us that the gross increase in sales of Strong over X (which represents all other brands) is 40 units (or 10%). However, only the turnover table from the panel in (B) can tell us that 40 former buyers of Strong switched to X and that 80 former buyers of X switched to Strong. In those instances where there is experimental manipulation (e.g., introduction of a new product or the use of split-cable advertising), the manipulation is presumed to cause changes between Time T and Time T+1.
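As a minimal sketch, the arithmetic of such a turnover table can be made concrete. The cell counts below are our own illustration, chosen only to be consistent with the net figures in the Strong example; they are not taken from Table 1 itself.

```python
# Hypothetical turnover table for the Strong example:
# keys are (brand bought at T1, brand bought at T2).
turnover = {
    ("Strong", "Strong"): 120,  # stayed with Strong
    ("Strong", "X"):       40,  # switched away from Strong
    ("X",      "Strong"):  80,  # switched to Strong
    ("X",      "X"):      160,  # stayed with other brands
}

# Marginal totals: what a pair of one-shot surveys would show.
strong_t1 = sum(n for (t1, _), n in turnover.items() if t1 == "Strong")
strong_t2 = sum(n for (_, t2), n in turnover.items() if t2 == "Strong")
total = sum(turnover.values())

print(strong_t1, strong_t2)   # 160 200: net gain of 40 buyers (10% of 400)
```

The marginals alone show only the net gain of 40; the off-diagonal cells, which require linked panel data, reveal that this net figure conceals 40 defections and 80 conversions.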

But where there is no experimental manipulation, the measurement and interpretation of changes observed in successive waves of consumer panel data can be more problematic. Where the causal factors are unknown and uncontrolled (and they are, in fact, typically uncontrollable), the researcher can (1) investigate the relationship between some hypothesized causal factor, such as family size, life cycle stage, or attitudes toward product attributes, and subsequent purchase or other behavior, or (2) assume that any changes between Time T and Time T+1 are due to history and/or maturation effects. One matter that is always of concern, particularly in the latter situation, is reliability of measurement. This concern permeates all marketing research. We will present evidence that reliability and related measurement issues have not received their due attention in connection with most consumer panel studies. Indeed, Peter (1979) observed that marketing researchers in general seldom assessed the reliability of their measures: less than 5 percent of the more than 400 consumer behavior studies he surveyed reported any type of measurement reliability.

Research using consumer panels presents both the opportunity and the need to examine reliability. Since the same respondents are contacted on at least two occasions, the researcher can readily link responses for individuals at two or more times to measure test-retest reliability. Because maturation and history are such plausible causes (and/or threats to validity) in longitudinal studies, the researcher should be particularly concerned to assure that the instruments/measures are reliable.

Since panel studies are a special case of longitudinal research, respondents are typically conscious of their ongoing part in responding to the same or similar questions over a period of time. This consciousness of continuing participation can lead to "panel conditioning," which may bias responses relative to what would be obtained through a cross-section study. As in any effort at scientific measurement, the researcher should be concerned with threats to internal validity, since internal validity is a precondition for establishing with some degree of confidence the causal relationship between variables. But since panel conditioning affects the validity of measurement rather than its reliability, it is not addressed in this paper. Conceptually, reliability is to measurement theory what internal validity is to experimental design, as both are concerned with "how good" the method used was as a method and not whether it provides "true" values.

Another issue of concern is panel attrition, the extent of nonresponse that occurs in later waves of study interviewing. Some persons interviewed at the first time may be unable or unwilling to be interviewed later on. This may occur on an ad hoc basis or it may represent a case of panel mortality, the permanent dropping out of a member. In general, attrition and mortality can be held to a low level by using incentives (e.g., money, donations to charity, token gifts, free products, etc.) and/or by not overusing panel members. Attrition and mortality affect the representativeness of a panel. Even without these problems, some panels per se may not be representative of any particular population, although it is possible to develop representative samples from the panel. To illustrate, Peterson (1988, p. 123) has observed that, relative to the United States population, consumer panel members sometimes tend to be more middle class, white, and less mobile, to have larger families, and to be more interested in marketing generally.


The studies analyzed for this article consist of empirical consumer behavior research studies which used a panel approach and which were published during the period 1975-1991. All articles and papers published in 12 major journals or proceedings during the stated period were examined to identify studies suitable for inclusion in the present investigation. From these sources, 165 studies relating to and/or using panels were identified.

Each study was examined to determine whether a panel was used simply as a subject pool for a one-shot survey. A substantial number of studies used panels in this way. These studies were eliminated from further analysis, usually because their use of panel members as respondents was based on access, representativeness, and convenience, not on the use of the panel respondents as a panel with the same variables measured on more than one occasion. The "one-shot" surveys typically involved adding a few items to a larger instrument, a sort of "omnibus" survey.

The remaining studies were further examined to determine whether the study employed a panel (or panels) or panel data principally to develop a measurement instrument or to test a model, whether the published report provided insufficient description of the panel to merit inclusion, or whether the study met specified criteria to qualify for further analysis.

To be included in the set for further analysis, a study had to meet three criteria. First, it had to involve a study conducted on a substantive issue in consumer behavior. Studies of the panel approach as a methodology and studies which made use of panel data to refine measurement instruments or to test other methodological techniques, although interesting and important, were excluded from consideration. Second, to qualify as panel research, the study must have obtained responses from the same group of respondents on substantially the same topic on at least two occasions. Third, studies where respondents were contacted only twice were included only if the authors clearly identified them as panel studies. This means that the researchers had designed their study as panel research. The third criterion was used to eliminate the plethora of pre-post experiments reported in the consumer behavior literature. Sudman and Ferber (1979, p. 1) note that "from a conceptual point of view, even two interviews on the same topic with the same respondent would qualify as a panel study. . . From this point of view, therefore, the usual before-and-after experiment is a special case of the panel study."

A total of 71 published reports met all three criteria to be classified as panel research. Table 2 summarizes by publication the studies using panels, giving the distribution by source of the 71 studies which met the three criteria as well as of the 94 studies which did not. [In a few instances, authors of panel studies have published two or more studies which appear to draw on the same data set but address somewhat different questions. By including all these studies, there is some risk of "double-counting" in that authors who measured reliability in one reported study will report reliability in the second. Churchill and Peter (1984) in a recent meta-analysis employed the following decision rule: "If any essentially identical study was reported in more than one publication, only the most current one was included in the investigation." Since quantitative outcomes were not combined in the present study, we have elected not to delete studies which may have some overlap.]

For each included study the following variables were identified: the type of panel (ad hoc or continuous), panel sponsorship, method of data collection employed, use of diary, research design (experiment or nonexperiment), respondent characteristics, sampling method, extent of panel coverage, number of data collection points, type of dependent variable(s), and reported reliability. With the exception of the last three variables, each represents a major dimension of panel structure and as such can influence results obtained, particularly measures of effect. The last three variables are relevant to the specific project at hand or the research itself. All have a bearing on measurement, which is of utmost importance and can complicate the analysis task (Wilson 1980, p. 231).

The set of articles and proceedings papers reviewed for this study may not exhaustively include every consumer behavior research study reported during the 1975-1991 period. But since 12 major sources were examined, the articles and papers included in the investigation can be presumed to constitute a representative and comprehensive sampling of the relevant literature during the 1975-1991 period. Since the publications reviewed are the major outlets for publication of consumer behavior research, they provide a relatively unbiased perspective on the nature and status of consumer behavior research using the panel approach during that period.


The characteristics of the 71 panel studies are summarized in Table 3. Almost three-fourths of the studies used data from continuous panels, panels set up to collect data for a number of research studies. Slightly more than one-half of the studies relied on data collected by commercial panel organizations or companies. Most of the data was collected by mail, but only one-third of the studies were based on purchase or time-use diaries. The majority of studies focused on the household or family as a buying unit, and the samples drawn typically were nonprobability in nature (54.9% of the studies). Most panels were non-national in scope. Over 50 percent acquired data from the same respondents on at least four occasions. Finally, regarding content, about 86 percent of the studies had some measure(s) of behavior as the dependent variable(s) of interest.



Research Design

Few published panel studies employ experimental or quasi-experimental designs. Change over time, therefore, must be presumed to be the result of history, family changes, or other factors. Of the 71 panel articles studied, only 12 (16.9%) involved some experimental manipulation. The main feature of quasi-experimental designs is the purposeful manipulation or introduction of one or more stimuli into a real-life context (Nicosia 1965). Such studies entail exposing all or some of the respondents to advertising (or other marketing communication) or to a product and measuring changes in purchases, media habits, or attitudes attributable to the stimulus. In these instances the researcher seeks to control for threats to validity. However, most panel studies do not seek to manipulate or introduce stimuli, but rather employ "natural designs." The distinguishing feature of natural designs is that the researcher intends to record changes in one or more marketing events that occur in the natural course of life (Nicosia 1965). In such natural or nonexperimental studies it is precisely what are termed "threats to validity" in experimental studies which are believed to cause the changes observed at the various reinterviews (see Table 4).


Ten of the 71 panel studies reported some form of reliability. The ten studies are summarized in Table 5.

What is striking about these ten studies is the considerable variation in how the term "reliability" is defined and how reliability is operationalized. For purposes of a meta-analysis, the researcher looks for common, or comparable, measures in each study.

As an operational concept, reliability has two major and distinct meanings: (1) an index of sampling-error variance and (2) an index of measurement or response-error variance. Measurement error pertains to the consistency of results of repeated measures on the same people; sampling error pertains to the consistency of the results of the same type of observation on different people (Broedling 1974).



Although both uses, by definition, are based upon random error rather than systematic error and both refer to the generalizability of measurement, they are distinct. Sampling variance is concerned with generalizability of measurement taken on a sample of a larger population, whereas "measurement variance is used as an estimate of the generalizability of one person's scale scores as comprising a sample of the population of all possible scale scores independently obtained on that individual" (Broedling 1974, p. 373). [More broadly, conceptualizing and measuring all types of error variances in research projects has led to the development of a theory of measurement known as generalizability theory. Reformulating reliability as generalizability theory for use in marketing research is discussed by Peter (1979).] It is clear that the reliability of most immediate relevance for consumer panel studies is measurement reliability, which typically refers to the accuracy or precision of a measurement instrument (Kerlinger 1973). Measures low in reliability cannot be depended upon to register true changes, because unreliability inflates standard errors of estimates (Cook and Campbell 1979).

This variation in usage raises the question of what types of reliability are appropriate for what types of panel research studies. For example, more than one-half of the studies employing panels deal exclusively with observations or self-reports of purchase behaviors, such as how many tubes of Crest the respondent purchased during the most recent grocery shopping trip. Test-retest reliability is typically of little relevance in such studies. The researcher should be concerned principally with accuracy of recording as a source of measurement error and with the use of data collection methods which encourage the greatest accuracy. The recording accuracy issue has been addressed in several studies, notably Sudman (1964a, 1964b), Wind and Lerner (1979), and Stanton and Tucci (1982). Accuracy of recording is related to the equivalence and stability forms of measurement reliability.





Four of the ten studies reporting reliability used a split-sample approach to cross-validate results, test for consistency, or test models (Bearden and Teel 1983; Calantone and Sawyer 1978; Ghosh, Neslin and Shoemaker 1983; Richins and Bloch 1991). The four studies describe the split-sample approach in terms of reliability, but none specifically mentions that this approach to assessing reliability deals with sample, not measurement, reliability.

The analysis of the ten studies and the issue of recording accuracy together raise the idea of the reliability of the panel approach. There are two important and related questions:

1. Conceptually, is there such a characteristic as panel reliability which is distinct from measurement reliability?

2. If there is such a concept, how can it be measured operationally?

The standard measures of reliability (coefficient alpha and other split-half correlations, equivalent measures, and test-retest correlation) are used to assess measurement reliability (Peter 1979). If there is a characteristic of panel reliability, there must be something unique about panel-based studies in research approach or method. At the very least, the nature of the data obtained (i.e., measurements of the same variables from the same respondents at two or more different times) is unique to the panel method. In addition, Nicosia (1965) has pointed out that a panel study must also make use of so-called panel methods in obtaining panel data. These methods relate to types of designs (natural and quasi-experimental), procedures for implementation, and methods of selecting appropriate types of analyses, all of which vary greatly in use. The description of changes shown in a turnover table is a case in point. The nature of panel data described above is sufficient to argue that there is a concept of panel reliability. However, panel reliability has two components: sample reliability and measurement reliability. This must be so, since the panel method is unique in that each sample is drawn from a more or less fixed (at least in the short run) pool of potential sample members.

The next, and perhaps more important, question is how the concept of panel reliability can be measured operationally. In addition to the usual methods for measurement reliability, at least two additional approaches might be used to measure panel reliability. First, a split-sample approach should provide a measure of sample reliability for the panel method. Four of the studies reviewed for this paper used such an approach, although the researchers did not identify their results in terms of panel reliability per se. A second approach involves a longer time frame for assessment. A cohort approach could be used to compare reliabilities and other effect measures: one subsample from a cohort would be used as an ad hoc panel, while comparative groups would be individual random samples drawn from the same cohort over time.
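The split-sample idea can be sketched in a few lines. Everything below is a hypothetical illustration (the panel data and the `split_halves` helper are ours): randomly halve the panel, compute the statistic of interest in each half, and take close agreement as evidence of sample reliability.

```python
import random
from statistics import mean

def split_halves(panel, seed=42):
    """Randomly split a panel's member ids into two halves."""
    rng = random.Random(seed)        # fixed seed for reproducibility
    members = list(panel)
    rng.shuffle(members)
    mid = len(members) // 2
    return members[:mid], members[mid:]

# Hypothetical panel: member id -> purchases recorded in one wave.
panel = {i: (i * 7) % 5 for i in range(100)}

half_a, half_b = split_halves(panel)
mean_a = mean(panel[m] for m in half_a)
mean_b = mean(panel[m] for m in half_b)

# Similar estimates from the two halves suggest the panel-based
# sample is behaving reliably for this statistic.
print(round(mean_a, 2), round(mean_b, 2))
```

In practice one would compare more than a single mean (e.g., segment structures or model coefficients, as in the four split-sample studies cited above), but the logic is the same.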

The intent of the present exposition is not to criticize past work using panels for consumer research. Rather, the objective has been to describe the state of practice in the use of the panel approach, with specific mention of the use and reporting of reliability. Since such a small number of studies reported any type of reliability, it is not possible to generalize about the characteristics of such studies. While Peter (1979) found that fewer than 5% of consumer behavior studies report reliability, the present study suggests that researchers using panels are somewhat more diligent in considering reliability.

Nonetheless, significantly more needs to be done. In our view, all panel studies should measure both dimensions of panel reliability. At a minimum, panel studies using scales should include some measure of measurement reliability, preferably Cronbach's alpha. Optimally, panel studies would also assess sampling reliability, using a split-sample approach or test-retest correlation. The use of comparable measures would allow the development of a comprehensive meta-analysis of reliability in consumer panel studies in the tradition of Sawyer and Peter (1983) and Peterson, Albaum, and Beltramini (1985).

REFERENCES


Bearden, William O. and Jesse E. Teel (1983), "Selected Determinants of Consumer Satisfaction and Complaint Reports," Journal of Marketing Research, 20 (February), 21-28.

Bearden, William D. and Arch B. Woodside (1982), "Brand Attitudes and Consumer Choice Over Successive Time Periods," Proceedings, American Institute for Decision Science, 196-198.

Broedling, Laurie A. (1974), "On More Reliably Employing the Concept of 'Reliability'," Public Opinion Quarterly, 38 (Fall), 372-378.

Burnett, John J. and Julie Baker (1989), "The Robert-Worezel Hierarchical Model: An Extension Through Methodological and Variable Delineation Considerations," Proceedings, Educators' Conference of American Marketing Association, 265.

Calantone, Roger J. and Alan G. Sawyer (1978), "The Stability of Benefit Segments," Journal of Marketing Research, 15 (August), 395-404.

Churchill, Gilbert and J. Paul Peter (1984), "Research Design Effects on the Reliability of Rating Scales: A Meta-Analysis," Journal of Marketing Research, 21 (November), 360-375.

Cook, Thomas D. and Donald T. Campbell (1979), Quasi-Experimentation: Design and Analysis Issues for Field Settings, Chicago, IL: Rand McNally.

Ferber, Robert and Linda B. Lannom (1980), "Research Panels in Consumer Behavior," in Advances in Consumer Research, Vol. 8, ed. Kent B. Monroe, Ann Arbor, MI: Association for Consumer Research, 238-244.

Ghosh, Avijit, Scott A. Neslin, and Robert W. Shoemaker (1983), "Are There Associations Between Price Elasticity and Brand Characteristics?" 1983 AMA Educators' Proceedings, eds. Patrick E. Murphy et al., Chicago, IL: American Marketing Association, 226-230.

Kerlinger, Fred N. (1973), Foundations of Behavioral Research, 2nd ed. New York: Holt, Rinehart and Winston.

Klenosky, David B. and Arno J. Rethans (1988), "The Formation of Consumer Choice Sets: A Longitudinal Investigation at the Product Class Level," Proceedings, Association for Consumer Research, Vol. XV, 13-17.

Levenson, Bernard (1968), "Panel Studies," in International Encyclopedia of the Social Sciences, ed. David L. Sills, Macmillan and the Free Press, Vol. 11, 371-379.

Mittal, Banwari (1989), "Must Consumer Involvement Always Imply More Information Search?" Proceedings, Association for Consumer Research, Vol. 16, 167.

Moschis, George P. and Roy L. Moore (1983), "A Longitudinal Study of the Development of Purchasing Patterns," 1983 AMA Educators' Proceedings, eds. Patrick E. Murphy et al., Chicago, IL: American Marketing Association, 114-117.

Moschis, George P. (1982), "A Longitudinal Study of Television Advertising Effects," Journal of Consumer Research, 9 (December), 279-286.

Mosteller, Frederick (1968), "Errors: Nonsampling Errors," in International Encyclopedia of the Social Sciences, ed. David L. Sills, Macmillan and the Free Press, Vol. 5, 113-132.

Nicosia, Francesco M. (1965), "Panel Designs and Analyses in Marketing," in Marketing and Economic Development, ed. Peter D. Bennett, Chicago, IL: American Marketing Association, 222-243.

Peter, J. Paul (1979), "Reliability: A Review of Psychometric Basics and Recent Marketing Practices," Journal of Marketing Research, 16 (February), 6-17.

Peterson, Robert A. (1988), Marketing Research, 2nd ed. Plano, TX: Business Publications, Inc.

Peterson, Robert A., Gerald Albaum, and Richard F. Beltramini (1985), "A Meta-Analysis of Effect Sizes in Consumer Behavior Experiments," Journal of Consumer Research, 12 (June), 97-103.

Richins, Marsha L. and Peter H. Bloch (1991), "Post-Purchase Product Satisfaction: Incorporating the Effects of Involvement and Time," Journal of Business Research, 23 (September), 145-158.

Rogosa, David (1987), "Myths About Longitudinal Research," Stanford University, Center for Educational Research, Working Paper 87-CERAS-23.

Sawyer, Alan G. and J. Paul Peter (1983), "The Significance of Statistical Significance Tests in Marketing Research," Journal of Marketing Research, 20 (May), 122-133.

Stanton, John L. and Louis A. Tucci (1982), "The Measurement of Consumption: A comparison of Surveys and Diaries," Journal of Marketing Research, 19 (May), 274-277.

Sudman, Seymour (1964a), "On the Accuracy of Recording Consumer Panels: I," Journal of Marketing Research, 1 (May), 14-20.

Sudman, Seymour (1964b), "On the Accuracy of Recording Consumer Panels: II," Journal of Marketing Research, 1 (August), 69-83.

Sudman, Seymour and Robert Ferber (1979), Consumer Panels, Chicago, IL: American Marketing Association.

Wilson, R. Dale (1980), "Scientific Advancement in Consumer Research: Some Problems Encountered When Using Consumer Panel Data," in Advances in Consumer Research, Vol. 8, ed. Kent B. Monroe, Ann Arbor, MI: Association for Consumer Research, 227-232.

Wind, Yoram and David Lerner (1979), "On the Measurement of Purchase Data: Surveys Versus Purchase Diaries," Journal of Marketing Research, 16 (February), 39-47.


