Scientific Advancement in Consumer Research: Some Problems Encountered When Using Consumer Panel Data

ABSTRACT - Numerous studies have been conducted over the past twenty years which utilize diary panel data in testing particular components of consumer behavior theory. This paper addresses the use of such data and provides empirical illustrations of some important problems which may distort the development of consumer behavior theory.


R. Dale Wilson (1981), "Scientific Advancement in Consumer Research: Some Problems Encountered When Using Consumer Panel Data", in NA - Advances in Consumer Research Volume 08, ed. Kent B. Monroe, Ann Arbor, MI: Association for Consumer Research, Pages: 227-232.



R. Dale Wilson, Batten, Barton, Durstine & Osborn, Inc.

[This paper was supported by a Faculty Projects Grant from the Center for Research, College of Business Administration, The Pennsylvania State University, University Park, Pennsylvania. The author gratefully acknowledges the assistance of Manoj Hastak and Larry Newman (doctoral candidates at Penn State) in the preparation of this paper.]

[The author is Director of Marketing Sciences, BBDO, Inc., 383 Madison Avenue, New York, NY 10017.]




The widespread availability of diary panel data for use in consumer research has presented many opportunities for the creative researcher. During the past twenty years, studies in consumer research have used panel data to track consumer brand choice decisions, to investigate shopping behavior strategies, and to draw inferences about consumer response to marketing variables (such as short-term promotional programs and price differentials between national and private-label brands), among other purposes.

Unfortunately, however, significant problems are likely to be encountered when diary panel data are used for these purposes. Some, primarily those dealing with traditional measurement error, are well documented. Other problems, however, are largely ignored in the consumer behavior literature. This latter class of difficulties includes operational problems such as multiple purchases, time duration between purchases, quantity (i.e., size of package) purchased, and purchase of multiple brands at one purchase occasion.

While some researchers dismiss these problems casually, their influence on empirical findings (and thus consumer behavior theory) may be substantial. The purpose of this paper is to (1) address these problems, (2) document some of these problems with empirical data, and (3) suggest the limitations of panel data in consumer research from an advancement of science perspective. As such, the conflict between requiring near-perfect data for empirical work and accepting "less-than-perfect" panel data for testing theoretical constructs is illustrated.


Several concepts from the philosophy of science may be useful in understanding the problems associated with many studies using consumer diary panel data. This section of the paper provides a brief discussion of these concepts--first as they relate to construct measurement in general and then as they relate to construct measurement when using panel data.

Empirical Indicators and Their Validity

The importance of measurement in consumer behavior research cannot be denied. Yet, at the same time, many consumer behavior studies are conducted without proper attention to validation of the construct being measured. Dubin (1978, p. 182) uses the term "empirical indicator" to label the type of operation a researcher employs to secure a measurement of value on the unit under investigation. Empirical indicators simply serve as an observable representation of the theoretical construct of interest. The question remains, however, whether a particular empirical indicator is a valid representation of the theoretical construct.

Dubin (1978) addresses the issue of validity by pointing out that the term "validity" refers only to the consensus or lack thereof that a particular empirical indicator measures values on a stated unit. He continues by stating that:

"This consensus is a man-made consensus and is nothing more than a conventional agreement among a group of interested students and spectators that the empirical indicator and theoretical unit whose values it measures are homologous. We may therefore expect that what is a valid measure at some time may lose this status if the consensus upon which it is based is supplanted."

"The breakdown of a consensus usually occurs when an investigator raises questions about the empirical indicator based upon evidence that is independent of the circumstances of its employment" (p. 200).

Since construct validation is concerned with (1) what the measurement of a variable is in fact measuring and, more importantly, (2) the deductions that are being made about the theory underlying the measured variable (Churchill 1979, p. 258-9; Green and Tull 1978, p. 198-9), Dubin's emphasis on a "consensus" approach to measurement cannot be slighted. Another point worth mentioning is that the potentially large degree of measurement error in the behavioral sciences makes the validity of empirical indicators especially important. In the natural sciences, where there is a much closer correspondence between reality and appearance, validity is of little concern (Dubin 1978, p. 202).

Since the study of consumer behavior is a social science, the importance of measurement in our discipline cannot be denied. Yet, in many cases, we behave as if measurement issues are beyond the realm of concern. For example, Jacoby and Chestnut (1978) identify a total of fifty-three separate measures of "brand loyalty." Many of these measures, when evaluated critically, probably have little relationship to the theoretical construct of brand loyalty. Other examples from the consumer behavior literature also point to a real lack of concern for construct validity. Likely candidates include many attempts to measure such variables as deal proneness, advertising effectiveness, innovative behavior, perceived risk, information processing, values, personality, brand or store preference, etc. The point here is not to criticize those researchers who have attempted to measure elusive constructs. Rather, it is to draw attention to the special problems of construct validity when measuring variables that fall outside the realm of what might be termed "absolute indicators." ["Absolute indicators" are those that are ". . . absolute in the sense that there can be no question as to what they measure" (Dubin 1978, p. 193). Examples, according to Dubin, include all demographic characteristics. One may argue, however, that because measurement error can never be totally eliminated, the definition of absolute indicators should include some relative notion of the degree to which there can be no question as to what they measure.]

One insightful way to view the problems encountered when measurements are taken on a theoretical construct is to view the measured response as the sum of the true response plus possible measurement errors (see Lehmann 1979, p. 106). Even more exasperating is the realization that when a statistical analysis is performed on the measured concept, the sources of potential error are many. This problem can be viewed as:

Statistical test of a measured concept = True value + Construct validation error + Measurement error + Statistical error

where:


"True value" represents the actual, yet unobservable, response which one is attempting to measure;

"Construct validation error" represents the unobservable difference between the true value of the response and the measurement of that response (disregarding measurement error);

"Measurement error" represents such partially unobservable factors as measurement process error, instrumentation error, and respondent error (see Lehmann 1979, p. 103-6); and

"Statistical error" represents error in the definition and selection of the sample as well as the error in the execution of the sampling plan. Statistical error also tends to be unobservable.

While many researchers have actively sought methods for limiting or controlling measurement error and statistical error, far less attention has been given to the problem of construct validation error in consumer behavior.
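The decomposition above can be sketched in a small simulation. All of the values and error distributions here are hypothetical, chosen only to make the point that the statistic computed on measured data converges on the true value plus construct-validation bias, not on the truth itself:

```python
# Hypothetical illustration of: statistical test of a measured concept =
# true value + construct validation error + measurement error + statistical error.
import random

random.seed(1)

true_value = 10.0      # the actual, yet unobservable, response
construct_bias = 1.5   # the empirical indicator captures the construct imperfectly

def observe():
    # Measurement error: instrument and respondent noise on each diary record.
    return true_value + construct_bias + random.gauss(0, 2.0)

# Statistical error enters through the particular sample actually drawn.
sample = [observe() for _ in range(50)]
estimate = sum(sample) / len(sample)

print(round(estimate, 2))  # near 11.5, not 10.0
```

No amount of analysis of the measured sample can separate the construct-validation bias from the true value; only a better indicator can.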

Construct Measurement Via Panel Data

For the purpose of this paper, discussion is limited only to measurement issues in continuous, diary panels which allow respondents to use self-administered questionnaires to record prespecified information on a regular basis (see Tull and Hawkins 1976, p. 397-401 for a concise analysis of the various types of panels, their characteristics, and their uses). Sudman and Ferber (1979), in the most comprehensive description of consumer panels and resulting data that is available in the literature, provide an analysis of the types of consumer and market research studies conducted with panel data.

It is important to note that discussions of consumer diary panels by and large ignore measurement problems associated with construct validity. Sudman and Ferber (1979) discuss measurement problems in consumer panels as including only sample representativeness, data accuracy, and panel conditioning. Similarly, others, such as Morrison, Frank, and Massy (1966) and Buck et al. (1977), mention the limitations of panel approaches to data collection but do not delve deeper into construct validity questions. Powers, Goudy, and Keith (1978), in addition to discussing the traditional limitations, compared panel data with recall data and found major inconsistencies. In an article which discusses methods that may ultimately be useful in shedding light on validity issues, McCullough (1978) reviews four methods which are appropriate for determining causal effects in panel data studies. These methods include Lazarsfeld's 16-fold table, Coleman's four-state continuous-time Markov processes, cross-lagged panel correlation, and path analysis.

Thus, the literature indicates that for the most part, there is little concern for construct measurement. But as indicated by McCullough's (1978) review, it may be possible to design studies to determine cause and effect relationships and clarify measurement issues by testing alternative definitions of constructs where the causal relationships are clear. If such cause and effect relationships are thought to be known with a high degree of certainty, then several definitions of a construct could be substituted for the purpose of selecting the best definition.

Although it is not entirely clear from reading the literature on the collection and analysis of panel data, the major factor which adversely affects construct validity is that researchers rarely have the opportunity to measure constructs in the way they desire. Thus, with rare exception in the published literature, secondary data are being used. Another factor, nearly as important as this lack of control over data collection, is the tendency to use self-reported behavioral data to measure variables that may consist either fully or partially of attitudinal or cognitive components. Jacoby and Chestnut's (1978) criticism of much of the existing brand loyalty literature is a case in point. These two problems, one the inability to obtain empirical indicators in exactly the way one wishes and the other the necessity of using strictly behavioral data to measure attitudinally- or cognitively-oriented constructs, force users of consumer panel data to accept less than perfect data. Unfortunately, these problems probably have had an adverse effect on the growth and development of consumer behavior theory.


The question, "Are we really measuring what we think we're measuring?", may never be answered with any degree of certainty for many constructs. However, it may be instructive to point out how the quality of measurement is affected by what may be "oversimplistic" empirical indicators. It is hoped that the few empirical illustrations included here will stimulate more critical thinking on the hazards of using panel data in cases where the quality of the empirical indicator is in doubt.

Illustration I--Measuring Brand Purchase and the Problem of Multiple Units

A priori, one would expect that the measurement of actual brand choice would present no problem to the researcher, given the ease with which purchase behavior can be pulled from the diary records. Unfortunately, such simplicity does not exist due to the problem of multiple purchases at a single point in time. Because of convenience, the availability of deals, or other reasons, many consumers buy multiple units (or even multiple brands) on a given shopping trip. For example, Wilson, Newman, and Hastak (1979) present panel data collected over a 24-month period indicating that, for the bar soap category, 68.5% of all purchases consisted of two or more units. In addition, 31.8% of the purchases consisted of three or more units and 22.5% consisted of four or more units. The most regrettable aspect of this situation is that most researchers have not revealed their methods for handling multiple-unit purchases. The available literature implies that the problem is usually handled in one of two ways: either truncation of all units purchased after the first (or a randomly selected one), or treatment of each unit as an independent purchase occasion (Wilson, Newman, and Hastak 1979). Regardless of the option chosen, it is clear that the difference between the true value of brand purchase and the measured value may be great.
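A minimal sketch of these two operationalizations, using a handful of invented diary records rather than actual panel data, shows how far apart the resulting brand-purchase measures can fall:

```python
# Hypothetical diary records: (panelist, occasion, brand, units bought).
# Two common ways of operationalizing "brand purchase" from multi-unit
# entries yield different brand shares from the same data.
records = [
    ("p1", 1, "A", 4),
    ("p1", 2, "B", 1),
    ("p2", 1, "A", 1),
    ("p2", 2, "B", 3),
    ("p3", 1, "B", 1),
]

def share_truncated(recs, brand):
    # Option 1: truncate to one unit per purchase occasion.
    return sum(1 for _, _, b, _ in recs if b == brand) / len(recs)

def share_per_unit(recs, brand):
    # Option 2: treat every unit as an independent purchase occasion.
    total = sum(u for *_, u in recs)
    return sum(u for _, _, b, u in recs if b == brand) / total

print(share_truncated(records, "A"))  # 0.4  (2 of 5 occasions)
print(share_per_unit(records, "A"))   # 0.5  (5 of 10 units)
```

Neither number is "the" brand share; the gap between them is pure operationalization, which is exactly why unreported handling of multiple units is troubling.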

Illustration II--Measuring Deal Purchases

Although the trend toward purchasing in multiple quantities is interesting, there may be more to the data than is originally apparent. It stands to reason that one may be interested in determining why such behavior took place and, therefore, would want to determine how many purchases occurred in conjunction with dealing activity. In these cases, data on multiple units purchased would also contribute to an understanding of the effects of a particular type of deal. Again, however, the number of units purchased is rarely (if ever) considered in studies of consumer dealing activity even though it provides a possible explanation for the true impact of such deals.



Table 1 is designed to help clarify how the type of deal relates to multiple purchases. As can be seen from the table, the row percentages point to several differences in the number of units purchased across the various types of deals. Further, these trends tend to be accentuated as the number of units purchased increases. For example, four units are purchased in 22.3% of the cents-off marked coupon deals but in only 10.9% of the package coupons. The large chi-square statistic indicates that a strong relationship exists between the deal type and the number of units purchased. Thus, if a consumer researcher attempted to measure the impact of various types of deal situations without considering the purchase of multiple units, major errors could be made due to the oversimplification of the purchase measure.
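The chi-square statistic referred to above can be computed directly from a deal-type-by-units-purchased contingency table. The sketch below uses a small table of invented counts, not the Table 1 figures, purely to show the mechanics:

```python
# Pearson chi-square statistic for an r x c contingency table,
# relating deal type (rows) to number of units purchased (columns).
def chi_square(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts; columns are 1, 2, 3, and 4+ units purchased.
table = [
    [120, 60, 40, 80],   # e.g., cents-off marked coupon
    [150, 70, 30, 35],   # e.g., package coupon
]
print(round(chi_square(table), 2))  # 22.77, well above the df=3 critical value
```

A statistic this large relative to its degrees of freedom would, as in the paper's Table 1, reject independence between deal type and quantity purchased.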

Illustration III--Types of Deals Purchased Over Time

One of the problems often associated with studies involving consumer panels is that interesting trends in the data are camouflaged by the level of aggregation of that data. Because of the high costs of processing panel data, researchers sometimes tend to draw conclusions about trends in the data without a detailed analysis. In working with panel data, it is abundantly clear that "how you cut the data" will have a large influence on conclusions that are drawn. In a philosophy of science sense, this problem is especially important whenever cause and effect relationships are being specified.

As an illustration of this problem, first consider the data in Table 2. This table presents data for the bar soap category for the total market in 1976 and 1977. Specific data are also presented for five brands, each having a relatively large market share among panel respondents over the two-year period. Table 2 clearly shows the temporal effects of sales. At first glance, the total market for the category seems to be declining over time, since total units purchased declined from 4,976 to 2,959. Further, this trend is not explained when specific brand data (at least for Brands A through E) are analyzed. One interesting feature of the inter-brand relationship is that the units purchased figures for Brand E show dramatic increases over time. Like all of the brands displayed in Table 2, however, Brand E sales seem to decline quickly after a peak of 834 units in the first quarter of 1977.



The key to understanding the bar soap data, however, may be in the last two rows of Table 2, which represent the total number of units and the percentage of units purchased on deal. Across all brands, 13,901 (or 34.5%) of the 40,297 units purchased were bought on deal. For some brands, such as Brands A and E, over 50% of the total units were purchased at a deal price. Thus, the simple inclusion of dealing activity may inspire us to look further into the data for explanations of the cause-effect relationships.

Tables 3, 4, and 5 present the purchase data for Brands A, B, and C, respectively. These tables are designed to establish more clearly the relationship between dealing activity and units purchased across time. Except for the Brand A data in 1977, the chi-square statistics presented in the tables indicate that the hypothesis of independence between dealing activity and time period must be rejected. Since there does seem to be an identifiable relationship here, it seems appropriate to suggest further study of the cause and effect relationships between units purchased and dealing activity. Particularly for Brand A, it appears that the lack of deal availability during calendar year 1977 may have caused, or at least accentuated, the decline in units purchased among panel members.





Illustration IV--Measuring Store Choice

Table 6 presents shopping and purchase expenditure data for a panel of 719 respondents who resided in the Chicago area. These data were collected over a four-week period in 1970 by a national consumer panel organization. The interest in these data lies in the fact that the determination of market share is quite difficult. For Store A, a large regional supermarket chain based in Chicago, the data indicate a monotonically increasing relationship between the dollar amount category and the percentage of trips made to Store A. If one considers "market share" to be equal to the number of shopping trips made to Store A divided by the total number of trips made to all stores, then the question becomes, "What is the relevant measure of store shopping trips?".



The data in Table 6 indicate that when all purchases are considered, Store A's market share is 25.2%. However, a more suitable measure of Store A's actual impact on the marketplace may be obtained by considering only those trips in which a particular minimum amount was spent. For example, if the relevant cutoff is set at an expenditure level of $5.00, the market share figure increases to 33.9%.

The point of this illustration is that a great deal of thinking must be done in order to establish the point at which minor, "filler" shopping trips become larger, "full-scale" trips. Using the data presented in Table 6 as justification, Wilson (1977) used the $5.00 point as the relevant cutoff point. Similarly, Frisbie (1980) chose the $5.00 point as one of his three criteria for establishing filler trips. In these cases, there are no firm theoretical reasons for defining filler trips in a particular way although the concept does have some support in the literature (e.g., MacKay 1973). However, as Table 6 illustrates, this problem is meaningful if a clear picture of consumer shopping behavior is to emerge from the panel data.
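The cutoff question can be made concrete with a short sketch. The trip amounts below are invented, not the Table 6 data; the point is only that the same trip log yields different market shares depending on where the "filler trip" line is drawn:

```python
# Hypothetical trip log: (store visited, dollar amount spent on the trip).
trips = [
    ("A", 2.50), ("A", 8.00), ("A", 12.40), ("B", 1.75),
    ("B", 6.30), ("C", 3.10), ("C", 9.90), ("C", 15.00),
]

def share(trips, store, cutoff=0.0):
    # Market share = focal-store trips / all trips, counting only
    # trips at or above the expenditure cutoff.
    eligible = [s for s, amt in trips if amt >= cutoff]
    return sum(1 for s in eligible if s == store) / len(eligible)

print(share(trips, "A"))              # all trips: 3 of 8 = 0.375
print(share(trips, "A", cutoff=5.0))  # "full-scale" trips only: 2 of 5 = 0.4
```

Because the cutoff is a researcher's choice rather than a property of the data, it belongs in the reported method, exactly as Wilson (1977) and Frisbie (1980) made their $5.00 criterion explicit.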


One of the main points that, hopefully, will come from this paper is that because of measurement problems, the analysis of consumer panel data is no simple task. Although this point has been illustrated with only one example (i.e., the problem of multiple purchases), many other, far more difficult definitional problems come to mind. For example, the early work of Kuehn and Rohloff (1967) indicates that consumers maintain a surprisingly high degree of "size loyalty" and, in fact, their findings include evidence that loyalty to package size is occasionally more typical than brand loyalty. Yet it is not surprising that researchers have avoided using panel data to explore size loyalty, since variations in within-brand and across-brand package sizes would be virtually impossible to control. Not only would one have to consider the package size, but other factors such as the specific brand, deal availability, number of units purchased, and perhaps consumers' usage rate would become relevant to the analysis. Control procedures for such a study would have to be massive.

Another point to be made is that careful variable measurement is of the utmost importance. The use of convergent validation, which requires the measurement of multiple dependent variables in panel studies, should be useful in limiting the extent of measurement problems (see Dubin 1978, p. 195-200; Jacoby 1978).

Lastly, another problem of no small significance is that the huge quantities of data on the typical magnetic tape of panel records sometimes hide relevant data. Wiggins (1973) addresses this problem when he discusses one of the three main uses of panel data--to study change in the behavior of individuals, as in studies of buying behavior. He states that:

"[this type of application]...has great significance in its own right, and its own variety of unique problems. A tremendous amount of potentially valuable findings may have been ignored by trying to eliminate what for the purpose at hand was garbage (or merely irrelevant to the problem), even though a detailed examination ...may have yielded other kinds of information of genuine value .... The problem there has generally been a shortage of funds and possibly a lack of sophistication in the analysis in some instances" (p. 188).

Also related to the problem discussed by Wiggins is the fact that the extremely large sample sizes in most panels (see Sudman and Ferber 1979, p. 9-11) may mislead the researcher by causing immaterial deviations to be statistically significant. More care needs to be taken to avoid the problem of "failing to see the forest for the trees."
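The large-sample pitfall is easy to demonstrate with a standard two-proportion z test. The figures below are hypothetical: a half-point difference in brand share that no manager would act on is "significant" at panel-sized samples but not at survey-sized ones:

```python
# z statistic for the difference between two independent proportions,
# using the pooled-variance standard error.
import math

def two_prop_z(p1, n1, p2, n2):
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# The same immaterial 0.5-point share difference at two sample sizes:
print(round(two_prop_z(0.205, 100_000, 0.200, 100_000), 2))  # > 1.96: "significant"
print(round(two_prop_z(0.205, 500, 0.200, 500), 2))          # < 1.96: not significant
```

Statistical significance here is a function of n, not of substantive importance, which is why effect sizes deserve as much attention as p-values in panel work.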


This paper has attempted to point out a few of the many pitfalls of using diary panel data in consumer research. More than anything else, the paper was designed to provide cautionary comments to current and future data users.

While it is clear that the availability of panel data presents many opportunities for meaningful, creative research, panel data can also lead to unthinking data massaging that yields non-productive or even counter-productive conclusions. The goal of advancing the discipline of consumer research dictates that panel researchers engage in more than rote number crunching with high-powered hardware.




REFERENCES

Buck, S. F., Fairclough, E. H., Jephcott, J. St. G., and Ringer, D. W. C. (1977), "Conditioning and Bias in Consumer Panels--Some New Results," Journal of the Market Research Society, 19, 59-75.

Churchill, Gilbert A., Jr. (1979), Marketing Research: Methodological Foundations, Second edition, Hinsdale, Illinois: Dryden Press.

Dubin, Robert (1978), Theory Building, Revised edition, New York: Free Press.

Frisbie, Gil A., Jr. (1980), "Ehrenberg's Negative Binomial Model Applied to Grocery Store Trips," Journal of Marketing Research, 17, 385-90.

Green, Paul E. and Tull, Donald S. (1978), Research for Marketing Decisions, Fourth edition, Englewood Cliffs, New Jersey: Prentice-Hall.

Jacoby, Jacob (1978), "Consumer Research: A State of the Art Review," Journal of Marketing, 42, 87-96.

Jacoby, Jacob and Chestnut, Robert W. (1978), Brand Loyalty: Measurement and Management, New York: John Wiley and Sons.

Kuehn, Alfred A. and Rohloff, Albert C. (1967), "Consumer Response to Promotion," in Promotional Decisions Using Mathematical Models, ed. Patrick J. Robinson, Boston: Allyn and Bacon.

Lehmann, Donald R. (1979), Market Research and Analysis, Homewood, Illinois: Richard D. Irwin.

MacKay, David B. (1973), "A Spectral Analysis of the Frequency of Supermarket Visits," Journal of Marketing Research, 10, 84-90.

McCullough, B. Claire (1978), "Effects of Variables Using Panel Data: A Review of Techniques," Public Opinion Quarterly, 42, 199-220.

Morrison, Donald G., Frank, Ronald E., and Massy, William F. (1966), "A Note on Panel Bias," Journal of Marketing Research, 3, 85-8.

Powers, Edward A., Goudy, Willis J., and Keith, Pat M. (1978), "Congruence Between Panel and Recall Data in Longitudinal Research," Public Opinion Quarterly, 42, 380-9.

Sudman, Seymour and Ferber, Robert (1979), Consumer Panels, Chicago: American Marketing Association.

Tull, Donald S. and Hawkins, Del I. (1976), Marketing Research: Meaning, Measurements and Method, New York: Macmillan Publishing Company.

Wiggins, Lee M. (1973), Panel Analysis: Latent Probability Models for Attitude and Behavior Processes, San Francisco: Jossey-Bass Inc., Publishers.

Wilson, R. Dale (1977), "Generalized and Embedded Versions of Heterogeneous Stochastic Models of Consumer Choice Behavior: An Empirical Test and Statistical Evaluation in a Dynamic Store Selection Context," unpublished Ph.D. thesis, The University of Iowa.

Wilson, R. Dale, Newman, Larry M., and Hastak, Manoj (1979), "On the Validity of Research Methods in Consumer Dealing Activities: An Analysis of Timing Issues," in 1979 Educators' Conference Proceedings, eds. Neil Beckwith et al., Chicago: American Marketing Association, 41-6.


