Sampled Survey Data: Quota Samples Versus Probability Samples

ABSTRACT - Marketing decisions are often based upon measurements of the marketplace, which are usually constructed from sampled data. These data are obtained from mail surveys, telephone surveys, door-to-door interviews, and interviews at shopping centers and central-location facilities. The recommended procedure is to capture data through a probability sampling scheme, on the assumption that the resulting observations will be well balanced on all variables, free of experimenter bias, and therefore suitable for accurate inferences about the unknown population. This paper compares the relevance of probability sampling inference to model based inference. We argue that inferences based upon models are more useful than probability sampling inferences and that quota samples are the most informative source of data for estimating the models. This is especially true in the presence of nonresponse.


E. L. Melnick, R. Colombo, R. Tashjian, and K. R. Melnick (1991) ,"Sampled Survey Data: Quota Samples Versus Probability Samples", in NA - Advances in Consumer Research Volume 18, eds. Rebecca H. Holman and Michael R. Solomon, Provo, UT : Association for Consumer Research, Pages: 576-582.



E. L. Melnick, New York University

R. Colombo, New York University

R. Tashjian, New York University

K. R. Melnick, D'Arcy, Masius, Benton & Bowles




Most market research textbooks extol the virtues of probability sampling and suggest that non-probability sampling in general, and quota sampling in particular, is scientifically suspect. For example, Aaker and Day (Marketing Research, 1989, p. 349) state that "Probability sampling has several advantages over non-probability sampling. First, it permits the researcher to demonstrate the representativeness of the sample. Second, it allows an explicit statement as to how much variation is introduced because a sample is used instead of a census of the population. Finally, it makes possible the more explicit identification of possible biases." Churchill (Marketing Research, 1986, p. 433), in considering whether quota samples can be considered representative even though they accurately reflect the population with respect to the control characteristics, makes three points: "First, the [quota] sample could be very far off with respect to some other important characteristic likely to influence the result... Second, it is difficult to verify whether a quota sample is indeed representative... Third, interviewers left to their own devices are prone to follow certain practices. They tend to interview their friends in excessive proportions..."

In spite of such positions many, or perhaps most, market research studies use some form of explicit non-probability sampling such as mall intercepts or quota sampling. Jacoby and Handlin (Trademark Reporter, 1991) report a study by the Council of American Survey Research Organizations, conducted in 1985, which showed that the overwhelming proportion (95% or more) of in-person interviews did not involve probability selection. They further report, on the basis of a sampling of the academic literature, that a similar proportion (94-97%) of academic studies do not employ probability sampling designs.

In the light of such a discrepancy between the prescription of the textbooks and the custom of practitioners we pose two questions. First, why does such a discrepancy between theory and practice exist? And second, who is right - the textbooks or the practitioners?

One possible answer to the first question is not hard to find. Much of the theory of survey sampling has been developed and expounded by statisticians working for government agencies (Deming; Hansen, Madow and Tepping; Hansen and Hurwitz) or for large survey organizations (Kish; Cochran). Surveys conducted by these organizations tend to differ from those conducted by marketers. Typically, government surveys or those conducted by large social research organizations have the following characteristics:

they are carried out to provide simple descriptive statistics of the survey population;

the survey findings are required to be as objective as possible;

they are multi-purpose - many variables are collected;

the surveys often end up in the public domain (since they are often financed from public funds) where they are analyzed by different researchers for different purposes;

sample sizes are typically quite large (often 2000 or more);

time from commissioning a survey to reporting is often quite long (a year or more).

In contrast, cross-sectional studies carried out by market research companies and advertising agencies typically have the following characteristics: [Diary panel and scanner panels have characteristics more similar to governmental-type studies. Their longitudinal nature makes them especially suitable for measuring change (e.g. change in sales due to a promotion or advertising campaign) and assessing causal relationships. Academic studies have more or less the same characteristics as market research studies but with the difference that interest is more analytical than descriptive in character.]

they are designed for a specific purpose - to help managers make a decision;

they are often part of a whole program of research (e.g. as part of the research carried out for a new product introduction);

the sample sizes are quite small (often 200-300);

time from commissioning the survey to reporting the findings is often quite short (a few weeks);

non-response is relatively high (35-40% or more).

These differences are important. We will argue that the relatively small sample sizes, high non-response, and single-purpose nature of market research studies conspire to make non-probability sampling theoretically and practically more attractive than probability sampling. Thus our answer to the second question, "who is right - the textbooks or the practitioners?" is, "it depends on the type of survey and the resources available." For large scale multi-purpose surveys, well designed probability samples and inferences that do not rely on a model for the population may be advantageous. For small scale surveys, and especially where the non-response may be quite large, non-probability sampling and inference based on models for the population have the advantage.

We will support this view in the following way. First, we will briefly compare the classical finite population approach to survey sample design and analysis, as exemplified in the textbooks, with the newer model based approach. Then, by means of a simple example and by simulations, we show that model-based sampling inference outperforms design-based sampling inference. Finally, we conclude with some general advice for market researchers on how to design and analyze survey studies.


Classical finite population sampling theory eschews making any assumptions about the distribution of a variable in the population. The main reason for this seems to be the acceptance that human and animal populations are not "perfectly mixed," so that the values of variables describing the population will be clustered or clumped. Since the population does not come "pre-randomized," the survey sampler cannot, it is argued, rely on a set of random variables representing observations from the population to be independently and identically distributed. Thus, in the absence of a probability distribution for the population the survey sampler, if he is to use statistical theory, has to induce a probability distribution by means of a selection mechanism. More formally, the classical finite population approach assumes that a finite population is a set of values Yi, i = 1,...,N (for example, Yi is the income of the ith person in the United States, where there are N individuals). The values are assumed to be constants (there is an exact income number associated with each individual) and for each label i there exists a value Yi. A population parameter, say the population mean Ȳ, is defined by the set of all Yi's,

Ȳ = (1/N)(Y1 + Y2 + ... + YN) = (1/N) Σ Yi,

so that no probability function is invoked to generate the data. A sample, s, is a subset of n units of the population denoted i1, i2,...,in. A sample design assigns a probability p(s) to each s where the sum of the probabilities defined over all samples occurring is one. The only probabilities in this scheme are those induced by the sample design and different sampling designs will induce different probabilities. The sample design is therefore a vital part of the analysis of the sample data.
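The way a design induces the only probabilities in this framework can be made concrete with a small numeric sketch (a hypothetical five-value population, not from the paper): under simple random sampling without replacement, every subset s of size n receives probability p(s) = 1/C(N,n), and the design-expectation of the sample mean recovers the fixed population mean.

```python
from itertools import combinations
from math import comb

# Hypothetical finite population of N = 5 fixed values (illustrative only).
Y = [12, 7, 19, 4, 9]
N, n = len(Y), 2

# Simple random sampling without replacement assigns equal probability
# p(s) = 1 / C(N, n) to each of the C(N, n) possible samples s.
samples = list(combinations(range(N), n))
p = 1 / comb(N, n)

# The probabilities induced by the design sum to one.
assert abs(sum(p for _ in samples) - 1.0) < 1e-12

# The design-expectation of the sample mean equals the population mean:
# the sample mean is design-unbiased, with no model for the Y's invoked.
pop_mean = sum(Y) / N
expected_sample_mean = sum(p * sum(Y[i] for i in s) / n for s in samples)
print(len(samples), pop_mean, expected_sample_mean)
```

Note that nothing here is random in the model-based sense: the Y values are constants, and all probability statements refer to the enumeration of possible samples.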

A serious problem with this approach is that no direct mechanism is specified by which we can know the values of elements not in the sample, and therefore no mechanism by which we can deduce population characteristics. Finite population theory gets around this difficulty by invoking the Central Limit Theorem. If the sample size is sufficiently large and the sample was drawn at random, then the sample mean, being a linear combination of random variables, should be approximately normally distributed with mean Ȳ and variance

(S²/n)(1 - n/N),   where S² = Σ (Yi - Ȳ)²/(N - 1).

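A small Monte Carlo check of this design-based sampling distribution (with a hypothetical population, not the paper's data): under simple random sampling without replacement, the variance of the sample mean should be close to (S²/n)(1 - n/N) with S² = Σ(Yi - Ȳ)²/(N - 1).

```python
import random
import statistics

random.seed(1)

# Hypothetical finite population (illustrative only).
N, n = 1000, 50
Y = [random.gauss(100, 15) for _ in range(N)]

Ybar = sum(Y) / N
S2 = sum((y - Ybar) ** 2 for y in Y) / (N - 1)
theory_var = (S2 / n) * (1 - n / N)  # design variance of the sample mean

# Empirical variance of the sample mean over many without-replacement draws.
means = [statistics.fmean(random.sample(Y, n)) for _ in range(20000)]
empirical_var = statistics.variance(means)

print(round(theory_var, 3), round(empirical_var, 3))
```

The two numbers agree closely, illustrating that the design alone, with no model for the population values, supplies the sampling distribution used for inference.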
The classical infinite population approach to inference proceeds in a different way. A model is postulated for the data, for example that the Yi's are a sample from a normal distribution with mean μ and variance σ². The data from the observed sample are used to estimate the parameters μ and σ². The model provides the link from sample to population, and the sample design plays no role in the inference - the probability model and the data are sufficient to estimate the population parameters. (See Example).
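A minimal sketch of this model-based route (with hypothetical parameter values, not from the paper): under the normal model, the maximum likelihood estimates and a large-sample interval for μ come from the data and the model alone.

```python
import math
import random

random.seed(2)

# Under the model-based view the observations are realizations from a
# postulated distribution -- here N(mu, sigma^2) with hypothetical values.
mu_true, sigma_true = 50.0, 8.0
y = [random.gauss(mu_true, sigma_true) for _ in range(200)]

# Maximum likelihood estimates: the data and the model carry the entire
# inference; how the sample was selected plays no role.
n = len(y)
mu_hat = sum(y) / n
sigma2_hat = sum((v - mu_hat) ** 2 for v in y) / n

# A model-based 95% interval for mu (large-sample normal approximation).
half = 1.96 * math.sqrt(sigma2_hat / n)
print(round(mu_hat, 2), (round(mu_hat - half, 2), round(mu_hat + half, 2)))
```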


The inference problem under probability sampling is the estimation of a population quantity based upon a known sampling distribution and a sample of size n. In the presence of nonresponse, modelling is necessary to relate the nonrespondents' measurements to those obtained from the respondents, since the nonrespondents are not under the control of the sampler. Based upon an assumed model, the EM algorithm was demonstrated under general conditions by Dempster, Laird and Rubin (JRSS Series B, 1977) to produce an estimator that converges to the maximum likelihood estimator of the population quantity, even in the presence of non-response. In the absence of a model, inference can be made on the observed data if the inference can be assumed to be equivalent to an inference based on the full distribution (this type of sample design is called ignorable by Rubin (Biometrika, 1976) and Little (JASA, 1982)). If the nonresponses are a function of the sample design, the estimator and its variance might have a large bias, thus invalidating the inference. This situation is very common in probability samples based upon human populations. However, data collected from individuals sampled from a panel are more likely to be based upon ignorable designs: an individual refuses to become part of the sample because he or she does not want to participate in a sample survey, not because of the specific components of a particular sample design. In a probability sample there is no way of knowing whether the nonresponse is due to an unwillingness to participate in any survey and/or an unwillingness to participate in a particular survey. The latter situation is more likely to create a nonignorable design.
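To make the EM idea concrete, here is a hedged sketch (not the paper's application, and with entirely hypothetical data): a response y depends linearly on an always-observed covariate x, some y values are missing at random given x, and EM alternates between filling in the expected sufficient statistics for the missing y's (E-step) and re-maximizing the regression parameters and residual variance (M-step).

```python
import random

random.seed(3)

# Hypothetical data: y depends linearly on an always-observed covariate x;
# some y values are missing (nonresponse), assumed missing at random
# given x. This is an illustrative sketch only.
n = 500
x = [random.uniform(0, 10) for _ in range(n)]
y = [2.0 + 1.5 * xi + random.gauss(0, 1.0) for xi in x]
observed = [random.random() > 0.3 for _ in range(n)]  # ~30% nonresponse

# EM for the regression parameters (a, b) and residual variance s2.
a, b, s2 = 0.0, 0.0, 1.0
Sx = sum(x)
Sxx = sum(xi * xi for xi in x)
for _ in range(50):
    # E-step: expected sufficient statistics, imputing the missing y's.
    Sy = Sxy = Syy = 0.0
    for xi, yi, obs in zip(x, y, observed):
        ey = yi if obs else a + b * xi            # E[y | x]
        ey2 = yi * yi if obs else ey * ey + s2    # E[y^2 | x]
        Sy += ey
        Sxy += xi * ey
        Syy += ey2
    # M-step: maximize the expected complete-data likelihood.
    b = (Sxy - Sx * Sy / n) / (Sxx - Sx * Sx / n)
    a = (Sy - b * Sx) / n
    s2 = (Syy - 2*a*Sy - 2*b*Sxy + n*a*a + 2*a*b*Sx + b*b*Sxx) / n

print(round(a, 2), round(b, 2), round(s2, 2))
```

Because the missingness here depends only on chance (an ignorable mechanism), the EM iterates converge to the maximum likelihood estimates, recovering the parameters that generated the data.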


Probability sampling serves two main objectives: (1) to generate a representative sample from the population, free of any selection bias, and (2) to provide a good balance on uncontrolled ancillary variables. These objectives are laudable and, on average, achievable for large samples when all sampled units participate in the survey. Once the sample is selected, inferences are made from the observed individuals to the characteristics of the unobserved individuals. These inferences can be quite inaccurate if the sample is not balanced, and the situation is not improved by knowing that this would not have happened on average if many samples had been taken and the inferences based upon the average of those computed from all the hypothetical samples not selected.

All advantages of randomization disappear in the presence of non-response. First, there is no way of knowing the reason for the non-response, and therefore one must suspect that the design is nonignorable. Second, the non-respondents might have different characteristics from those responding to the survey, resulting in an unrepresentative sample. Third, poor representation of certain subsets of the population produces unbalanced samples. Finally, a truly balanced sample requires balance on many characteristics; this cannot be guaranteed for any probability sample, even stratified sampling designs, because impractically large samples would be required. None of these problems exist with samples drawn from a well constructed panel, that is, if we assume that the panel is the population to which we wish to make inferences, or is representative of that population.


Two data sets were used as populations for this study. Each data set was balanced against the United States population on six demographic characteristics. The parameter of interest in the first study is the total number of households having personal computers. This parameter, T, is computed as

T = Y1 + Y2 + ... + YN = Σ Yi,   where

Yi =    1 if the ith household has a PC

           0 otherwise

The covariate Xi is the combined salary for household i. The dependent variate, Yi, in the second study is

Yi =   1 if the ith household has cable TV

          0 otherwise

In the first study the population size, N, was 12,609 and in the second study the population size was 12,651. Simulations were run in each study to represent both probability and quota samples. Probability samples were selected by taking 1000 samples of 600 observations; the individuals in each sample were selected using a random number generator. Sampled information was compared for both with-replacement and without-replacement designs. Negligible differences were detected, so with-replacement designs were used for the comparisons since they were obtained more economically. Quota samples were simulated by selecting the first 2400 individuals that satisfied a design that was balanced on income and geographic dimensions. These individuals were selected from a list in the order that they joined the quota sample. This selection process attempted to capture an unknown correlation structure, induced by the selection mechanism, which is often suspected to exist in quota samples. The larger quota sample was arbitrarily set at four times the size of a random sample and was intended to reflect the lower cost of conducting the survey. The 1000 probability samples were used to show that although any one sample could provide poor estimates of population parameters, averaging over many samples would result in accurate estimators. This illustrates the statistical concepts of unbiasedness and consistency but is of no practical importance since only one sample is ever drawn.
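The design of this simulation can be re-created in miniature (all numbers below are hypothetical stand-ins, not the paper's data): a binary response whose rate varies across income strata, 1000 simple random samples of 600 expanded to population totals, and one quota sample of 2400 filled to exact stratum quotas.

```python
import random

random.seed(4)

# Miniature, hypothetical version of the study: a binary response whose
# rate varies across five income strata.
N = 12000
strata_share = [0.3, 0.25, 0.2, 0.15, 0.1]
rate = [0.2, 0.3, 0.45, 0.6, 0.75]
pop = []
for s, (share, p) in enumerate(zip(strata_share, rate)):
    pop += [(s, 1 if random.random() < p else 0) for _ in range(int(N * share))]
T = sum(y for _, y in pop)  # population total of positive responses

# 1000 simple random samples of 600: expand each sample mean to a total.
estimates = []
for _ in range(1000):
    smp = random.sample(pop, 600)
    estimates.append(N * sum(y for _, y in smp) / 600)

# One quota sample of 2400, taking the first units that fill each
# stratum's quota (mimicking first-come quota recruitment).
quota = {s: int(2400 * share) for s, share in enumerate(strata_share)}
qsample = []
for s, y in pop:
    if quota[s] > 0:
        qsample.append((s, y))
        quota[s] -= 1
q_est = N * sum(y for _, y in qsample) / 2400

print(T, min(estimates), max(estimates), round(sum(estimates) / 1000), round(q_est))
```

As in the paper's figures, the average of the 1000 probability-sample estimates is close to the truth, but individual estimates range widely, while the single balanced quota sample lands near the population total.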

The results of the simulations comparing 1000 combined random samples of size 600 to one quota sample of size 2400 are presented in Figures 1, 2 and 3. The average of the probability based estimates has small error, but the range of individual estimates reflects a greater variability than the quota sample based estimates. Sampling theory states that if the CDF of the covariates of a sample is similar to the population CDF of the covariates, then the sample is a good representation of the population. The average covariates associated with the extreme estimates in Figure 2 show that similar averages are not a sufficient condition for similar CDFs. The only way to guarantee this property is by purposive sampling, and this can best be achieved with quota panel data.
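The point that matching averages does not imply matching distributions is easy to demonstrate (with illustrative numbers, not the paper's covariates): two covariate samples can share a mean yet differ sharply in their empirical CDFs, here measured by the Kolmogorov-Smirnov distance.

```python
# Two hypothetical covariate samples with the same average but very
# different empirical CDFs, compared by the Kolmogorov-Smirnov distance.
a = [30, 40, 50, 60, 70]   # spread-out incomes (in $1000s)
b = [49, 50, 50, 50, 51]   # clumped incomes, same mean

def ecdf_distance(u, v):
    """Max absolute difference between the two empirical CDFs."""
    grid = sorted(set(u) | set(v))
    f = lambda s, t: sum(1 for w in s if w <= t) / len(s)
    return max(abs(f(u, t) - f(v, t)) for t in grid)

print(sum(a) / len(a), sum(b) / len(b), ecdf_distance(a, b))
```

Both samples average 50, yet the CDF distance is large - so checking average covariates alone, as with the extreme estimates in Figure 2, cannot certify representativeness.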

In the second simulation study, the population was partitioned into sixty cells classified by ten income levels and six geographic locations. The six geographical regions were: 1) New England and Middle Atlantic, 2) North Central, 3) South Atlantic, 4) East South Central, 5) West South Central and 6) Mountain and Pacific. Under the assumption that the response rate is poor for low and high income families, the nonresponse pattern in the study omitted all households with incomes less than $8,000 or more than $70,000. Further, in regions 1, 3, and 4, nonresponse was set at under $12,000, and in regions 1 and 6 nonresponse was set at above $50,000.

The results from the two studies are presented in Figure 3. Comparing these results to the population characteristics (Figure 1) shows the larger bias of the probability based estimators. For example, in Study 1, the population total number of positive responses is 5941, whereas the quota sample based estimate is 6004 and the average of the probability sample based estimates is 6184. Further, the individual probability sample estimates range from 5376 to 7004.

Probability sample estimates may be extremely biased, especially when the sample has a large proportion of nonrespondents. Suggested strategies for determining the effect of missing data are follow-up studies and applications of models relating responses from respondents to those of nonrespondents. The first strategy is rarely useful. It is expensive, thus limiting the size of the follow-up study, and, based upon documented evidence, it seldom supplies the required information. Introduction of models after obtaining missing data is questionable since the formulation is confounded with the sampler's bias. If modelling is to be performed, it should be done before the data are collected, consistent with the scientific method.

In practice, the presence of missing data renders a probability sample suspect. This problem does not exist for model based samples, especially those constructed from quota data. First, the model is constructed before obtaining the data, so that the responses of the nonrespondents can be imputed from the model. Second, quota samples are formed so that the individuals selected have covariates balanced against the population's characteristics. Therefore, the characteristics of the nonrespondents are known and this information can be exploited when imputing the missing data.

In the two data sets considered in this section, the quota samples were drawn balanced on the income and geographic variables. Assume that the purchase rate of a product as a function of income is independent of geographical region, but that the number of products purchased is a function of the geographical region. These assumptions can be modelled by partitioning the population into cells A(i,j), i=1,...,10 and j=1,...,6, representing the income and geographic categories. A logistic model is used to estimate the nonresponse rate since the basic input variable is binary: either the household purchased or did not purchase the product. Let Pij be the proportion of positive responses in the (i,j)th cell and let the log odds be Zij = log(Pij/(1 - Pij)). Based upon the assumption that the relationship between Zij and income is independent of geographical region, the nonresponse model is Zij = aj + b(Income) + eij. The seven parameters b and aj, j=1,...,6, were estimated by least squares. The estimated model was used to estimate the nonresponding cells and smooth the data within the cells. The adjusted data were then used to produce an accurate estimate of the number of positive responses in the population. This methodology is not available within the probability sampling framework since there is no available information on the characteristics of the nonrespondents.
Further, the randomization principle, which states that the probability sampling plan creates the only probability distribution for reliable statistical inference, must be discarded in the presence of nonrespondents unless the nonrespondents have the same characteristics as the respondents, which is very unlikely. The nonresponse problem is not serious for samples drawn from panels, if the panel is assumed to be representative of the population. The nonrespondents are easily determined and their characteristics are known, so models can be constructed for estimating the responses of the nonresponding units.
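A hedged sketch of this cell-level adjustment (all cell proportions below are synthetic, for illustration only): the log odds Zij in each income-by-region cell are regressed on region intercepts aj and a common income slope b by least squares, and the fitted model then imputes the log odds, and hence the purchase proportion, for nonresponding cells.

```python
import math
import random

random.seed(5)

# Cells indexed by income level i (1..10, used as the income score) and
# region j (0..5). The log odds Z_ij = log(P_ij / (1 - P_ij)) follow
# a_j + b*income; all values here are synthetic, for illustration only.
I, J = 10, 6
a_true = [-1.2, -0.8, -1.0, -0.6, -0.9, -1.1]
b_true = 0.25

cells = {}
for i in range(1, I + 1):
    for j in range(J):
        z = a_true[j] + b_true * i + random.gauss(0, 0.05)
        cells[(i, j)] = 1 / (1 + math.exp(-z))  # observed proportion P_ij

# Suppose the extreme-income cells did not respond.
responding = {k: p for k, p in cells.items() if 2 <= k[0] <= 9}

# Least squares for (a_0..a_5, b) via the normal equations, solved with
# Gaussian elimination (stdlib only).
def solve(A, c):
    m = len(c)
    for k in range(m):
        piv = max(range(k, m), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        c[k], c[piv] = c[piv], c[k]
        for r in range(k + 1, m):
            f = A[r][k] / A[k][k]
            for col in range(k, m):
                A[r][col] -= f * A[k][col]
            c[r] -= f * c[k]
    x = [0.0] * m
    for k in range(m - 1, -1, -1):
        x[k] = (c[k] - sum(A[k][col] * x[col] for col in range(k + 1, m))) / A[k][k]
    return x

rows, zs = [], []
for (i, j), p in responding.items():
    row = [0.0] * (J + 1)
    row[j] = 1.0          # region intercept a_j
    row[J] = float(i)     # common income slope b
    rows.append(row)
    zs.append(math.log(p / (1 - p)))

m = J + 1
AtA = [[sum(r[u] * r[q] for r in rows) for q in range(m)] for u in range(m)]
Atz = [sum(r[u] * z for r, z in zip(rows, zs)) for u in range(m)]
theta = solve(AtA, Atz)

# Impute a nonresponding cell: income level 10 in region 0.
z_hat = theta[0] + theta[J] * 10
print(round(theta[J], 3), round(1 / (1 + math.exp(-z_hat)), 3))
```

Because the quota design makes the covariates of the nonresponding cells known, the fitted model can fill in their responses, which is the step the paper argues has no counterpart under pure probability sampling.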








Most surveys are multipurpose, so globally optimal designs rarely exist and optimal designs for different purposes are usually in conflict. It is even difficult to design optimal sampling plans for single purpose studies in market research, where well defined sampling frames rarely exist. Probability sampling plans do not attempt to describe the process generating the data; in fact, probabilities are induced by the selection of the sampling plan. In this setting there are no unknown parameters and no discussion of goodness of fit, and the design is primarily based on cost efficiency considerations. Of major concern is the sampling variance, which is usually the smallest source of error in a survey. The distinction between probability samples and model based inference is described by Little (JASA, 1982): "In the randomization approach the population values are treated as fixed, and inferences are based on the probability distribution used to select the sample. In the modeling approach, the population values are treated as realizations of random variables that are distributed according to some model. The model distribution forms the basis of inferences, and the sample selection procedure has an ancillary role, namely to avoid selection bias." Quota sample based surveys are constructed within the restrictions of multistage designs where members of the panel are selected from a well defined sampling frame (for example, households in the United States). Data selected from these panels are randomly chosen from the strata having the same socio-demographic characteristics present in the population to which the product attitude inferences are to be made. The hypothesized models are constructed and then perhaps checked against independently obtained data for model consistency. This methodology is consistent with usual scientific inquiry.
That is: (1) a model is proposed to describe the random phenomena generating the data, (2) data are obtained, (3) parameters are estimated, (4) the model fit is tested, (5) the model is modified if necessary, and (6) it is used for making inferences to the population. In this process randomization is used to guard against selection bias, but once a sample is selected it is unique and the selection process is unimportant. Inferences are based upon the constructed model, which can be used to estimate unobserved data and error sources. The model smooths the data and does not require the assumption of large sample normality. Thus high quality quota data have advantages over probability sampled data, being plentiful, inexpensive and useful input for the formulation of mathematical models. Once the models have been verified they can be used to explain the process generating the data and to predict characteristics of the sampling frame. In the presence of non-response, randomization loses its important properties: the design might be nonignorable, the respondents might not be representative, and the balance on ancillary variables might be lost. None of these problems occur with quota samples. Even in the presence of non-response, the non-respondents and their characteristics are known, so models can be developed. Further, since the quota sample was developed to guarantee balance, these data are useful for developing models that describe the characteristics of the non-respondents and project their responses, in order to generate accurate parameter estimates and their mean square errors.


Aaker, D. A. & George S. Day, "Marketing Research", Wiley, New York, 1989.

Churchill, G. A., "Marketing Research", Dryden, Chicago, 1986.

Deming, W. E. (1953), "On a Probability Mechanism to Attain an Economic Balance Between the Resultant Error of Response and Bias of Nonresponse," Journal of the American Statistical Association 58 (December), 766-783.

Dempster, A. P., N. M. Laird and D. B. Rubin (1977), "Maximum Likelihood from Incomplete Data Via the EM Algorithm (with Discussion)," Journal of the Royal Statistical Society Series B, 39 (1), 1-38.

Godambe, V. P. (1955), "A Unified Theory of Sampling from Finite Populations," Journal of the Royal Statistical Society Series B, 17 (2), 269-278.

Hansen, M. H., W. G. Madow and B. J. Tepping (1983), "An Evaluation of Model-Dependent and Probability-Sampling Inferences in Sample Survey," Journal of the American Statistical Association 78 (December), 776-807.

Horvitz, D. G. and D. J. Thompson (1952), "A Generalization of Sampling without Replacement from a Finite Universe," Journal of the American Statistical Association 47 (September), 663-685.

Jacoby, J. and Handlin, A. (1991), "Non-probability Sampling Designs for Litigation Surveys," Trademark Reporter, in press.

Lipstein, B. (1975), "In Defense of Small Samples," Journal of Advertising Research 15 (February), 33-40.

Little, R. J. A. (1982), "Models for Nonresponse in Sample Surveys," Journal of the American Statistical Association 77 (June), 237-250.

Royall, R. (1970), "On Finite Regression Models," Biometrika 57 (August), 377-387.

Royall, R. and W. G. Cumberland (1978), "Variance Estimation in Finite Population Sampling," Journal of the American Statistical Association, 73 (June), 351-358.

Royall, R. and W. G. Cumberland (1981), "An Empirical Study of the Ratio Estimator and Estimators of the Variance," Journal of the American Statistical Association 76 (March), 66-88.

Rubin, D. B. (1976), "Inference and Missing Data," Biometrika 63 (December), 581-592.

Smith, T. M. F. (1976), "The Foundations of Survey Sampling: A Review (with Discussion)," Journal of the Royal Statistical Society Series A, 139 (2), 183-204.

Sudman, S. (1964), "On the Accuracy of Recording of Consumer Panels," Journal of Marketing Research 1 (May), 14-20.

Wind, Y. and D. Lerner (1979), "On the Measurement of Purchase Data: Surveys Versus Purchase Diaries," Journal of Marketing Research 16 (February), 39-47.

Wiseman, F. and P. McDonald (1979), "Noncontact and Refusal Rates in Consumer Telephone Surveys," Journal of Marketing Research 16 (November), 478-484.

Your Opinion Counts (1986), "Refusal Rate Study," Chicago, Illinois: Marketing Research Association.


