# Discriminating Between Stochastic Models of Brand Choice: Minimum Chi-Square and Bayesian Methods

##### Citation:

*Robert Blattberg and Subrata Sen (1972), "Discriminating Between Stochastic Models of Brand Choice: Minimum Chi-Square and Bayesian Methods", in SV - Proceedings of the Third Annual Conference of the Association for Consumer Research, eds. M. Venkatesan, Chicago, IL: Association for Consumer Research, Pages: 240-257.*

[Research financed in part by National Science Foundation Grant GS-2347. The authors wish to thank Professor Arnold Zellner for his helpful comments.]

During the last ten years, several stochastic models of consumer brand choice have been proposed by various researchers. These models range from the Bernoulli, Markov, Linear Learning, and Probability Diffusion models described in Massy, Montgomery, & Morrison (1970), to the more recently developed New Trier model (Aaker, 1971) and Dual Effects model (Jones, 1971). One of the difficult problems in model development is measuring how well a particular model "fits" a set of empirical data (e.g., consumer panel data). The underlying model assumptions often do not clearly indicate which model is appropriate in a given situation. Hence, it is important to develop systematic techniques for discriminating between alternative models of brand choice.

During the last few years, the most commonly used measure of model fit in marketing has been the minimum chi-square statistic. This statistic was first used in marketing by Morrison (1966) to estimate the parameters of a stochastic brand choice model. The model parameters are estimated by minimizing this chi-square statistic. The minimum chi-square statistic can then be used as a measure of model fit. Recently, however, researchers have proposed alternative criteria for model fit. For example, Aaker (1970) has suggested that model tests should not be restricted to the usual goodness-of-fit test but that each model should be judged in terms of how well it can predict market shares in future time periods.

This paper will describe two techniques of model discrimination: (1) Rao's chi-square technique, and (2) a Bayesian technique. Two-state heterogeneous Bernoulli and Markov models will be used to illustrate the discriminatory power of these two techniques. The two-state heterogeneous Bernoulli model is merely a constrained version of the two-state Markov model. If one of the parameters of the Markov model is constrained, the Bernoulli model is obtained. This makes it possible to use Rao's chi-square test to discriminate between these two models (Rao, 1961). The Bayesian discriminatory procedure consists of a Jeffreys test (Jeffreys, 1948). The test results will consist of the posterior odds of obtaining an observed set of consumer purchase data, given the two models.

Initially, the two model discrimination techniques will be tested on artificial data (simulating consumer panel data) generated from a Bernoulli process and a Markov process. Since the model generating the data is known, the success of the two discrimination techniques can easily be determined. The experimentation will be conducted over a wide range of parameters of the Bernoulli and Markov processes. This paper will only report the results obtained using the artificial data. Application of the two discrimination techniques to actual consumer panel data constitutes the next phase of the study and will be reported later.

LITERATURE SURVEY

This section describes the chi-square goodness-of-fit test developed by Morrison (1966) as well as its amended version used by other researchers like Aaker (1970) and Jones (1970). This description is followed by a discussion of other criteria proposed to evaluate stochastic brand choice models. Aaker (1970) focuses on the importance of a model's ability to predict future market shares while Jones (1970) stresses the importance of a model's ability to provide important insights into the actual mechanisms occurring in the market place.

The Chi-Square Goodness-of-Fit Test

Consumer panel data typically consist of the purchase records over time of a representative sample of N families for a wide variety of product classes. For a given product class, each family's purchase record can be represented by a string of 1's and 0's. A "1" represents purchase of the brand under consideration (henceforth denoted as Brand 1), while "0" represents purchase of any other brand.

Assume that data are available for five consecutive purchases for each of the N families in the panel. Consider the first four purchases. The N families in the panel can be segmented by these four purchases into 2^4 or 16 mutually exclusive and exhaustive categories (e.g., 0000, 0001, 0010, ..., 1111). If a five-purchase sequence is used, 2^5 or 32 such categories would be obtained. The panel data can now be represented in terms of Ni (the number of families whose pattern of purchases is represented by category i) and Ri (the number of families with past history i which purchased Brand 1 on the fifth or most recent purchase). For a four-purchase sequence, i ranges from 1 to 16. In general, i ranges from 1 to 2^m, where m is the purchase sequence length. The expected number of families (for a particular stochastic brand choice model) with past history i which purchase Brand 1 on the fifth trial is denoted by Ei, where:

Ei = Ni P(1|i).

P(1|i) is the theoretical probability that a family with past history i will purchase Brand 1 on the next trial for a particular model. P(1|i) is called the conditional model probability and has to be computed for each model.

Morrison's goodness-of-fit statistic is:

X_M = Σ(i=1 to 2^m) [ (Ri - Ei)² / Ei + (R̄i - Ēi)² / Ēi ]

where R̄i = the number of families with purchase history i who did not purchase Brand 1 on the (m + 1)th purchase = (Ni - Ri), and Ēi = (Ni - Ei) is similarly defined. The statistic X_M is asymptotically distributed chi-square with 2^m degrees of freedom. Usually, the model's parameters are estimated from the data by minimizing this X_M statistic. If q parameters are estimated from the data, the degrees of freedom are reduced to (2^m - q).
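As a minimal sketch (the function and variable names are our own; the data are hypothetical), Morrison's statistic can be computed directly from the category counts and the model's conditional probabilities:

```python
def morrison_chi_square(N, R, p_model):
    """Morrison's goodness-of-fit statistic X_M.

    N[i]       -- number of families with purchase history i
    R[i]       -- of those, the number buying Brand 1 on the next purchase
    p_model[i] -- the model's conditional probability P(1|i)
    """
    x = 0.0
    for n_i, r_i, p_i in zip(N, R, p_model):
        e_i = n_i * p_i          # expected Brand-1 buyers, Ei = Ni * P(1|i)
        e_bar_i = n_i - e_i      # expected non-buyers, Ebar_i = Ni - Ei
        x += (r_i - e_i) ** 2 / e_i + ((n_i - r_i) - e_bar_i) ** 2 / e_bar_i
    return x
```

For example, a model with P(1|i) = 0.5 for a single category of 100 families, 60 of whom bought Brand 1, gives (60 - 50)²/50 + (40 - 50)²/50 = 4.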

Aaker (1970) and Jones (1970), along with a few other researchers, use a slightly different chi-square statistic. This statistic is defined as follows:

X_A = Σ(i=1 to 2^m) (Ni - N Vi)² / (N Vi)

where N = total number of families in the panel = Σi Ni, and

Vi = probability of purchase sequence i for a particular stochastic brand choice model.

X_A is asymptotically distributed as chi-square with (2^m - 1) degrees of freedom, which are reduced to (2^m - 1 - q) when q model parameters are estimated from the data.

The numerical value of the minimum chi-square statistic cannot be used to compare models with different numbers of parameters. To correct for the different degrees of freedom, one must use the p-level associated with a chi-square statistic. The p-level is defined as:

p-level = ∫ from X² to ∞ of f(x) dx

where f(x) is the chi-square density with the appropriate degrees of freedom and X² is the computed value of the statistic.

A low p-level indicates that the particular model is not a viable representation of the process. However, if the sample size is sufficiently large, only a "perfect" model will avoid a low p-level. Thus, the relative sizes of the p-levels should be used in evaluating the fit of several alternative models to a set of consumer panel data. This is the approach taken by Aaker (1970) in comparing the Linear Learning model with the New Trier model. Jones (1970) uses a similar approach to compare three versions of the Dual Effects model with the Probability Diffusion model and the Linear Learning model.
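For even degrees of freedom, the chi-square tail area has a closed form, so the p-level can be sketched without a statistical library (for odd degrees of freedom one would instead use, e.g., scipy.stats.chi2.sf; the function name below is our own):

```python
import math

def chi2_p_level(x_sq, df):
    """p-level = P(chi-square with df degrees of freedom > x_sq).

    Uses the closed-form tail sum, exact when df is even:
    exp(-x/2) * sum_{j=0}^{df/2 - 1} (x/2)^j / j!
    """
    if df % 2 != 0:
        raise ValueError("closed form requires even df")
    t = x_sq / 2.0
    return math.exp(-t) * sum(t ** j / math.factorial(j) for j in range(df // 2))
```

A statistic of 0 gives a p-level of 1, and a large statistic relative to the degrees of freedom drives the p-level toward 0, signaling a poor fit.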

Alternative Approaches to Model Evaluation

An important objective for most stochastic brand choice models is the prediction of future market shares. However, the chi-square goodness-of-fit test involves only the purchases used to estimate the model parameters. Hence, a model could provide a good fit to the first few purchases used to estimate the model's parameters and yet be quite inaccurate in predicting future market shares. Aaker (1970) provides an interesting example of this possibility. In terms of the chi-square goodness-of-fit test, the Linear Learning model appeared to be superior to the New Trier model for two sets of data from a frequently purchased consumer product. However, the p-levels for the New Trier model (0.52 and 0.88 compared to 0.54 and 0.99 for the Linear Learning model) were quite respectable. Thus, the New Trier model could not be rejected out of hand. Market share predictions told a different story. After the first four purchases (which were used to estimate its parameters), the Linear Learning model was consistently inferior to the New Trier model in predicting the future market shares of the two brands. A model (like the Linear Learning model) which contains several parameters can often be fitted quite well to a given set of data. However, a true test of a model's viability is its performance over a hold-out sample which was not used to estimate its parameters. The Bayesian technique described in this paper and Aaker's method are both designed to evaluate a model in terms of its predictive ability.

Jones (1970) used panel data on Crest toothpaste to evaluate several models of brand choice. The data covered the period following the August 1960 endorsement of Crest toothpaste by the American Dental Association (ADA). Jones found that the Linear Learning model and the Probability Diffusion model provided better fits to the data compared to the three versions of the Dual Effects model. Though the p-values of the Dual Effects models were not low enough to discard them immediately, the chi-square goodness-of-fit test did portray them in an unfavorable light. However, a comparative analysis of the parameters of the different models showed that the Dual Effects models were able to indicate complex consumer behavior patterns which could not be adequately modeled by either the Linear Learning model or the Probability Diffusion model. Essentially, the Dual Effects models were able to separate the effect of the ADA endorsement of Crest from the effect of the feedback obtained from purchasing and using the brand. Thus, a model which can provide a better understanding of complex consumer behavior should not be discarded merely because it does not fare as well as alternative models in a goodness-of-fit test.

The next sections describe two other model discrimination techniques: (1) Rao's chi-square technique, and (2) a Bayesian technique. Rao's technique facilitates discrimination between two models, one of which is a constrained version of the other. The Bayesian technique is in the spirit of Aaker's suggestion of judging a model in terms of its power to predict future market shares. The Bayesian technique uses half the data to estimate parameters and uses the remaining data to predict posterior odds of the two models.

MARKOV AND BERNOULLI MODELS

Before discussing the two discrimination techniques, brief descriptions of the Markov and Bernoulli models are provided below. These models are used to illustrate the discriminatory power of the two techniques.

Markov Model of Consumer Behavior

The first-order Markov model states that the last purchase, and only the last purchase, influences the family's current purchase decision. As before, each purchase occasion is represented by a "1" if Brand 1 (as defined earlier) is purchased and by a "0" if some other brand is purchased. This implies that the model of consumer behavior is a two-state, first-order Markov process. If the nth purchase occasion results in the purchase of Brand 1, the probability of purchasing Brand 1 on the next purchase is P1 while the probability of purchasing some other brand is (1 - P1). Similarly, if the nth purchase occasion does not result in the purchase of Brand 1, the probability of purchasing Brand 1 on the next purchase is P2 while the probability of purchasing some other brand is (1 - P2). These probabilities are known as transition probabilities and can be written in terms of a transition probability matrix:

P_M = [ P1    1 - P1 ]
      [ P2    1 - P2 ]

where the rows are indexed by the nth purchase (Brand 1 first, then all other brands) and the columns by the (n + 1)th purchase.

Bernoulli Model

The Bernoulli model states that the current purchase decision is not influenced by any past purchases. For each purchase occasion, the family has a probability, p, of purchasing Brand 1 and a probability, (1 - p), of not purchasing Brand 1. The transition probability matrix is:

P_B = [ p    1 - p ]
      [ p    1 - p ]

Clearly, the Bernoulli model is a special case of the two-state, first-order Markov model. If P1 = P2 = p in P_M, the Bernoulli model is obtained.
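A short sketch (our own illustrative code, not from the paper) makes the nesting concrete: the same simulator produces Bernoulli purchase strings when P1 = P2.

```python
import random

def simulate_purchases(p1, p2, n, first=1, rng=None):
    """Simulate n purchases from the two-state, first-order Markov model.

    After a Brand 1 purchase ("1") the next purchase is Brand 1 with
    probability p1; after any other brand ("0") it is Brand 1 with
    probability p2.  Setting p1 == p2 == p recovers the Bernoulli model.
    """
    rng = rng or random.Random()
    seq = [first]
    for _ in range(n - 1):
        p = p1 if seq[-1] == 1 else p2
        seq.append(1 if rng.random() < p else 0)
    return seq
```

With p1 = p2 the last purchase carries no information, which is exactly the Bernoulli assumption.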

RAO'S CHI-SQUARE MODEL DISCRIMINATION TECHNIQUE

Rao's chi-square model discrimination technique is illustrated in terms of the Brand Loyal Markov model postulated by Morrison (1966) and the Compound Beta Bernoulli model described in Massy et al. (1970, pp. 60-68). The Brand Loyal Markov model states that each family follows a first-order 0-1 process with transition matrix:

P = [ p     1 - p  ]
    [ kp    1 - kp ]

Compared to P_M, the transition matrix of the general Markov model, P1 = p and P2 = kp. k is a parameter of the model and lies between 0 and 1. k is the same for each family. On the other hand, p has a beta distribution over the families in the population, i.e.,

f(p) = [Γ(a + B) / (Γ(a) Γ(B))] p^(a - 1) (1 - p)^(B - 1),    0 ≤ p ≤ 1,

where Γ(·) is the gamma function and a, B > 0.

The mean and variance of p are:

E(p) = a / (a + B)    and    Var(p) = aB / [(a + B)² (a + B + 1)].

Clearly, if k = 1, the Brand Loyal Markov model reduces to the Compound Beta Bernoulli model. Hence, the Compound Beta Bernoulli model is a constrained version (where the parameter k is constrained to be 1) of the Brand Loyal Markov model.

If efficient estimates of the parameters of the two models are obtained, Rao's test (Rao, 1961, pp. 32-33) can be used to discriminate between the two models. The parameters of the two models are estimated in this paper by minimizing Morrison's chi-square statistic, X_M. Minimum chi-square estimates meet the criterion of efficiency, satisfying the requirements for Rao's test. Rao's test consists of the computation of the following test statistic:

X² = Σ(i=1 to 2^m) [ (Ri - EiB)² / EiB + (R̄i - ĒiB)² / ĒiB ] - Σ(i=1 to 2^m) [ (Ri - EiM)² / EiM + (R̄i - ĒiM)² / ĒiM ]

i.e., the minimum chi-square of the constrained (Bernoulli) model less that of the unconstrained (Markov) model, where EiM and ĒiM refer to the expected frequencies (as defined before) for the Markov model and EiB and ĒiB are the expected frequencies for the Bernoulli model. The X² statistic is distributed as chi-square with (q - r) degrees of freedom. q is equal to the number of parameters of the unconstrained model while r refers to the number of parameters of the constrained model. In this particular situation, q = 3 (i.e., a, B, and k) for the Markov model, while r = 2 (i.e., a and B) for the Bernoulli model.
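Since q - r = 1 here, the tail area can be sketched with the standard identity P(chi-square with 1 d.f. > x) = erfc(sqrt(x/2)). The code below is our own illustration; the two chi-square inputs would come from the constrained (Bernoulli) and unconstrained (Markov) minimum chi-square fits.

```python
import math

def rao_statistic(x_constrained, x_unconstrained):
    """Rao's test statistic: the drop in the minimum chi-square when the
    constraint (here, k = 1) is relaxed."""
    return x_constrained - x_unconstrained

def chi2_df1_p_value(x_sq):
    """P(chi-square with 1 degree of freedom > x_sq)."""
    return math.erfc(math.sqrt(x_sq / 2.0))
```

A statistic of 3.84 corresponds to the familiar 5% significance level for one degree of freedom.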

JEFFREYS' TEST FOR DISCRIMINATING BETWEEN MARKOV AND BERNOULLI MODELS

To discriminate between a two-state Markov process and a two-state Bernoulli process, we can use Jeffreys' test (Jeffreys, 1948, Chapter V). Jeffreys' test is a Bayesian technique which is briefly outlined below.

Description of the Jeffreys Test

Panel data provide a history of the purchasing behavior (represented by a series of 1's and 0's as defined earlier) of each family in the panel. Let X represent a particular family's purchasing behavior. Hi represents hypothesis i, i = 1, 2. Let H1 represent the hypothesis that the family's purchasing behavior is generated by a Bernoulli process. H2 signifies that a Markov process generates the family's purchasing behavior. The vector of model parameters (i.e., the elements of the transition probability matrices, P_M and P_B) is represented by θ.

Let

p(Hi) = prior probability that hypothesis i is true

p(θ|Hi) = prior distribution for the parameter θ, given that Hi is true

p(X|θ, Hi) = likelihood function, given the parameter values θ and that Hi is true

p(Hi|X) = posterior probability that hypothesis i is true.

When Hi is true, the correct action is defined as ai and thus our loss, L(ai, Hi) = 0. When Hi is true and we mistakenly take action aj, we incur a loss denoted by L(aj, Hi) for i ≠ j. Thus, we define the following loss table:

          H1 true        H2 true
a1        0              L(a1, H2)
a2        L(a2, H1)      0

To decide which action to take, we use minimum expected loss as our criterion. After observing the sample, X, we choose a1 if

L(a1, H2) p(H2|X) < L(a2, H1) p(H1|X). (1)

We can compute the posterior probability that hypothesis i is true from

p(Hi|X) = p(X|Hi) p(Hi) / [p(X|H1) p(H1) + p(X|H2) p(H2)], (2)

where p(X|Hi) is the predictive probability distribution for X given that Hi is true, and

p(X|Hi) = ∫ p(X|θ, Hi) p(θ|Hi) dθ. (3)

The decision rule (1) can be rewritten as: choose a1 if

p(H1|X) / p(H2|X) > L(a1, H2) / L(a2, H1). (4)

Decision rule (4) implies that we should choose action a1 if our posterior odds ratio favoring H1 is greater than the loss ratio. Substituting equation (2) for p(Hi|X) in (4), the above decision rule can be expressed as: choose a1 if

p(X|H1) / p(X|H2) > [L(a1, H2) / L(a2, H1)] [p(H2) / p(H1)]. (5)

The terms on the right-hand side of (5) are assumed to be known prior to observing the sample, X. The term on the left-hand side of (5) is the ratio of the predictive probability distributions for X, given that Hi is true.
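Decision rule (5) is mechanical once the two predictive probabilities are in hand. A sketch with hypothetical inputs (the function name and defaults are our own; equal losses and equal priors reduce the threshold to 1):

```python
def choose_action(pred_h1, pred_h2, loss_a1_h2=1.0, loss_a2_h1=1.0,
                  prior_h1=0.5, prior_h2=0.5):
    """Choose a1 (act as if H1, the Bernoulli model, is true) when the
    predictive ratio p(X|H1)/p(X|H2) exceeds the loss ratio times the
    prior odds against H1; otherwise choose a2 (the Markov model)."""
    threshold = (loss_a1_h2 / loss_a2_h1) * (prior_h2 / prior_h1)
    return "a1" if pred_h1 / pred_h2 > threshold else "a2"
```

Raising the loss of wrongly accepting H1 raises the threshold, so stronger predictive evidence is required before a1 is chosen.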

Application of Jeffreys' Test to Markov vs. Bernoulli Process

Application of Jeffreys' test requires the development of the predictive distributions for the Markov and Bernoulli models. Equation (3) indicates that development of the predictive distributions requires the definition of (a) a prior distribution, p(θ|Hi), and (b) a likelihood function, p(X|θ, Hi), for each of the two models. For the Markov model, a matrix beta distribution is used for the prior while the likelihood function is represented by a Whittle distribution. These distributions are explained in the Appendix where it is shown that the resulting predictive distribution for the Markov model is the beta-Whittle distribution. For the Bernoulli model, the prior is represented by a beta distribution while a binomial distribution is used for the likelihood function. This results in a beta-binomial predictive distribution for the Bernoulli model (see the Appendix).

Both prior distributions require knowledge of the parameters of the distributions. For the matrix beta prior used for the Markov process, the parameter matrix, M, must be specified (M is described in the Appendix). Similarly, the parameters, r' and n', must be specified for the beta prior distribution used for the Bernoulli process. Our method for determining these parameters is to split the data into two parts. Assume that we have data on (N1 + N2) successive purchases for each family in the panel. The first N1 purchases are used to determine the parameters of the prior distributions while the remaining N2 purchases constitute the observed sample, X, which is used for the Jeffreys test.

Natural Conjugate Distribution

To understand how the Jeffreys test is applied, the concept of natural conjugate distributions must be understood. A natural conjugate family of distributions of the parameters 0 has the following property: If the prior distribution of 0 belongs to a family of natural conjugate distributions, then for any sample n and any values of the observations in the sample, the posterior distribution of 0 must also belong to the same family (DeGroot, 1970, p. 159).

To take advantage of this property, we begin with a relatively diffuse prior for each process, observe a sample (the first N1 purchases of the family), and compute a posterior distribution. The priors we actually use are not diffuse priors but are "uniform" priors. For the Bernoulli process, our prior will be the uniform distribution, which is a beta distribution with r' = 1 and n' = 2, i.e., fB(p|1, 2). For the Markov process, we will use an analogous prior: a matrix beta prior whose parameter matrix M consists entirely of ones. The reason for not using diffuse priors is to avoid zero elements in the parameter matrix of the posteriors. The priors chosen are "close" to diffuse priors and yet avoid the problem of zero elements.

The matrix beta distribution is a natural conjugate prior for P, the transition matrix of a Markov chain. Similarly, the beta distribution is a natural conjugate prior for p, the probability of a success for a Bernoulli process. Hence, the posteriors computed from the first N1 purchases of the consumer will be of the same family as the priors. Thus, the posteriors computed from the first N1 purchases will be a matrix beta and a beta distribution for the Markov and Bernoulli models, respectively. These posteriors can now be used as the priors, p(θ|Hi), for the Jeffreys test which will be run on the last N2 purchases of the family. The process is illustrated by the following diagram for the Bernoulli model:

uniform prior fB(p|1, 2) --(first N1 purchases)--> beta posterior = prior p(θ|H1) --(last N2 purchases)--> Jeffreys test
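For the Bernoulli side, both the conjugate update on the first N1 purchases and the resulting beta-binomial predictive have simple closed forms. The sketch below (our own code, in the beta(r', n') parameterisation used above, with density proportional to p^(r-1)(1-p)^(n-r-1)) omits the matrix beta/beta-Whittle analogue for the Markov side:

```python
import math

def beta_update(r_prior, n_prior, brand1, total):
    """Conjugate update: a beta(r', n') prior plus `brand1` Brand 1
    purchases in `total` trials yields a beta(r' + brand1, n' + total)
    posterior."""
    return r_prior + brand1, n_prior + total

def beta_binomial(r, n, brand1, total):
    """Predictive probability of observing `brand1` Brand 1 purchases in
    `total` trials under a beta(r, n) prior."""
    def beta_fn(a, b):
        return math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return (math.comb(total, brand1)
            * beta_fn(r + brand1, n - r + total - brand1) / beta_fn(r, n - r))
```

Under the uniform prior fB(p|1, 2), every count from 0 to N2 is equally likely: beta_binomial(1, 2, x, n) equals 1/(n + 1) for any x.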

Jeffreys' Test Results

Instead of reporting whether we choose H1 (the Bernoulli model) or H2 (the Markov model), we will report the posterior odds ratio for the two models, which is:

p(H1|X) / p(H2|X) = [p(X|H1) / p(X|H2)] [p(H1) / p(H2)].

This posterior odds ratio for the two models is obtained by multiplying the odds ratios obtained for each family in the panel. The prior odds for the two models will be assumed to be equal, i.e., p(H1) = p(H2) = 1/2.

A COMPARISON OF MODEL DISCRIMINATION TECHNIQUES

This section consists of a brief discussion of the advantages and disadvantages of the Bayesian model discrimination technique described in the previous section. The Bayesian technique is compared first with the chi-square method and then with Aaker's predictive method.

Jeffreys' Test vs. Minimum Chi-Square Method

Compared to the minimum chi-square (MCS) method, the Jeffreys test requires longer purchase histories per family. Applications of the MCS method have typically used five purchases per family. In contrast, the Jeffreys test requires 15 to 30 purchases per family. This is necessary because the first part of the purchasing history is used to estimate parameters which are then used to predict the posterior odds ratios of the two models, given the family's last N2 purchases. However, the information provided by the Jeffreys test is of greater value since it does not merely fit parameters to data (as is done by the MCS method) but also provides an indication of the predictive ability of a model by the use of a hold-out sample. As Aaker (1970) has pointed out, a model which fits the data well (in terms of the MCS method) for the first few purchases may be a poor predictor of future market shares.

Though the MCS method requires only a few purchases per family, the number of families in the sample must be quite large. In most studies, this number has ranged from 500 to over 5,000. This is necessary because the MCS method is a large sample, asymptotic technique. On the other hand, the Jeffreys test requires a sample of only about 50 families.

Because the Jeffreys test requires longer purchase histories, it does not appear to be a useful model discrimination technique for new products. The manager of a new product generally has to determine the underlying model of consumer behavior (and thereby make predictions about its ultimate success) within a short period of the product's introduction. He usually has to make a go/no-go decision about the product long before he can accumulate purchase histories of 15 to 30 purchases. However, for existing products, where long purchase histories are already available, the Jeffreys test appears to be very promising for model discrimination purposes.

Jeffreys' Test vs. Aaker Market Share Prediction Method

The Jeffreys test is very much in the spirit of Aaker's Market Share Prediction method. Both methods use a split-sample approach, using part of the data for parameter estimation and the remaining data for model prediction. However, the Jeffreys test provides a concrete indicator of the predictive ability of the two models: the posterior odds ratio. Aaker (1970, p. 305) provides graphs of the empirical brand shares and the mean value functions of the models. One must judge the viability of the two models by a visual examination of the graphs. One could devise various indicators of predictive accuracy for Aaker's graphs, e.g., mean square error, mean absolute deviation, etc. In our opinion, such measures will not provide an indicator as intuitively satisfactory (from the viewpoint of interpretability) as the posterior odds ratio provided by the Jeffreys test.

In summary, the Bayesian Jeffreys test appears to be a useful model discrimination technique for existing products. It does not require a large number of families, and provides an easy-to-understand indicator of a model's predictive ability. Its one apparent shortcoming is that it requires long purchase histories. On the other hand, it appears to be a very general technique. For instance, the application of the Jeffreys test outlined in this paper can handle n-state Markov processes (where n > 2) quite easily, whereas Morrison's Brand Loyal Markov model (Morrison, 1966) is essentially limited to two states.

MONTE CARLO SIMULATION RESULTS

The previous section offered a theoretical comparison of the Bayesian and Classical approaches. In this section we study how the two approaches discriminate between the Bernoulli and Markov models.

The data we used are artificially generated Markov data following the heterogeneous Markov process with parameters k, a, and B. When k = 1, the data follow a heterogeneous Bernoulli process. For fixed values of a and B, the data generated for different values of k are not independent. The same set of uniform random numbers was used to generate the data, with the numbers in each cell (1111, 1110, ..., etc.) differing only due to different values of k. This was done to isolate the effect of k on our tests. If a different set of random numbers were used to generate the data for each value of k, the results might be attributable to fluctuations in the random numbers rather than to changes in k.
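The common-random-numbers device can be sketched as follows (our own illustrative code; in the full simulation p itself would be drawn from the beta distribution separately for each family):

```python
def purchases_for_k(p, k, uniforms):
    """One family's purchase string from the Brand Loyal Markov process
    (P1 = p after a "1", P2 = k*p after a "0"), driven by a fixed list
    of uniform random numbers so that runs for different k reuse the
    same draws and differ only through k."""
    seq = [1 if uniforms[0] < p else 0]   # first purchase from the base rate p
    for u in uniforms[1:]:
        threshold = p if seq[-1] == 1 else k * p
        seq.append(1 if u < threshold else 0)
    return seq
```

With k = 1 the same uniforms yield the heterogeneous Bernoulli string, so any change in the cell counts across runs is attributable to k alone.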

For the classical approach, we used Morrison's minimum chi-square method described earlier to estimate the parameters of the Bernoulli and Markov models. We also computed the chi-square statistic for Rao's test. The number of observations in our sample is 200.

Table 1 gives the results. We see that for a = 3, B = 2, and k = 1, the parameter estimates for a and B are slightly upward biased, k is accurately estimated, and Rao's test would not reject independence at any reasonable significance level. As k decreases, our parameter estimates for a and B fluctuate slightly for the Bernoulli model. For the Markov model we see that k is always overestimated. Rao's test never rejects the null hypothesis at reasonable significance levels (the p-value never exceeds .73).

The results for a = 6 and B = 4 contrast with those for a = 3 and B = 2. For k = 1 we see that our estimates of a and B are extremely low for the Bernoulli and Markov models compared to the actual values of 6 and 4. The estimate of k is also low, .89, and the p-value for Rao's test is .78. As k decreases from one, the estimates of a and B for both models are still extremely low. The estimates of k are less than the true value for all values of k. We also reject Rao's test of k = 1 for all values of k except when k = 1.

The above results indicate great ambiguity. In one case the estimates consistently underestimate k, whereas in the other case they consistently overstate k. Because, for fixed values of a and B, the same random numbers are used for all k, the results do not offer independent evidence on the estimates of k. However, they indicate that if there is a bias in k, it persists for all k.

An important result is that Rao's test may not reject k = 1 even though k is far from 1 (see a = 3, B = 2: for k = .6, the X² statistic is still only 1.23, which has a p-value of .73). Unfortunately, due to the limited number of simulations, we cannot give general results about the power of Rao's test.

Turning to the Bayesian procedure, three sets of runs were made. The number of observations used to develop the parameters for the prior is N1; the number of observations used to compute the odds ratio is N2. The data were generated in exactly the same way as for the classical approach.

The results are given in Table 2. They indicate that for k = .6 and k = 1, with N1 = 15, N2 = 15, or N1 = 10, N2 = 10, the correct model was chosen. However, for k = .85 and k = .75, the Bernoulli model was chosen when the true model was the Markov model. As the sample sizes N1 and N2 increase from 10, 10 to 15, 15, this result persists. However, as the sample sizes N1 and N2 become large, it can be shown that this bias will disappear. The reason it exists for small samples is that the frequency count in any cell is small for the Markov model. To discriminate between a Markov and a Bernoulli model we need to observe a difference between the (1,1) cell and the (2,1) cell, as well as between the (1,2) and the (2,2) cells. With only 15 observations, we can never observe a large discrepancy. Thus, k has to differ from 1 by enough to make the expected difference between cells (1,1) and (2,1) and between (1,2) and (2,2) large enough to differentiate between the two models.

Summarizing the results, the classical approach for 200 family histories may lead to inaccurate estimates of k, the parameter identifying whether the data come from a Markov or Bernoulli process. For one of the two samples studied, the Bernoulli process is not rejected for any values of k studied. However, the number of replications used in the simulation is so small that it is impossible to generalize these conclusions.

The Bayesian approach discriminates accurately when k = .6. For larger values of k (.75 and .85) and N1 = 15, N2 = 15, the Bayesian procedure inaccurately gives the posterior odds in favor of the Bernoulli model. These results are due to the small sample sizes chosen. However, for panel data it is unreasonable to expect more than 30 observations per family. Thus, when using actual marketing data, the Bayesian approach may lead to incorrect classification if k is greater than .75 but less than one. However, when k is sufficiently different from one, the Bayesian approach correctly identifies the true model.

REFERENCES

Aaker, D. A. A new method for evaluating stochastic models of brand choice. Journal of Marketing Research, 1970, 7, 300-306.

Aaker, D. A. The New Trier stochastic model of brand choice. Management Science, 1971, 17, B435-450.

DeGroot, M. H. Optimal statistical decisions. Hightstown, N.J.: McGraw-Hill, 1970.

Jeffreys, H. Theory of probability. (2nd ed.) Oxford: Oxford University Press, 1948, Chapter V.

Jones, J. M. A comparison of three models of brand choice. Journal of Marketing Research, 1970, 7, 466-473.

Jones, J. M. A stochastic model for adaptive behavior in a dynamic situation. Management Science, 1971, 17, 484-497.

Martin, J. J. Bayesian decision problems and Markov chains. New York: Wiley, 1967.

Massy, W. F., Montgomery, D. B., & Morrison, D. G. Stochastic models of buying behavior. Cambridge, Mass.: M.I.T. Press, 1970.

Morrison, D. G. Testing brand-switching models. Journal of Marketing Research, 1966, 3, 401-409.

Rao, C. R. A study of large sample test criteria through properties of efficient estimates. Sankhya, 1961, A23, 25-40.

----------------------------------------

##### Authors

Robert Blattberg, Graduate School of Business, University of Chicago

Subrata Sen, Graduate School of Business, University of Chicago

