The Role of Meta-Analysis in Consumer Research

Michael J. Houston, University of Wisconsin, Madison
J. Paul Peter, University of Wisconsin, Madison
Alan G. Sawyer, Ohio State University
ABSTRACT - Meta-analysis is a quantitative approach to the integration of findings from several individual studies of a research question. It is the statistical summary of these findings and seeks to explain the observed variations, if any, in findings across studies. This paper attempts to offer insights into the conduct of a meta-analysis by discussing it in the context of the stages of a primary research study, offering useful examples of existing meta-analyses, and pointing out some limitations of meta-analytic research.
[ to cite ]:
Michael J. Houston, J. Paul Peter, and Alan G. Sawyer (1983), "The Role of Meta-Analysis in Consumer Research", in NA - Advances in Consumer Research Volume 10, eds. Richard P. Bagozzi and Alice M. Tybout, Ann Arbor, MI: Association for Consumer Research, Pages: 497-502.

Advances in Consumer Research Volume 10, 1983      Pages 497-502






As consumer behavior increasingly takes on the nature of a discipline in and of itself, a vast amount of consumer research is accumulating. Subareas within consumer behavior, each with several active researchers, have developed or are emerging. As their research efforts proliferate, it becomes difficult for the individual student to keep abreast of the findings that are reported, let alone sort them out. When such a situation exists, a review paper that summarizes the existing findings and attempts to resolve any inconsistencies among them is a welcome addition to the literature.

The typical review paper is narrative in its presentation. After accumulating (perhaps selectively) several studies on an issue, the author attempts to extract the findings from each study. The findings are then summarized and integrated for the reader, usually with an accompanying interpretation of the entire body of findings. Several narrative or qualitative reviews exist in the consumer behavior literature, perhaps the best known of which is Wilkie and Pessemier's (1972) integration of the issues and findings on multi-attribute models of consumer attitudes.

The narrative review might suffice if the number of studies being integrated is not too large (although any review of only a few studies might be considered premature). However, as the number of studies being integrated increases, several difficulties can be encountered in the narrative approach. First, the results of several studies can be likened to the many data points that result from a single primary research project. Extracting meaning from the many studies in narrative fashion is not unlike an attempt to extract meaning from a raw data matrix prior to reduction. It is a prodigious task usually beyond the objective capabilities of any one individual, especially when conflicting findings exist within the body of studies. Second, this situation enhances the likelihood that the reviewer will impose his or her own biases on the integration task. Thus, the conclusions of any one review might be as much a function of the biases of the reviewer as of the studies themselves. Miller (1977) provides an excellent example of different reviewers of research on the psychological benefits of drug therapy reaching quite different conclusions. The disparity in the conclusions of each review was attributed to two main factors: differences in the sets of studies included in each review and differences in the interpretation of the findings of studies common to the reviews.

The purpose of this paper is to describe an alternative to the narrative form of review that minimizes the difficulties and subjective factors that plague such an approach. This approach is known as meta-analysis, which is really a label for a set of attributes that characterize an integrative effort rather than a specific technique. Meta-analysis offers a way of integrating the findings of a large number of studies of an issue in a more objective fashion without limiting the creative input of the reviewer.


Meta-analysis is a quantitative approach to the integration of findings from several individual studies of a research question. It is the statistical summary of these findings and seeks to explain the observed variation, if any, in findings across studies. Meta-analysis treats the findings of individual studies as a dependent variable and examines these findings as a function of one or more independent variables. Thus, meta-analysis is the application of the principles of primary research methodology to the review and integration of the findings of a body of studies. Its closest analogue in the primary research domain is probably the sample survey.


If we view meta-analysis as the application of primary research procedures to the integration of findings from a set of studies, the implementation of meta-analysis can be viewed as a set of stages similar to those that occur in a primary research investigation. The implementation of meta-analysis can thus be discussed in terms of problem selection, identification of variables, operational definition of variables, sampling, data analysis, and interpretation. Our discussion of meta-analysis is necessarily superficial at each of these stages. With an eye towards presenting a flavor of the meta-analytical perspective, central issues and procedures are discussed. More penetrating treatments of many of these issues are provided by Cooper and Rosenthal (1980), Glass (1976, 1977, 1980), Rosenthal (1978), and, especially, Glass, McGaw, and Smith (1981). Table 1 summarizes the key stages in a meta-analysis and offers a framework for the discussion in this section.



Problem Selection

The fundamental criterion for problem selection in a meta-analysis is similar to that for a primary research study, i.e., the importance of the issue. In primary research we seek a meaningful issue in need of empirical investigation, especially one that has received little or no previous empirical attention. In meta-analytic research we seek a meaningful issue in need of a synthesis of previous research findings, i.e., one that has received considerable previous empirical attention such that an additional primary research study will have little marginal utility.

A useful proxy for the importance of an issue in a meta-analytic sense is, of course, the mere number of studies that have examined it. A research area ripe for meta-analytic treatment is one characterized by a dependent variable that has been examined as a function of a common set of independent variables across studies. In consumer research it is common for a particular construct to become a popular dependent variable in many different studies. However, the commonality of independent variables across these studies is often minimal. This makes the meta-analytic task more difficult. The ideal meta-analysis is one which quantitatively synthesizes the findings of several studies examining the same functional relationship.

"Problem selection" is perhaps somewhat of a misnomer for meta-analysis. It implies that the researcher will peruse the literature in search of an issue ripe for meta-analysis. While a useful result might occur from such behavior, the ideal meta-analyst is one who has been an active researcher in the area. In this way a meta-analysis will emerge from the previous efforts of an individual intimately involved with the topic and more likely to include a more complete set of studies.

Identification of Variables

As with primary research, the scope of variables in a meta-analysis can be few or many. Regardless of the number, the central variable is the findings of each study included in the review. Each study's result is an individual data point, and the interest lies in the pattern of these data points across studies. It is the nature of the pattern that suggests the need for additional variables. When the findings of the studies are consistent (e.g., all or most of the studies reveal a positive effect), the focus may be restricted to describing the magnitude of this effect using a summary statistic (e.g., average effect size).

When the pattern of findings across studies reveals inconsistent or conflicting results, meta-analysis becomes a more meaningful and powerful approach to integration and requires additional variables to help explain or resolve the inconsistencies. In this situation the findings of studies become a dependent variable that is examined as a function of a set of independent variables designed to disentangle the disparity in findings. Characteristics of the studies are included as independent variables to determine if there are common features to studies exhibiting similar findings.

Glass, McGaw, and Smith (1981) suggest two general categories of study characteristics to consider as independent variables in a meta-analysis. Substantive characteristics refer to those features of a study that are specific to the problem being studied. For example, consider a meta-analysis that might be done on studies that have examined the effectiveness of the foot-in-the-door persuasion technique. An important substantive variable across studies might be the nature of the behavior being sought. In consumer studies the behavior will be of a buying nature. In other studies it will be of a nonbuying nature (e.g., contributing to a political campaign). The inclusion of this substantive variable might reveal variations in the effectiveness of the foot-in-the-door technique across behavioral contexts.

Methodological characteristics refer to the research design features of the studies being integrated. The inclusion of such variables allows the assessment of the effect of research design variations on the findings of studies. The scope of methodological characteristics to include as independent variables encompasses virtually all dimensions of research design. Obvious candidates in consumer research studies would include sample size, probability vs. nonprobability sample designs, student vs. nonstudent subjects, random assignment vs. matching, reliability estimates of measures, single-item vs. multiple-item measures, and so on.

While the nature of substantive and methodological variables differs, the purpose of their inclusion as independent variables in a meta-analysis is the same:

. . . one wants to learn whether the findings differ depending on certain of the characteristics of the studies. A meta-analysis seeks a full, meaningful statistical description of the findings of a collection of studies, and this goal typically entails not only a description of the findings in general but also a description of how the findings vary from one type of study to the next (Glass, McGaw, and Smith 1981, pp. 78-9).

Operational Definitions of Variables

As with primary research, the variables included in a meta-analysis must be operationalized. The findings and, if appropriate, the characteristics of the studies must be expressed on common scales if a statistical integration of the studies is to occur. A number of alternative procedures are available for operationalizing the dependent variable of a meta-analysis, i.e., study findings. If interest lies in merely summarizing the body of findings without regard to how they vary across different types of studies, several straightforward procedures are available. Such procedures include counting the number of studies exhibiting statistically significant positive, statistically significant negative, or statistically insignificant relationships (voting method), adding p-values, adding logs of p, adding t-values, adding Z-values (weighted or unweighted), testing mean p-values, and testing mean Z-values. Rosenthal (1978) provides a useful discussion of the advantages and limitations of these methods.
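As a concrete illustration, one of the simpler procedures Rosenthal reviews, adding Z-values (Stouffer's method), can be sketched in a few lines. The p-values below are hypothetical, and the sketch assumes one-tailed p-values from independent studies:

```python
from math import sqrt
from statistics import NormalDist

def stouffer_combined_p(p_values):
    """Combine one-tailed p-values from k independent studies by
    adding Z-values: Z = sum(z_i) / sqrt(k) (Stouffer's method)."""
    nd = NormalDist()
    zs = [nd.inv_cdf(1.0 - p) for p in p_values]  # z-score for each p
    z_combined = sum(zs) / sqrt(len(p_values))
    return 1.0 - nd.cdf(z_combined)

# Three hypothetical studies, each individually inconclusive,
# jointly yield a clearly significant combined result:
p = stouffer_combined_p([0.06, 0.10, 0.04])
```

Note how three studies that each hover near the conventional .05 cutoff combine to a p well below it, a pattern the simple voting method would obscure.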

A key concern in measuring study findings is to capture the strength as well as the direction of findings. Glass, McGaw, and Smith (1981) argue that the most meaningful measure of the strength and direction of the findings of experimental studies is effect size, measured as follows:

Δ = (XE - XC)/SC

where

Δ = effect size

XE = mean of the experimental group

XC = mean of the control group

SC = control group standard deviation

The average effect size across studies can be used to summarize the strength and direction of the entire body of findings. If the reviewer is interested in examining study findings as a function of the substantive and methodological characteristics of the studies, the effect size from each individual study can serve as the dependent variable in such an analysis.
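A minimal sketch of these computations, with made-up group means and control-group standard deviations standing in for three studies:

```python
def glass_delta(mean_exp, mean_ctrl, sd_ctrl):
    """Effect size as defined above: the experimental-control mean
    difference standardized by the control group's SD."""
    return (mean_exp - mean_ctrl) / sd_ctrl

# Hypothetical (experimental mean, control mean, control SD) triples:
studies = [(52.0, 48.0, 10.0), (101.0, 95.0, 12.0), (5.8, 5.5, 1.0)]
deltas = [glass_delta(*s) for s in studies]   # one data point per study
mean_delta = sum(deltas) / len(deltas)        # summary of the body of findings
```

Each value in `deltas` could then serve as the dependent variable in an analysis of substantive and methodological study characteristics.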

When correlational rather than experimental studies are being integrated, the above direct measure of effect size is obviously not applicable. The results of individual studies will usually be represented by correlation coefficients. Values of r or r2 (or some transformation of them) will enter the meta-analysis as dependent variables.
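One standard transformation for this purpose (common practice, though not prescribed in the text) is Fisher's r-to-z: correlations are transformed, averaged, and transformed back, which behaves better than averaging r directly. The correlations below are hypothetical:

```python
from math import atanh, tanh

def mean_correlation(rs):
    """Average study correlations via Fisher's r-to-z transform."""
    zs = [atanh(r) for r in rs]      # r -> z
    return tanh(sum(zs) / len(zs))   # mean z -> back to r

# Hypothetical correlations reported by three studies:
r_bar = mean_correlation([0.30, 0.45, 0.10])
```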

When a meta-analysis involves independent variables representing study characteristics, the measurement of the independent variables obviously becomes an issue. Unfortunately, there are no straightforward a priori guidelines for determining or measuring the independent variables of a meta-analysis. Three general points can be made, however. First, determination and measurement of variables for meta-analytic purposes requires that the studies be investigated before the variables are selected. In other words, the determination of study characteristics to be included can only occur after the studies have been screened for commonalities (as well as differences). With the knowledge obtained, the reviewer can then identify independent variables for study and develop a coding scheme for them. This is why an active researcher in the area may well be a more efficient and appropriate meta-analyst.

Once the independent variables have been determined, the reliability and validity of their representation must be considered. Thus, the second and third key points deal with the psychometric aspects of a meta-analysis. Reliability issues in a meta-analysis can be thought of in terms of intercoder reliability. Steps must be taken to ensure unambiguous coding instructions in order to reduce coding errors. Finally, the validity of meta-analytic measures is perhaps the most elusive aspect of measurement. There are no technical procedures specifically designed for assessing the validity of meta-analytic measurement, although some conventional procedures may be capable of being extended to this type of research. Insights into validity issues may be obtained from a thorough reading of existing meta-analyses before beginning data collection.
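Intercoder reliability can be quantified with Cohen's kappa, a chance-corrected agreement index commonly used for exactly this kind of categorical coding (the coders and codes below are hypothetical):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders who each
    assigned one categorical code per study."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    ca, cb = Counter(codes_a), Counter(codes_b)
    expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Two coders classify five studies by research setting:
coder_a = ["lab", "lab", "field", "lab", "field"]
coder_b = ["lab", "field", "field", "lab", "field"]
kappa = cohens_kappa(coder_a, coder_b)
```

A low kappa would signal that the coding instructions for that study characteristic are still ambiguous.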


Sampling

Meta-analytic sampling is concerned with the selection of studies to be included in the quantitative review. Rules of sampling that apply to survey research are relevant here. At a minimum, a representative sample of studies is desired. Yet the population of interest must be determined. Thus, the meta-analyst should probably think in terms of attempting a census of studies concerning the research question.

It is in the development of the sampling frame where controversy surrounding meta-analytic sampling arises. The issue lies in whether all studies on a relationship should be eligible or only those meeting certain criteria (e.g., only those conducted in the last 15 years, only those meeting a minimum quality standard such as appearing in a refereed journal). Glass, McGaw, and Smith (1981) offer the most encompassing resolution of this issue. They argue that all studies be eligible for inclusion and that differences between studies regarding age, quality, etc., be incorporated as independent variables in the analysis. This approach results in a more complete meta-analysis but does magnify the task of searching for and coding studies.

Data Analysis

The essence of meta-analysis is data analysis. It is the reason that studies were brought together in the first place--to statistically summarize their findings as a body of data in and of itself and to extract meaning from it. The scope of statistical procedures available to the primary researcher is available to the meta-analyst. Depending on the research question, univariate, bivariate, or multivariate techniques can be used in a meta-analysis. Moreover, the reviewer's perspective on the data can be exploratory or confirmatory, descriptive or inferential. Whatever the perspective, meta-analysis seeks data reduction for the purpose of interpretation. The primary objective is to synthesize the literature on a topic and, hopefully, offer generalizations concerning the current status and future needs for research in the area.
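As a small sketch of the workhorse analysis described above (regressing study findings on coded study characteristics), the following uses hypothetical effect sizes and two made-up binary study characteristics: student sample and laboratory setting. The tiny OLS routine is written out in plain Python only to keep the example self-contained:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination -- adequate for the handful of
    coded characteristics in a small meta-analysis."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    A = [row[:] + [b] for row, b in zip(xtx, xty)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (A[r][k] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical data: one effect size per study plus two coded
# study characteristics, with an intercept column.
effect  = [0.45, 0.52, 0.20, 0.25, 0.48, 0.18]
student = [1, 1, 0, 0, 1, 0]
lab     = [1, 1, 0, 1, 1, 0]
X = [[1.0, s, l] for s, l in zip(student, lab)]
intercept, b_student, b_lab = ols(X, effect)
```

Here a positive coefficient on the student-sample indicator would flag a methodological characteristic that systematically shifts reported effects.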


Interpretation

The intended outcome of a meta-analysis is a more objective, impartial basis for interpreting the findings of many studies than a narrative approach provides. Nonetheless, a meta-analysis and a meta-analyst are not without imperfections and limitations at this stage. Meta-analysis should not be accepted as providing generalizations beyond the body of research that was reviewed. For example, if only laboratory studies were reviewed, a meta-analysis of their findings does not make these findings generalizable to other settings. Finally, the liberties the meta-analyst has in attaching meaning to statistical results are no greater than in primary research. As Cotton and Cook (1982, p. 182) point out in a comment on meta-analytic procedures, "their interpretation like that of any statistical procedure, depends as much on the wisdom of the investigator as on the outcome of the test itself."


Glass et al. (1981, pp. 24-26) cite over 40 examples of meta-analysis from the social science literature through 1980. Two of these, Sudman and Bradburn (1974) and Schwab, Olian-Gottlieb and Heneman (1979), are discussed here, as well as four recent meta-analyses from the marketing/consumer behavior literature and one recent example from psychology.

Sudman and Bradburn (1974) performed a meta-analysis on several hundred studies of response effects in surveys. A total of 46 independent variables were coded and divided into three groups: task variables, interviewer role, and respondent role. These variables were investigated to determine their ability to explain response effects in a variety of marketing and other social science studies. As a measure of relative effect size, Sudman and Bradburn computed an index of how much difference a particular variable (e.g., race of interviewer) makes to a particular response category relative to the standard deviation of the responses for the sample as a whole. By employing this formal, quantitative approach, it was determined that in general, the variables derived from the nature and structure of the task are more important than respondent or interviewer characteristics. The authors also pointed out that unlike qualitative approaches, this quantitative approach gives a less biased indication of the importance of variables, forces more careful attention to variable definition, and most importantly, allows ranking of the importance of the independent variables. This study well illustrates the value of meta-analysis for methodological as well as substantive research issues.

Schwab, et al. (1979) performed a statistical review of between-subjects expectancy theory research where variance explained in effort and performance was the dependent variable and various characteristics of effort and performance and force-to-perform measures served as the independent variables. One-hundred sixty (160) observations were derived from 39 studies. A multiple regression analysis found that four variables (self-report or quantitative measures of effort and performance rather than ratings by someone else, 10-15 outcomes in the force measure rather than a greater or smaller number, outcome valence scaled with positive numbers and described in terms of desirability rather than importance, the force measure contained either no assessment of expectancy or an assessment that confounded expectancy and instrumentality) accounted for 42% of the variance in the results obtained in the studies. By combining studies quantitatively and investigating these methodological concerns, the authors supported "the nagging suspicion that expectancy theory over-intellectualizes the cognitive processes people go through when choosing alternative actions (at least insofar as choosing a level of performance or effort is concerned)" (p. 146).

A marketing meta-analysis that focused on effect size was Clarke's (1976) review of research assessing the duration of advertising effects on sales. Clarke analyzed 69 studies, which included some for which the effects of advertising were not statistically significant. Although Clarke did not calculate R or partial r as a measure of the effect size, he did present both the regression coefficient of the lagged dependent variable and an estimate of the implied duration of 90% of advertising's effect. After the elimination of eleven studies that yielded duration estimates longer than ten years (which Clarke believed implied a nonsensical result), the quantitative meta-analysis yielded several important insights not available from a more traditional qualitative literature review (e.g., Pollay 1979). First, the results indicated that the estimate of the duration of advertising effect was contingent upon the data interval. Shorter intervals (weekly, monthly, or bimonthly) indicated shorter estimates of the duration of advertising effects than longer data intervals (quarterly, annually). From econometrics and advertising theory, Clarke devised a test for interval bias; this test suggested that the annual data intervals were more apt to produce estimates that were biased upwards. Perhaps most important, Clarke was able to conclude that, contrary to past beliefs, advertising effects are likely to last for no more than three to nine months and not years. Clarke summarized by stating that, although he had to make some subjective decisions in order to produce comparable model specifications, "in isolation, none of the papers gives a satisfactory answer to the question of how long advertising affects sales. By putting them together, as has been done here, one achieves greater confidence in the result" (p. 355).
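Clarke's implied-duration figure can be reconstructed under a standard Koyck-model assumption: if the coefficient on lagged sales is the carryover rate, the fraction of advertising's cumulative effect still unrealized after t periods is carryover**t, so 90% of the effect has occurred when carryover**t = 0.10. The carryover values and data intervals below are hypothetical, chosen only to illustrate the interval-bias pattern he found:

```python
from math import log

def implied_90pct_duration(carryover, interval_months):
    """Months until 90% of advertising's cumulative effect is felt,
    given a Koyck carryover coefficient estimated on data of the
    stated interval: solve carryover**t = 0.10 for t periods."""
    periods = log(0.10) / log(carryover)
    return periods * interval_months

# Hypothetical estimates: monthly data with modest carryover versus
# annual data with high carryover.
monthly = implied_90pct_duration(0.5, 1)
annual = implied_90pct_duration(0.7, 12)
```

The monthly estimate works out to roughly three months while the annual one exceeds six years, mirroring Clarke's finding that longer data intervals produce longer, and likely upward-biased, duration estimates.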

Yu and Cooper (forthcoming) conducted a meta-analysis of techniques used to increase response rates to questionnaires. Conclusions were drawn by combining 497 response rates found in 93 journal articles. Both the statistical significance of the accumulated results for a particular technique and the effect size (phi-coefficient) associated with a technique were computed. It was found that response rates were increased by personal and telephone (versus mail) surveys, the use of prepaid or promised incentives, nonmonetary premiums and rewards, and increasing amounts of monetary rewards. Other facilitators were preliminary notification, foot-in-the-door techniques, personalization, and follow-up letters. The authors noted that the vast literature on the topic of response rates makes "qualitative reviews extremely difficult to perform and their results necessarily imprecise in nature" and that quantitative reviewing helps increase the objectivity and reliability of review conclusions.

Two consumer research studies which can be considered under the rubric of meta-analysis were conducted by Farley, Lehmann and Ryan (1981a, b). In Farley et al. (1981a), both MANOVA and discriminant analysis were used to investigate the findings of 37 tests of the Fishbein Behavioral Intentions Model. The three criterion variables were the average beta weights for attitudinal and normative components and the average multiple correlation coefficient. The five predictor variables were the form of the attitudinal variable, the form of the normative variable, whether the study was experimental or not, the researcher's dominant discipline affiliation and whether subjects were students or "real world" respondents. Of the five main effects in the analysis, only the discipline of the researcher was concluded to have a large effect.

In Farley, et al. (1981b), the elasticities from estimates of parameters in four studies of the Howard and Sheth model were pooled in order to assess systematic differences related to variables and to study characteristics. Few elasticities were found to differ from the overall mean other than those associated with controllable exogenous variables such as price and distribution. Situational factors such as socio-demographics and study-specific factors had little impact on the elasticities.

A final example of a systematic approach to review is Hyde's (1981) meta-analysis of previous studies of whether males or females are superior in terms of several dimensions of cognitive ability. Previous qualitative literature reviews had concluded that differences in various abilities were "well-established." Hyde described the obtained effect size in each study in terms of both w2--an estimate of the percentage of total variance explained by the sex variable--and d--the ratio of the difference in group means to the standard deviation across groups. Hyde found 27 studies of verbal ability, 16 studies of quantitative ability, 10 studies of visual-spatial ability, and 20 studies of field articulation. For the studies which offered sufficient information to calculate the effect size estimates, the respective median w2 and d values were .01 and .24, .01 and .43, .043 and .45, and .025 and .51 for the four measures of intellectual ability. Hyde suggested that the traditional qualitative literature reviews, based simply on the number of studies which found statistically significant results, may have misleadingly communicated the impression that the moderately consistent statistically significant sex differences were large when in fact they explained only from 1 to 4% of the variance and averaged less than .5 of the population standard deviation. Hyde concluded that, "Of course, a small effect might still be an important one. But at least the reader would have the option of deciding whether a statistically significant effect was large enough to merit further attention, either in teaching or in research" (p. 900).
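Hyde's point that "statistically significant" and "large" are different claims can be illustrated with the standard equal-group-size approximation relating d to proportion of variance explained, r2 = d2/(d2 + 4). This approximation is not from Hyde's paper, and r2 is only roughly comparable to the w2 values she reports, but applied to her median d values it yields similarly small percentages:

```python
def variance_explained_from_d(d):
    """Equal-group-size approximation: r^2 = d^2 / (d^2 + 4)."""
    return d * d / (d * d + 4.0)

# Hyde's median d values for the four ability domains:
pcts = [variance_explained_from_d(d) for d in (0.24, 0.43, 0.45, 0.51)]
```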

These examples illustrate the wide range of topics which can be addressed with meta-analysis as well as the wide range of statistical procedures which can be employed. In addition, they illustrate how both substantive and methodological factors can be used to attempt to explain research findings.


There are a number of problems in conducting meta-analyses. Among these difficulties are the quantification, interpretation, and generalization of various types of effect size measures. For example, some such measures estimate the ratio of explained to total variance (such as R2 or w2). In quantifying the percentage of explained variance, researchers should recognize that total variance is increased by measurement and treatment unreliability, heterogeneous subjects, and poorly controlled research procedures (Sechrest and Yeaton 1981a, b). Experimental researchers can also influence the amount of explained variance by restricting or magnifying the manipulation of an independent variable.

Independent variables which are qualitative or categorical present particular interpretation problems. Such variables often have no conceptually meaningful or practically important characteristics in common within or across studies; the number of "levels" of such variables is infinite and any estimates of the "size" of their effects are very difficult to interpret. Finally, although estimates of percentage of explained variance may provide a common index for comparison, the above problems of the influence of individual characteristics of particular studies and manipulations within a study make it very difficult to meaningfully generalize effect sizes or to compare them across a set of different studies as in a meta-analysis. However effect sizes are estimated, these descriptive statistics are more generalizable if the levels of the independent variables are a random subset of all levels of interest (Glass and Hakstian 1969) and orthogonal to other independent variables (Green, Carroll and DeSarbo 1978; LaTour 1981a).

Fortunately, other approaches and measures of effect size are available for quantitatively summarizing research. As previously noted, Rosenthal (1978) has discussed the advantages and limitations of nine relatively simple approaches to summarizing results. LaTour (1981a, b) recommends the use of a contrast estimate to quantify effect size since it eliminates many of the problems of explained variance estimates. However, these methods seem most appropriate for the common 2 x 2 research design and are difficult to use and interpret with more complex designs (Glass and Hakstian 1969). Glass, McGaw and Smith (1981, p. 102) recently concluded that, "The findings of comparative experiments are probably best expressed as standardized mean differences between pairs of treatment differences." They further recommend against pooling of within-treatment variances of both control and treatment conditions and suggest that, usually, the control group variance should be used.

In addition to the problem of meaningfully summarizing and comparing study results, a meta-analysis often encounters other formidable obstacles. One problem involves the search for a census of studies, including the unpublished ones that likely have smaller effect sizes. For studies that are available, there is often insufficient information to be able to calculate effect sizes, and study authors must be contacted. Unfortunately, it is also often difficult to obtain sufficiently detailed descriptions of study methods and to code these study characteristics so their effects can be assessed in the meta-analysis. Small samples of studies and confounded study characteristics also make it difficult to disentangle main effects. (See Farley, Lehmann and Ryan 1981b for an example of how to deal with the potential negative degrees of freedom issue.) Main effects across studies are much easier to detect than most complex interactions. An opposite problem is that, if all surveyed studies use the same procedure, the effect of that method cannot be assessed (e.g., Cartwright 1973). One important outcome of a meta-analysis might be a specification of types of studies that would fill an existing void and allow an examination of the effects of variables that cannot currently be meaningfully evaluated.
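The file-drawer concern raised above (unretrieved studies with smaller effects) has a well-known quantitative counterpart that fits this framework, Rosenthal's fail-safe N: the number of unretrieved null-result studies it would take to drag the combined one-tailed significance level above .05. The Z-scores below are hypothetical:

```python
def fail_safe_n(z_scores):
    """Rosenthal's file-drawer estimate: the number of additional
    Z = 0 (null-result) studies needed to pull the Stouffer combined
    Z below the one-tailed .05 cutoff of 1.645."""
    k = len(z_scores)
    total = sum(z_scores)
    return max(0.0, total * total / 1.645 ** 2 - k)

# Hypothetical Z-scores from four retrieved studies:
n_fs = fail_safe_n([2.1, 1.8, 2.5, 1.2])
```

A small fail-safe N relative to the plausible size of the file drawer warns that a combined result may rest on publication bias.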

It should now be obvious that a meta-analysis, though quantitative, depends on many subjective researcher decisions, and there is much opportunity for disagreement (e.g., Cotton and Cook 1982, and Johnson et al. 1982). Perhaps because the publication of a meta-analysis carries an aura of finality, it seems very common for researchers to disagree about the many decisions involved in a meta-analysis and, hence, challenge the conclusions. For example, Stanley and Benbow (1982) challenged Hyde's meta-analysis of gender differences in quantitative ability. By analyzing only males and females who achieved high scores on a standardized mathematical achievement test, Stanley and Benbow found that males were much more likely to score high than females. They conclude that, "It seems to us that much research into causes and remedies is sorely needed, rather than further efforts trying to minimize the magnitude of sex differences" (p. 972). Weinberg and Weiss (1981) have disputed some of the analysis decisions in Clarke's meta-analysis of advertising carryover as well as the statistical validity of his conclusion about data interval bias. Weinberg and Weiss's criticisms include a failure of Clarke's analysis to allow for situational contingencies such as product class, the combination of brand loyal models with Koyck models when the former do not distinguish nonadvertising effects from advertising effects, model misspecification, and a publication bias in favor of statistically significant advertising carryover effects which, in turn, are related to the data aggregation level (Weiss and Windall 1977).


We predict that meta-analysis will have an important impact on consumer research in the next decade. Many of these influences are ones we hope for because they are clearly positive; there are other outcomes, however, that we fear will happen.

On the positive side, we hope and predict that empirical researchers and journal editors will increasingly become aware of the need to describe both study methods and results more completely and precisely. If, in writing up methods and results sections, researchers asked themselves whether they could accurately code the method for use in a meta-analysis, method descriptions might be more complete. If journal space is a problem, perhaps, as Greenwald (1976) demanded, a copy of the data, treatment means, standard deviations, correlation matrices, measures of effect size, and statistical power, along with a detailed description of the method and copies of various materials such as questionnaires and coding forms, could be kept in archives in the journal's offices.
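As a minimal illustration of why such complete reporting matters, even a bare t statistic together with the group sizes lets a later reviewer recover standardized effect sizes. The conversion formulas below are standard, but the reported values are hypothetical.

```python
# Sketch of effect-size recovery from minimal reported statistics
# (hypothetical values): a t statistic plus group sizes is enough to
# reconstruct both d and r metrics for a meta-analysis.
import math

def d_from_t(t, n1, n2):
    """Convert an independent-samples t statistic to Cohen's d."""
    return t * math.sqrt(1.0 / n1 + 1.0 / n2)

def r_from_t(t, df):
    """Convert a t statistic to a point-biserial correlation r."""
    return math.sqrt(t**2 / (t**2 + df))

t, n1, n2 = 2.5, 30, 30                     # hypothetical reported values
print(round(d_from_t(t, n1, n2), 3))        # standardized mean difference
print(round(r_from_t(t, n1 + n2 - 2), 3))   # correlation-metric effect size
```

A results section that omits even these few numbers forces the meta-analyst to contact the authors or discard the study, which is exactly the obstacle described earlier.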

We also hope that more consumer researchers will undertake meta-analyses. Although consumer researchers usually do not study a given question in a programmatic way (Jacoby 1978), there are still many areas where meta-analyses might be possible and profitable. These topic areas include the effects of involvement on cognitive processing; repetition and a host of dependent variables such as recall and cognitive response; foot-in-the-door research; brand and store loyalty studies; research on the determinants of the search for information; and surveys of husband and wife influence. Surely, there are 10 or 20 other areas that we have not mentioned.

Finally, we must express our fear that meta-analysis will not have positive effects on our field and will instead die out as a short-lived fad. After picking off a few "plums" that most easily lend themselves to meta-analysis, consumer researchers may decide that the hard work outweighs the returns. However, to ignore any longer the value of quantifying the results of past research is a mistake consumer research cannot afford.


Cartwright, Dorwin (1973), "Determinants of Scientific Progress: The Case of Research on the Risky Shift," American Psychologist, 28 (March), 292-31.

Clarke, Darral G. (1976), "Econometric Measurement of the Duration of Advertising Effect on Sales," Journal of Marketing Research, 13 (November), 345-357.

Cooper, H. M. and Rosenthal, R. (1980), "Statistical Versus Traditional Procedures for Summarizing Research Findings," Psychological Bulletin, 87, 442-449.

Cotton, John L. and Cook, Michael S. (1982), "Meta-Analyses and the Effects of Various Reward Systems: Some Different Conclusions from Johnson et al.," Psychological Bulletin (July), 176-183.

Farley, John U., Lehmann, Donald R. and Ryan, Michael J. (1981a), "Generalizing from 'Imperfect' Replication," Journal of Business, 54 (October), 597-610.

Farley, John U., Lehmann, Donald R. and Ryan, Michael J. (1981b), "Patterns in Parameters of Buyer Behavior Models: Generalizing from Sparse Replication," unpublished working paper, Columbia University.

Glass, G. V. (1976), "Primary, Secondary, and Meta-Analysis of Research," Educational Researcher, 5, 3-8.

Glass, G. V. (1977), "Integrating Findings: The Meta-Analysis of Research," in L. Schulman (ed.), Review of Research in Education, Vol. 5, Itasca, IL: Peacock.

Glass, G. V. (1980), "Summarizing Effect Sizes," New Directions for Methodology of Social and Behavioral Science, 5, 13-32.

Glass, G. V. and Hakstian, A. R. (1969), "Measures of Association in Comparative Experiments: Their Development and Interpretation," American Educational Research Journal, 6 (May), 403-14.

Glass, G. V., McGaw, Barry and Smith, Mary Lee (1981), Meta-Analysis in Social Research, Beverly Hills: Sage Publications.

Green, Paul E., Carroll, J. Douglas and DeSarbo, Wayne S. (1978), "A New Measure of Predictor Variable Importance in Multiple Regression," Journal of Marketing Research, 15 (August), 356-60.

Greenwald, Anthony G. (1976), "An Editorial," Journal of Personality and Social Psychology, 33, 1-7.

Hyde, Janet Shibley (1981), "How Large Are Cognitive Gender Differences?: A Meta-Analysis Using ω² and d," American Psychologist, 36 (August), 892-901.

Jacoby, Jacob (1978), "Consumer Research: A State of the Art Review," Journal of Marketing, 42 (April), 87-96.

Johnson, D. W., Maruyama, G., Johnson, R., Nelson, D. and Skon, L. (1981), "Effects of Cooperative, Competitive, and Individualistic Goal Structures on Achievement: A Meta-Analysis," Psychological Bulletin (January), 47-62.

LaTour, Stephen A. (1981a), "Effect Size Estimation: A Commentary on Wolf and Bassler," Decision Sciences (January), 136-41.

LaTour, Stephen A. (1981b), "Variance Explained: It Measures Neither Importance nor Effect Size," Decision Sciences (January), 150-60.

Miller, T. I. (1977), "The Effects of Drug Therapy on Psychological Disorders," unpublished Ph.D. Dissertation, University of Colorado.

Pollay, Richard W. (1979), "Lydiametrics: Applications of Econometrics to the History of Advertising," Journal of Advertising History, 1, 3-18.

Rosenthal, Robert (1978), "Combining Results of Independent Studies," Psychological Bulletin, 85 (December), 185-193.

Schwab, Donald P., Olian-Gottlieb, Judy D. and Henneman III, Herbert G. (1979), "Between-Subjects Expectancy Theory Research: A Statistical Review of Studies Predicting Effort and Performance," Psychological Bulletin, 86 (January), 139-147.

Sechrest, Lee and Yeaton, William (1981a), "Empirical Bases for Estimating Effect Size," in Reanalyzing Program Evaluations: Policies and Practices, R. F. Boruch, P. M. Wortman, and D. S. Cordray (eds.), Ann Arbor: University of Michigan Institute for Social Research.

Sechrest, Lee and Yeaton, William (1981b), "Estimating Magnitudes of Experimental Effects," unpublished manuscript, Institute of Social Research, University of Michigan, Ann Arbor, Michigan.

Stanley, Julian C. and Benbow, Camilla P. (1982), "Huge Sex Ratios at Upper End," American Psychologist, 37 (August), 972.

Sudman, Seymour and Bradburn, Norman M. (1974), Response Effects in Surveys: A Review and Synthesis, Chicago: Aldine.

Weinberg, Charles B. and Weiss, Doyle L. (1981), "On the Econometric Measurement of the Duration of Advertising Effects on Sales," unpublished working paper, University of British Columbia.

Weiss, Doyle L. and Windall, Pierre (1977), "The Effects of Specification Error and Temporal Data Aggregation on Distributed Lag Models of Advertising Sales Effectiveness: A Monte Carlo Approach," Proceedings, Sixth Annual AIDS Conference, Chicago.

Wilkie, William L. and Pessemier, Edgar A. (1973), "Issues in Marketing's Use of Multi-Attribute Attitude Models," Journal of Marketing Research (November), 428-441.

Yu, Julie and Cooper, Harris (forthcoming), "A Quantitative Review of Research Design Effects on Response Rates to Questionnaires," Journal of Marketing Research.