Source Effects in Persuasion Experiments: A Meta-Analysis

ABSTRACT - A meta-analysis of research findings is presented on the effects of communicator variables on attitude and behavior change from the 1950s to the present. On the average, 8 percent of the variance was explained by source variables.


Elizabeth J. Wilson (1987) ,"Source Effects in Persuasion Experiments: a Meta-Analysis", in NA - Advances in Consumer Research Volume 14, eds. Melanie Wallendorf and Paul Anderson, Provo, UT : Association for Consumer Research, Pages: 577.



Elizabeth J. Wilson, Pennsylvania State University

[The author thanks Prof. Arno J. Rethans for helpful comments and the College of Business Administration, Penn State University, for research support. Address correspondence to Elizabeth J. Wilson, College of Business, Penn State Univ., University Park, PA 16803.]




A review of the substantial body of the marketing, psychological, and communication research literature on source effects supports the conclusion that source variables, e.g., credibility, expertise, attractiveness, and similarity, influence the attitudes and behavior of target audiences. What are the relative effect sizes of various source variables? What laboratory and field conditions increase the amount of variance explained by source variables? Answers to these questions will help serve as benchmarks in interpreting future research findings on source effects.


Hunter, Schmidt, and Jackson (1982) describe knowledge cumulation as a two-step process. First, a cumulation of results across studies is necessary to establish facts. Second, theories should be formulated to place the facts into a coherent and useful form (p. 10). Meta-analysis accomplishes the first step by averaging results across source effect studies in order to explain the observed variation, if any, in measured responses of subjects. Research on source variables lends itself well to meta-analysis because many empirical studies have been done over the past 30 years. Narrative reviews have been done (Sternthal, Phillips, and Dholakia 1978), and a quantitative review is needed.

Glass, McGaw, and Smith (1981) identify two categories of research characteristics to consider as independent variables in meta-analysis. First, substantive characteristics pertain to what was measured, i.e., beliefs, attitudes, intention, or behavior of respondents. For example, are effect sizes significantly different for measured beliefs versus attitudes? Studies can be grouped according to what source effect was manipulated, i.e., expertise, similarity, or attractiveness. Are effect sizes significantly larger for one subject group versus another? Second, method characteristics (lab versus field, students versus non-students as subjects) become independent variables in the meta-analysis. The dependent variable in the meta-analysis is the size of the statistical effect on the subject due to the source of a persuasive communication. Effect size is measured by ω², which is defined as the amount of total variance explained by the source manipulation and is computed as in Hays (1973).
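For a one-way design, Hays' ω² can be recovered directly from a reported F statistic, its between-groups degrees of freedom, and the total sample size. The sketch below is illustrative only; the function name and the example numbers are assumptions, not values from the studies analyzed here.

```python
def omega_squared(f_stat: float, df_between: int, n_total: int) -> float:
    """Estimate Hays' omega-squared (proportion of total variance
    explained) for a one-way ANOVA.

    Algebraically equivalent to Hays' (1973) form:
    (SS_between - df_between * MS_within) / (SS_total + MS_within).
    """
    num = df_between * (f_stat - 1.0)
    # Truncate negative estimates (F < 1) at zero, as is conventional.
    return max(0.0, num / (num + n_total))

# Hypothetical manipulation: F = 4.0 on 2 and 57 d.f. (N = 60 subjects)
print(round(omega_squared(4.0, 2, 60), 3))  # about .091 of variance explained
```

This from-F form is convenient for meta-analysis because published articles often report only the F statistic and degrees of freedom, not the full sums-of-squares table.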


Thirty-one articles on source effects from psychology and marketing journals contained information on 96 manipulations for which ω² could be computed. Although 96 effect size data points may seem small, the present research domain is narrow compared to other meta-analyses (e.g., Peterson, Albaum, and Beltramini 1985), and more specific information is gained.


Four hypotheses were specified based on the research questions posed above. Rationales are stated briefly.

H1: Effect sizes in studies with scaled responses as the dependent variable (measured beliefs or attitudes) will be larger than effect sizes in studies with overt behavior as the dependent variable. Overt behavior usually involves financial or time commitments that the subject may be unwilling to make, compared with checking a point on a scale.

H2: An attractiveness manipulation will yield larger effect sizes than an expertise manipulation, which in turn will yield larger effect sizes than a similarity manipulation. Attractiveness is an inherent characteristic of the source and is readily perceived by the subject. Expertise and similarity are ordered on the basis of Busch and Wilson's (1976) findings.

H3: Effect sizes in lab studies will be larger than those obtained in the field. Lab studies offer more control, so obtained effects may be larger.

H4: Effect sizes in studies using students as subjects will be larger than those in studies using non-students. Students are more likely to be used in the lab, while consumers are more likely to be contacted in the field.


H1: Marginally supported. Mean effect sizes (ω²) for the four levels of response were: beliefs (.12), attitudes (.07), intention (.07), and behavior (.08); F = 2.58, d.f. = 3/92, p < .10. We conclude that source effect sizes are generally homogeneous and comparable on this dimension.

H2: Not supported. There was no significant difference in the mean effect sizes for the three source types, although the means were in the hypothesized direction: attractiveness (.11), expertise (.086), and similarity (.07). We conclude that it is acceptable to combine source effect sizes over this dimension to obtain an average effect.

H3: Marginally supported. The mean effect size for lab studies was .10, while that for field studies was .07 (F = 3.16, d.f. = 1/94, p < .10).

H4: Supported. The mean source effect size in studies with students as subjects was .10, versus .06 for nonstudent subjects (F = 5.56, d.f. = 1/94, p < .05).

On the average, the source manipulation explained eight percent of the total variance. This is consistent with Sawyer and Ball's (1981) observations of effect sizes in behavioral research studies. Researchers in the area of persuasion may find this benchmark useful in evaluating future findings.


Busch, P. and D.T. Wilson (1976), "An Experimental Analysis of A Salesman's Expert and Referent Bases of Social Power in the Buyer-Seller Dyad," Journal of Marketing Research, 13 (February), 3-11.

Glass, G., McGaw, B. and M. Smith (1981), Meta-Analysis in Social Research, Beverly Hills, CA: Sage Publications.

Hays, William (1973), Statistics for the Social Sciences, 2nd ed., New York: Holt, Rinehart and Winston.

Remaining references available on request.


