Computer-Assisted Print Ad Evaluation
ABSTRACT - The evaluation of advertising effectiveness and its determinants has long been a concern of advertisers and advertising agencies. This paper reviews the limitations of past research on this topic and describes a computerized ad testing procedure which allows precise estimates of the effects of print ad characteristics, environmental or contextual factors, and their interaction.
Citation:
Raymond R. Burke and Wayne S. DeSarbo (1987) ,"Computer-Assisted Print Ad Evaluation", in NA - Advances in Consumer Research Volume 14, eds. Melanie Wallendorf and Paul Anderson, Provo, UT : Association for Consumer Research, Pages: 93-95.
ADVERTISING EFFECTIVENESS RESEARCH

Advertising researchers have investigated the relationship between various print ad characteristics and measures of advertising effectiveness for nearly 70 years. A wide variety of mechanical and content characteristics have been considered. The mechanical aspects include ad size, number of colors, proportion of illustrations to copy, the absence of borders (bleed), and type size (Diamond 1968; Holbrook and Lehmann 1980; Valiente 1973). The content factors include message appeal (e.g., status, quality, fear, and fantasy; Holbrook and Lehmann 1980), attention-getting techniques (e.g., free offers, pointers, and women; Hanssens and Weitz 1980), and psycholinguistic variables (e.g., product or personal reference in headline, interrogative or imperative headline; Rossiter 1981), among others.

The research cited above has provided a number of insights into the effects of executional factors on print ad performance (see the review by Percy 1983). However, several methodological limitations plague most of these studies.

First, they tend to define "advertising effectiveness" in terms of just one or two measures, such as ad recognition, recall, or inquiry generation. They have not examined advertising effectiveness in a thorough, multidimensional manner, involving constructs spanning the entire communication and decision process (i.e., exposure, attention, interest, comprehension, beliefs, preference, and choice). Without such a complete assessment, the value of the research results for advertising design decisions is extremely limited. This is a relevant concern given that various aspects of consumer response involve quite different psychological processes of memory and information processing, which can be differentially affected by the various ad factors (cf. McGuire's, 1978, discussion of the "compensation postulate").

Second, the majority of print advertising effectiveness studies have used data from commercial organizations which measure consumer responses to ads that have actually appeared in various magazines. As such, ad characteristics are confounded with uncontrolled and unmeasured contextual factors (e.g., type of magazine, time and place of exposure, product relevance to the consumer), so it is difficult to evaluate the separate effects of these characteristics on effectiveness. Only a few studies have employed laboratory testing procedures which control for these "threats" to internal validity.

Third, most of the research methodologies which relate a dependent variable (effectiveness) to a set of independent variables (print ad characteristics) using multiple regression, AID, etc., suffer from problems of structural multicollinearity in the set of independent variables measured (e.g., Diamond 1968; Assael, Kofron, and Burgi 1967). Because these measurements are extracted mainly from actual print ads, there is no control over the mechanical and content characteristics and their interrelationships. For example, the total size of the ad may be correlated with the number of words of copy.
Therefore, precise main effect estimates of each independent variable's contribution to print ad effectiveness are impossible. In some cases, researchers have used factor analysis to reduce the set of ad characteristics to a smaller, uncorrelated set of variables. The factor scores are then related to effectiveness (e.g., Moldovan 1985; Twedt 1952; Valiente 1973). However, effectiveness weightings for the derived factors cannot be used to predict the performance of new ads which are described in terms of the original set of correlated dimensions.

Finally, these studies typically use ads that are already at the final stages of testing and are therefore likely to evoke favorable responses. Consequently, there may be relatively little variation in some important ad characteristics (cf. Stewart and Furse 1986). For example, advertisers may not use certain ad formats with product categories that are known to be incompatible. Therefore, the data analysis will not reveal the true interactive effect of format and product category on ad effectiveness.

COMPUTERIZED AD GENERATION AND TESTING

An alternative approach is to experimentally manipulate print ad characteristics and measure consumer responses in a laboratory setting (e.g., Mitchell 1986; Petty, Cacioppo, and Schumann 1983). This gives the researcher flexible control over the composition and presentation of ads, overcoming many of the limitations noted above. Unfortunately, the production of stimulus materials for advertising experiments is typically costly and time consuming, restricting the scope of experiments to the investigation of just a few ad factors.

We have developed a computerized procedure for the generation and testing of print advertisements which avoids the high production costs and time required to physically prepare materials for advertising experiments and allows the measurement of a broad range of consumer responses. This methodology permits precise control over stimulus material presentation, can perform automatic randomization or counterbalancing of non-experimental factors, and allows unobtrusive measurement of ad exposure times and information acquisition behavior. Subjects' responses are automatically coded and stored by the computer.

This procedure has evolved from methodology which is widely used in cognitive and social psychological research. For example, Ronis, Baumgardner, Leippe, Cacioppo, and Greenwald (1977) report on a computer-controlled system for studying the persuasiveness of verbal stimuli. Their time-sharing computer system sequentially presents messages (from a library of persuasive communications) on a video display and records opinions according to a Latin square design. This standardizes the experimental procedure, minimizes the interaction between the subject and human experimenter, and allows within-subjects estimates of the main effects of manipulations. Similar procedures have been used in marketing contexts to measure the impact of competitive advertising and information processing objectives on consumer memory for advertisements (Burke and Srull 1985), and to study the effects of various advertising claims on message comprehension, brand-attribute beliefs, and evaluation (Burke, DeSarbo, Oliver, and Robertson 1986).
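The paper does not reproduce the counterbalancing algorithm itself. As one illustration of the kind of rotation involved (the rationale is discussed under Experimental Design below), the following is a minimal Python sketch of a cyclic Latin square assignment of brands, ad treatments, and page positions across subjects; all brand names, ad formats, and page numbers are hypothetical, and this is not the authors' actual software.

    def latin_square(k):
        """Cyclic k x k Latin square: row i is (i, i+1, ..., i+k-1) mod k."""
        return [[(i + j) % k for j in range(k)] for i in range(k)]

    def assign_conditions(subject_id, brands, treatments, positions):
        """Counterbalance ad treatments and page positions across brands.

        Each subject sees every brand once, every treatment once, and every
        position once. Treatments rotate across successive subjects and
        positions rotate across successive blocks of k subjects, so over
        k * k subjects each brand appears with every treatment at every
        position equally often (brand, treatment, and location are not
        pairwise confounded in the pooled sample).
        """
        k = len(brands)
        assert len(treatments) == k and len(positions) == k
        square = latin_square(k)
        treat_row = square[subject_id % k]
        pos_row = square[(subject_id // k) % k]
        return [{"brand": brands[b],
                 "treatment": treatments[treat_row[b]],
                 "position": positions[pos_row[b]]}
                for b in range(k)]

    # Hypothetical example: 4 brands x 4 ad formats at 4 magazine page locations.
    for cell in assign_conditions(
            subject_id=7,
            brands=["Brand A", "Brand B", "Brand C", "Brand D"],
            treatments=["color/bleed", "color/border", "b&w/bleed", "b&w/border"],
            positions=[3, 8, 13, 18]):
        print(cell)

Any scheme of this form keeps the "electronic magazine" natural for each individual subject (one ad per brand) while still yielding within-subject estimates of the treatment effects when pooled across subjects.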
Computer Implementation

A combination of hardware (IBM PC/AT microcomputer, AT&T Targa graphics controller, Sony analog RGB monitor) and software is used to create an "electronic magazine" on a video display, where computer-generated print ads are combined with articles and other editorial material. The illustrations and other visual elements (e.g., logos, trademarks) which appear in the experimental ads can come from existing photographs, print ads, or television commercials, and are scanned into the computer using a video digitizer or "frame grabber" (see Robertson, 1986, for product information). A graphics editor is then used to combine the images, headlines, and body copy in standard ad layouts (Book and Schick 1984). The exact size and content of the ads depend on the specific ad factors under investigation.

Experimental Design

Consumers differ substantially in terms of their prior knowledge, preference for alternatives, personality, etc., and these variables can affect their responsiveness to advertising (e.g., Buchanan 1964). When the subject variance is high, a within-subjects design is required to have sufficient statistical power to detect modest effects. However, if a completely within-subjects design is used in an advertising experiment, it creates a rather unnatural magazine, where a number of different ads in a variety of formats appear for the same brand of product. It is therefore desirable to use a Latin square or confounded block design (Kirk 1968; Winer 1971) in which ad characteristics are manipulated across a set of ads for different brands appearing in different locations in a single magazine. Across subjects, the magazine contains different pairings of brands, ad characteristics, and locations. This permits the computation of within-subject estimates of the main effects of ad factors, brands, and locations, and can also provide partial information on their interactions. When advertising studies have used designs allowing the measurement of interactions, these effects are often found to be significant (see, e.g., Edell and Staelin 1983; Petty, Cacioppo, and Schumann 1983).

The computer procedure allows the investigation of a number of user-specified mechanical and content characteristics mentioned previously. In addition, contextual and process variables can be studied, including the recency and frequency of ad exposure, level and nature of competitive advertising, clutter (e.g., total volume of advertising, ad-to-editorial ratio), ad placement, and editorial context. The effects of alternative headlines, brand claims, and illustrations can also be tested if the experimental ads are for brands from the same product class.

Procedure

Subjects are asked to participate in a research study to investigate how individuals react to the electronic distribution of magazines through computers (cf. Ray and Sawyer 1971). After completing a computer-administered questionnaire containing questions on demographics, magazine readership, product familiarity and usage, and attitudes towards computer usage, the subject is asked to browse through a magazine displayed on the computer screen as if it were an ordinary print magazine. The "Page Up" and "Page Down" keys of the computer keyboard are used to turn pages. "Home" turns to the cover of the magazine, "End" turns to the back page, and "Escape" leaves the magazine. The page presentation rate and exposure sequence are controlled by the viewer.
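Because page turns are under the subject's control, the process measures described below (ad exposure time, exposure frequency, and recency) can be derived directly from the stream of page-display events. The paper does not describe the logging code; the following is a minimal sketch, assuming a keyboard-driven viewer loop that calls page_shown() on every page turn, and it is not the authors' implementation.

    import time
    from collections import defaultdict

    class ExposureLog:
        """Unobtrusive per-page exposure log for a keyboard-driven page viewer."""

        def __init__(self):
            self.events = []                     # (page, timestamp) in viewing order

        def page_shown(self, page):
            """Record that a page was just displayed."""
            self.events.append((page, time.monotonic()))

        def finish(self):
            """Close the session and summarize exposure per page."""
            self.events.append((None, time.monotonic()))     # sentinel: subject exits
            total = defaultdict(float)
            freq = defaultdict(int)
            last_seen = {}
            for i, (page, t) in enumerate(self.events[:-1]):
                total[page] += self.events[i + 1][1] - t     # time until next page turn
                freq[page] += 1
                last_seen[page] = i                          # larger index = more recent
            return {p: {"seconds": total[p], "views": freq[p], "recency": last_seen[p]}
                    for p in total}

    # Usage: log = ExposureLog(); log.page_shown(1); ... ; measures = log.finish()

Summaries of this kind can then be merged with the recall, belief, and preference measures collected after browsing.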
Within the magazine, color and black-and-white print ads appear for a variety of different products and brands. Articles, excerpted from general-interest magazines, are interleaved with the ads. During the magazine review period, measures of ad exposure time, page exposure frequency, and the recency of exposure are unobtrusively recorded. After the subject has finished browsing through the electronic magazine, he/she is asked to complete a "Magazine Evaluation Questionnaire." Following this interpolated task, subjects are asked a number of questions about the experimental ads, including brand name recognition, brand-attribute recall, ad comprehension, brand beliefs, liking, and preference.

Analysis

A multivariate analysis of variance is first conducted to examine consumers' overall responsiveness to the manipulation of ad characteristics. This is followed by univariate analyses for each of the dependent variables. Clearly, the goals of print ads differ (e.g., creating brand recognition, communicating information, stimulating interest, etc.), and it is necessary for the advertiser to identify the impact of ad characteristics on those responses which are most relevant to the current goals. The estimated model allows for the determination of "optimal print advertisements" from the perspective of maximizing one or more (e.g., a convex combination) of the various dependent measures (cf. Diamond 1968). As in traditional conjoint analysis, simulators can be constructed to evaluate other print ads not utilized in the design. This methodological schema can be seen as input to a potential expert system for print advertisement design.
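The paper does not spell out the simulator computation. The sketch below illustrates the general idea under a simple additive main-effects assumption: estimated effects for each factor level are combined to predict each dependent measure for a candidate ad profile, and the profile maximizing a convex combination of the measures is selected. All factor names, levels, weights, and effect values are hypothetical, not results from the paper.

    from itertools import product

    # Hypothetical effect estimates: for each dependent measure, the estimated
    # deviation from the grand mean associated with each level of each ad factor.
    effects = {
        "recognition":   {"size":     {"full page": 0.40, "half page": -0.40},
                          "color":    {"four color": 0.25, "b&w": -0.25},
                          "headline": {"interrogative": 0.05, "declarative": -0.05}},
        "belief_change": {"size":     {"full page": 0.10, "half page": -0.10},
                          "color":    {"four color": 0.05, "b&w": -0.05},
                          "headline": {"interrogative": -0.15, "declarative": 0.15}},
    }
    grand_mean = {"recognition": 3.1, "belief_change": 2.4}

    def predict(profile):
        """Additive (main-effects only) prediction of each dependent measure."""
        return {m: grand_mean[m] + sum(effects[m][f][lvl] for f, lvl in profile.items())
                for m in effects}

    def best_profile(weights):
        """Return the factor-level combination maximizing a convex combination
        of the predicted dependent measures."""
        factors = {f: list(levels) for f, levels in effects["recognition"].items()}
        candidates = (dict(zip(factors, combo)) for combo in product(*factors.values()))

        def score(profile):
            pred = predict(profile)
            return sum(weights[m] * pred[m] for m in pred)

        return max(candidates, key=score)

    # Weight the response measures according to the campaign's current goals.
    print(best_profile({"recognition": 0.7, "belief_change": 0.3}))

A simulator of this form can also score new ad descriptions that were not included in the experimental design, provided they are expressed on the same factors and levels.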
LIMITATIONS AND FUTURE DIRECTIONS

While this approach circumvents many of the internal validity problems of past research, it raises a complementary set of questions about external validity. Subjects find the computerized procedure novel and interesting, and are presumably highly involved in the task. Print has often been characterized as a comparatively "high involvement" medium (Krugman 1965, 1977), so one might expect the levels of ad attention in the laboratory to be similar to those for ads appearing in real magazines or newspapers. However, individuals in the laboratory setting are aware that their behavior is being monitored. Thus, they may attend to a greater proportion of ads, attempt to memorize the materials, or behave in other "unnatural" ways in response to perceived demand characteristics. Furthermore, individuals can flip through the pages of a real magazine, tear out ads and articles, put the magazine on a coffee table and return to it in a few days, etc. In order to explore the effects of these differences on the external validity of findings from the computer procedure, it will be necessary to compare these results with data collected in more naturalistic exposure situations. For example, laboratory data on the effectiveness of individual ad characteristics can be used to predict the performance of advertisements which have been field-tested by commercial services.

Another limitation of the computer testing approach is that it requires a computer workstation for each subject to be run concurrently. An alternative approach is to generate printed output using a thermal or inkjet printer or color film recorder for group-administered paper-and-pencil studies. This allows the researcher to run a number of subjects simultaneously. However, it becomes much more difficult to collect the process measures of ad exposure time, frequency, and page sequence.

If the researcher is interested in studying the effects of a large number of ad characteristics and their interaction with subject variables, then it may be desirable to use a completely within-subjects design (as with conjoint analysis). Of course, if subjects are shown a large number of product advertisements that vary only slightly, then brand beliefs and evaluations should be measured at the time of ad exposure, rather than at the end of the exposure sequence, to reduce the possible confusion between ads. If individual difference variables are found to interact with executional variables, this would suggest that ads might be tailored to the response functions of specific market subsegments.

With the recent availability of low-cost, high-resolution graphics equipment, computerized experimentation will undoubtedly see broader application in marketing research. For example, Reiling (1986) describes a system for the design and evaluation of product packaging. Packaging graphics are first laid out on a graphics workstation, modeled in three dimensions, and then displayed on a simulated store shelf among competitive products. At this point, the package is tested on visibility, apparent size, and the visual dominance of graphic elements. The same methodology could be used for examining consumers' responses to a variety of visual stimuli, including point-of-purchase displays, sale signs, shelf cards, shelf "frontage," and sales people.

REFERENCES

Assael, Henry, John H. Kofron, and Walter Burgi (1967), "Advertising Performance as a Function of Print Ad Characteristics," Journal of Advertising Research, 7 (June), 20-26.

Book, Albert C. and C. Dennis Schick (1984), Fundamentals of Copy and Layout, Chicago: Crain.

Buchanan, Dodds I. (1964), "How Interest in the Product Affects Recall: Print Ads vs. Commercials," Journal of Advertising Research, 4, 9-14.

Burke, Raymond R., Wayne S. DeSarbo, Richard L. Oliver, and Thomas S. Robertson (1986), "Deception By Implication: An Experimental Investigation," working paper, Marketing Department, The Wharton School, University of Pennsylvania.

Burke, Raymond R. and Thomas K. Srull (1985), "Competitive Interference and Consumer Memory for Advertising," Working Paper #85-016, Marketing Department, The Wharton School, University of Pennsylvania.

Diamond, Daniel S. (1968), "A Quantitative Approach to Magazine Advertisement Format Selection," Journal of Marketing Research, 5 (November), 376-386.

Edell, Julie A. and Richard Staelin (1983), "The Information Processing of Pictures in Print Advertisements," Journal of Consumer Research, 10 (June), 45-61.

Hanssens, Dominique M. and Barton A. Weitz (1980), "The Effectiveness of Industrial Print Advertisements Across Product Categories," Journal of Marketing Research, 17 (August), 294-306.

Holbrook, Morris B. and Donald R. Lehmann (1980), "Form versus Content in Predicting Starch Scores," Journal of Advertising Research, 20 (August), 53-61.

Kirk, Roger E. (1968), Experimental Design: Procedures for the Behavioral Sciences, Belmont, CA: Brooks/Cole.

Krugman, Herbert E. (1965), "The Impact of Television Advertising: Learning Without Involvement," Public Opinion Quarterly, 29 (Fall), 349-356.

Krugman, Herbert E. (1977), "Memory Without Recall, Exposure Without Perception," Journal of Advertising Research, 17 (4), 7-12.

McGuire, William J. (1978), "An Information Processing Model of Advertising Effectiveness," in Behavioral and Management Sciences in Marketing, eds. H. L. Davis and A. J. Silk, New York: Ronald, 156-180.
Mitchell, Andrew A. (1986), "The Effect of Verbal and Visual Components of Advertisements on Brand Attitudes and Attitude Toward the Advertisement," Journal of Consumer Research, 13 (June), 12-24.

Moldovan, Stanley E. (1985), "Copy Factors Related to Persuasion Scores," Journal of Advertising Research, 24 (January), 16-22.

Percy, Larry (1983), "A Review of the Effect of Specific Advertising Elements upon Overall Communication Response," Current Issues and Research in Advertising, Vol. 6, 77-118.

Petty, Richard E., John T. Cacioppo, and David Schumann (1983), "Central and Peripheral Routes to Advertising Effectiveness: The Moderating Role of Involvement," Journal of Consumer Research, 10 (September), 135-146.

Ray, Michael L. and Alan G. Sawyer (1971), "Repetition in Media Models: A Laboratory Technique," Journal of Marketing Research, 8 (February), 20-29.

Reiling, Lynn G. (1986), "Concept-to-Mechanical System Pares Product Package-Design Time Frame," Marketing News, August 1, 2.

Robertson, Barbara (1986), "Micro-Based Video Development Surges," Computer Graphics World, January, 37-45.

Ronis, David L., Michael A. Baumgardner, Michael R. Leippe, John T. Cacioppo, and Anthony G. Greenwald (1977), "In Search of Reliable Persuasion Effects: I. A Computer-Controlled Procedure for Studying Persuasion," Journal of Personality and Social Psychology, 35 (8), 548-569.

Rossiter, John R. (1981), "Predicting Starch Scores," Journal of Advertising Research, 21 (5), 63-68.

Stewart, David W. and David H. Furse (1986), Effective Television Advertising: A Study of 1000 Commercials, Lexington, MA: D. C. Heath.

Twedt, Dik Warren (1952), "A Multiple Factor Analysis of Advertising Readership," Journal of Applied Psychology, 36 (3), 207-215.

Valiente, Rafael (1973), "Mechanical Correlates of Ad Recognition," Journal of Advertising Research, 13, 13-18.

Winer, B. J. (1971), Statistical Principles in Experimental Design, 2nd Ed., New York: McGraw-Hill.
Authors
Raymond R. Burke, University of Pennsylvania
Wayne S. DeSarbo, University of Pennsylvania
Volume
NA - Advances in Consumer Research Volume 14 | 1987