Levels-Of-Processing, Perceptual Bias, and Comparison Advertising

John G. Myers, University of California, Berkeley
John G. Myers (1979), "Levels-Of-Processing, Perceptual Bias, and Comparison Advertising," in NA - Advances in Consumer Research Volume 06, eds. William L. Wilkie, Ann Arbor, MI: Association for Consumer Research, Pages: 95-98.

Advances in Consumer Research Volume 6, 1979      Pages 95-98


This paper reviews three Frontiers in Advertising studies presented at the Annual ACR Conference, Miami Beach, Florida, October 26-29, 1978 by: Joel Saegert, "A Demonstration of Levels-of-Processing Theory in Memory for Advertisements"; Peter H. Webb, "Perceptual Discrepancies in the Time Decision and Number of Television Commercials"; and Edwin C. Hackleman and Subhash C. Jain, "An Experimental Analysis of Attitudes Toward Comparison and Non-Comparison Advertising." The review identifies common themes and basic similarities and differences, provides critical commentary on each, and concludes with a brief section on implications and directions for future research.

The three studies represent recent work on advertising effectiveness research from what might be called a consumer information processing perspective. Saegert's focus is on testing some aspects of a new theory of learning called "levels-of-processing." Webb's work is motivated by a general advertising problem referred to as "clutter," and Hackleman and Jain focus on a second advertising problem--the relative effectiveness of comparison versus non-comparison advertising. An interesting point of departure is the nature of the criterion variable used in each case. In both the Saegert and Webb studies, cognitive or perceptual levels are of central interest. Saegert uses recognition and recall measures and Webb uses measures of commercial time and number of commercials recalled. Hackleman and Jain concentrate on affective or evaluative levels and use a criterion best described as preference for alternative types of advertising stimuli.

Another common theme is a basic similarity in methodological perspective. All three studies are causal/experimental research of some kind and test main and interaction effects of one or more treatment factors. Data collection is handled by manipulating stimulus materials (advertising copy) and/or the subject pool (e.g., sex), introducing various types of controls, and exposing subjects (consumers) to test materials in "laboratory" or simulated natural environment situations. Data analysis is largely confined to comparing group mean differences and analysis of variance. There is thus great methodological consistency across the studies even though the designs differ in important ways. None however is representative of other forms of consumer information processing research such as protocol research or tests of the predictive power of a preference function or theory (e.g., expectancy-value, weighted beliefs, conjoint analysis, constant sum paired comparisons). Direct rather than unobtrusive measures (e.g., response latency, chronometrics, facial action coding) are used to get at recall and preference in all of the studies.

The substantive questions addressed in each case differ, and there is an interesting difference in research perspective. The recent Commission on the Effectiveness of Research and Development for Marketing Management [For a review article on the Commission's work, see John G. Myers, Stephen A. Greyser, and William F. Massy, "The Effectiveness of Marketing's 'R. & D.' for Marketing Management," Journal of Marketing, forthcoming, January 1979.] proposed a classification in which all marketing research can be considered either: (1) Basic research; (2) Problem-oriented research; or (3) Problem-solving research. Saegert's study is probably close to what the Commission would call basic research, whereas Webb's and Hackleman and Jain's work appears more "problem-oriented." Some additional comments on this theme are given at the end of the review.

THE SAEGERT STUDY
The major contribution of this work is to test some aspects of "levels-of-processing" theory attributable to Craik and Lockhart and recognized for its potential in understanding processes which underlie advertising recall by Olson. The basic argument is that memory is a function of the "level" to which material is processed and is to some degree independent of the amount of repetition or rehearsal of the stimulus. As Saegert points out, a precise definition of "level" has not been specified, but in general it refers to the degree to which material is subjected to elaboration in relating it to a viewer's prior experience and knowledge. Although not discussed in the paper, the implicit assumption appears to involve a linear relation between the amount of elaboration and the amount of memory or recall. Readers should not have to reflect hard to appreciate the underlying theoretical controversy here between cognitivists who argue for the quality of processing as the major determinant of learning, and s-r behavioralists more prone to argue for the quantity of rewarding or nonrewarding stimuli and who emphasize over-time learning, trials, repetition, decay, and so on.

The operationalization of levels-of-processing is interesting and reveals much about the study. Thirty male and female adults were asked to answer questions about forty brands (from magazine advertisements). The "deep processing group" answered questions concerning their personal experiences with the brands such as, "Do you have this brand in your home?" The "shallow processing group" were questioned on formal aspects of the brand name: "Is the brand name in script letters?" Deep and shallow questions were reversed for the first and second half of the subjects. A recall measure (write down as many of the brand names as you can remember) and a recognition measure (identify test names from a list of 80 possibilities) were used as the two main criterion variables. Analysis involved a 2 x 2 factorial with type of question as the repeated measures variable. In both recall and recognition, the differences between the deep and shallow question items were highly significant. There were no significant order effects or interaction effects between order and question type.
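Saegert's within-subject contrast can be illustrated with a minimal sketch. The recall counts below are hypothetical stand-ins, not Saegert's data, and the paired t-test shown is the one-factor analogue of the within-subject question-type contrast in his 2 x 2 analysis. Only the Python standard library is assumed.

```python
# Each subject contributes a recall score for deep-question items and for
# shallow-question items; a paired t-test on the difference scores tests
# the repeated-measures contrast. Data are hypothetical, not Saegert's.
import math
import statistics

# Hypothetical brand-name recall counts for six subjects
deep    = [8, 9, 7, 10, 8, 9]   # items processed via personal-experience questions
shallow = [4, 6, 3, 5, 4, 6]    # items processed via surface-feature questions

diffs = [d - s for d, s in zip(deep, shallow)]
mean_diff = statistics.mean(diffs)
se = statistics.stdev(diffs) / math.sqrt(len(diffs))
t = mean_diff / se

# Critical value for a two-tailed test, alpha = .05, df = 5
T_CRIT = 2.571
print(f"t({len(diffs) - 1}) = {t:.2f}; significant: {abs(t) > T_CRIT}")
```

With real data one would of course fit the full mixed design, question type within subjects and presentation order between subjects, rather than this one-factor reduction.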

This is a nice straightforward piece of work in which an important new theory is tested. One can say that the null hypothesis of no significant effects of processing level is rejected at a highly significant level. We have here some new insights into that obtuse construct known as "involvement" in understanding advertising effects and learning. [In an early formulation of a theory of advertising effects, personal involvement and stimulus relevance were considered the two principal processes associated with a perceptual filter conditioning the amount of attention or "psychological tension" aroused. See John G. Myers, Consumer Image and Attitude (Berkeley: Institute of Business and Economic Research, 1968), pp. 89-92.] The study stimulates a whole series of questions pertaining to the construct. Is the function linear or nonlinear? Is "personal elaboration" the same thing as Bogart's "connections"? Is Krugman's "low involvement" the same thing as "shallow processing"? Could deep processing lead to the wrong kind of learning (for example, if one equates deep processing with "counter-arguing")? How might distraction or refutational advertising play a role in understanding these memory effects? What is the impact of deep processing on attitude and attitude change or on subsequent brand choice? Theoretical and empirical work addressed to any of these questions would extend the Saegert study, and could provide useful new insights into the nature of the process. From the practitioner's viewpoint, the recommendation that creative people should encourage deep processing and personal elaboration is reasonable. It will not, however, come as news to many creative directors!

THE WEBB STUDY
The term "clutter" has been used to refer to all the nonprogram elements on television including commercials, promotional announcements, public-service announcements, billboards, station breaks, and credits. It is a problem for advertisers because it potentially decreases the effectiveness of individual messages, for the broadcaster because it increases operating costs and absorbs program time, and for the viewer because frequent and lengthy program interruptions can generate frustration and interfere with program content.

Webb's paper reports on some special aspects of the clutter question, specifically whether or not consumers tend to exaggerate the amount of time actually devoted to commercials, and/or exaggerate the actual number of commercials aired in a particular time block. The paper is a spin-off of the clutter research project undertaken by Ray and Webb (1974, 1976) [See Peter H. Webb and Michael L. Ray, "The Effects of Television Clutter: An Experimental Investigation," Marketing Science Institute Project Description (1974); and Michael L. Ray and Peter H. Webb, "Experimental Research on the Effects of TV Clutter: Dealing with a Difficult Media Environment," Report No. 76-102, Marketing Science Institute, April 1976.] sponsored by the Marketing Science Institute, and readers will find it useful to refer to the original references in understanding the broader issues involved. This article is mostly concerned with reporting the results of two questions that were asked during that project: How many commercials were there? (consider this the Number dimension); and How much time was devoted to commercials? (call this the Time dimension).

Motivation for this work stems from a general interest in learning more about the clutter problem (even though the authors report earlier that many advertisers consider it a "nonproblem"), and the more basic question of perceptual distortion generally. There is much evidence from surveys and public opinion polls that consumer beliefs are often at variance with "objective" facts, and Webb is interested in exploring this phenomenon in the case of perceptions of time and number of commercials. His literature review concerns evidence on whether the amount of time devoted to commercials and the actual number of commercials has actually increased (in general, the answer to both questions is yes, although the evidence is skimpy--standard commercial minutes in nonprime time, for example, increased from 9 1/2 minutes in 1952 to 16 minutes in 1978), and evidence on possible determinants of perceptual distortion with respect to time. It is difficult to believe that experimental psychologists and others have not studied perceptual time distortion, as Webb implies, and that there is not some evidence on number distortion which is not covered in the review. There is, for example, a vast literature on "selective perception" in social psychology, and indeed the concept of "perceptual bias" is a part of most comprehensive models of buyer behavior. The germ of Webb's theoretical explanation is based on recent dissertations concerning stimulus complexity as the major explanatory variable. The more complex the stimulus, the greater the tendency to exaggerate the amount of time spent in studying it. He acknowledges that other factors such as prior expectations of the viewer, relevance of the stimulus, affect (pleasant/unpleasant stimuli) and novelty may contribute to an explanation of why distortions take place.

A brief review of the major results from the two studies reported in his paper is given below:


As can be seen, there appears to be a consistent pattern across the two studies in viewer tendencies to overestimate the amount of time devoted to commercials and to underestimate the actual number of commercials that were shown. What is more difficult to explain are the distortion patterns within and across the studies. In Study I, viewers more than doubled their estimate of actual time in the low clutter condition. In effect, the distortion was much greater in the low clutter than in the high clutter condition, and this would appear not to support the complexity hypothesis. Also, the relative degree of accuracy in number estimation across the two studies is disturbing. In Study I viewers were much more likely to give accurate estimates, while in Study II these estimates were considerably below the actual number.
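The over- and underestimation pattern can be summarized with a simple distortion index: the ratio of the perceived quantity to the actual one. A minimal sketch follows, using hypothetical figures chosen only to mirror the reported direction of the effects; they are not Webb's data.

```python
# Distortion ratio: perceived / actual. Ratios above 1.0 indicate
# overestimation; below 1.0, underestimation. Figures are hypothetical
# stand-ins chosen to mirror the reported pattern: commercial time
# exaggerated, number of commercials underestimated.
def distortion(perceived: float, actual: float) -> float:
    """Return the ratio of a perceived quantity to its actual value."""
    return perceived / actual

# Low-clutter condition: viewers "more than double" the actual commercial time
time_ratio   = distortion(perceived=9.0, actual=4.0)    # minutes
number_ratio = distortion(perceived=6.0, actual=8.0)    # commercials

print(f"time distortion:   {time_ratio:.2f}x")   # > 1: time overestimated
print(f"number distortion: {number_ratio:.2f}x") # < 1: count underestimated
```

Reporting distortion as a ratio rather than a raw difference makes the low- and high-clutter conditions directly comparable despite their different baselines.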

Webb's courage in reporting on two studies of similar things in the same paper is to be admired, and the cross-methods/cross-traits spirit which underlies it is encouraging. The paper above all raises interesting and pertinent questions associated with perceptual distortion and clutter, even though some results appear inconsistent and contradictory. The tendency to present hypotheses post hoc, to choose one type of analysis for one study (t-tests) and a different type for the other study (ANOVA), and to skirt the problems of operational pretesting of constructs like stimulus complexity should be avoided. Numerous competitive explanations for the results can be put forward. The accuracy of number estimation in Study I might be attributed to the greater ease of estimating number rather than time (but this is not well supported in the second study). Differences in prior expectations, experience, or amount of knowledge might explain some variance and suggest the need for covariance analysis. Sex differences in the samples (Study II was an all-female sample) may have been operating. The decision to keep as closely as possible to a simulated natural environment during the exposure situation (viewers could talk to each other, walk around, and so on), although normally a positive factor in raising external validity (where interest is focused on the effectiveness of the commercials), in this case may have led to other confounding effects (e.g., between those paying close attention and those merely guessing).

THE HACKLEMAN AND JAIN STUDY
This study by Hackleman and Jain is interesting from the viewpoint of comparisons with the previous two because of its emphasis on experimental design and its comparative lack of attention to theoretical constructs, or theory development. For those interested in measurement and experimental design, this is an excellent exposition of the use of split-plot repeated measures factorials, latin squares, factor analysis, and randomization techniques for testing and controlling various confounding factors. The authors have used exceptional care and painstaking effort in these aspects of their work. The principal limitations of the study relate to a lack of attention to theory and a tendency to over-generalize from the study results. Some generalizations do not in fact appear to be supported by the results presented in the paper.

The central question which motivates this work is that of the "effectiveness" of comparison advertising versus noncomparison advertising. The operational measure of effectiveness chosen can be called "preference for an advertisement." One can argue that this is not a very good measure if the viewpoint is that of the advertiser interested in building awareness, comprehension, or favorable attitude for his brand. Liking for an ad and favorable attitudes for the brand advertised are not the same thing. The other focus of the study is on possible determinants of relative effectiveness beyond advertising type, specifically sex and product, and the interactions between these three factors in predicting effectiveness. It might be said that the situation which motivates the study is that advertisers generally don't like comparison advertising, the FTC is encouraging it, and we know very little about what consumers like or want!

The authors' literature review is skimpy. There has been much more work done on comparison advertising than reported here. Also, the paper would be strengthened by recourse to some theoretical perspective to explain possible effects. What is a "comparison advertisement" from the viewpoint of cognitive, affective, or motivational components on the receiver side or source and message components on the sender side? Why should comparison advertising be more or less effective than noncomparison advertising in terms say of counterarguing, distraction, refutation, and so on? What would congruity, balance, dissonance (consistency) theories, levels-of-processing, high- or low-involvement theories, complexity theories, expectancy-value, attribution, or any other theory predict about likely effects?

The real value of this paper is in studying the operational procedures used in implementing the split-plot design. Great care was taken in developing stimulus materials. Free response word association techniques were used to develop new brand names for the 12 products tested. Comparison ads were developed by the clever device of combining two noncomparison ads, and the locations of the two were rotated in test materials. A 10-item preference scale was developed by factor analyzing a larger bank and choosing items that appeared to have highest reliability and construct validity (it is not clear how these choices were made nor why they resulted in higher reliability and validity). In addition to the 2 x 2 x 12 factorial with 120 replications (subjects), further controls were incorporated by stimulus rotation and latin square designs. The authors may be trying to "kill a fly with a bulldozer," but their experimental methodology is well worth careful study and review.
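The order-counterbalancing idea behind the latin square controls can be sketched in a few lines. A cyclic latin square, in which each row shifts the stimulus sequence by one position, guarantees that every advertisement occupies every serial position exactly once across subject groups. This is an illustrative construction only, not the authors' exact rotation scheme.

```python
# A cyclic latin square of order n: row i is the stimulus sequence shifted
# by i positions, so every stimulus occupies every serial position exactly
# once across rows. One simple way to build the kind of order-counterbalancing
# control Hackleman and Jain describe.
def cyclic_latin_square(stimuli):
    n = len(stimuli)
    return [[stimuli[(i + j) % n] for j in range(n)] for i in range(n)]

ads = ["A", "B", "C", "D"]
square = cyclic_latin_square(ads)
for row in square:
    print(" ".join(row))
```

Each subject group would then view one row's ordering, so that any serial-position effect (fatigue, primacy, recency) is spread evenly across the test advertisements rather than confounded with any one of them.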

Some reported results unfortunately do not flow easily from the data. Consider, for example, the simplified recasting of their Table 5 reproduced below:


H (high) and L (low) in the table refer to the direction of preference in each case. It is difficult to conclude as the authors did that comparison ads are more "effective" in the case of shopping goods, although the tendency (if effectiveness is assumed to be liking for the ad) is certainly there. There is, nevertheless, one case in which the shopping good noncomparison ad was preferred, and another case in which the convenience good comparison ad was preferred. Another stated conclusion is that there were no differences in attitude toward comparison and noncomparison ads. Although an F-test does support this overall, the preceding table can be interpreted to mean that noncomparison ads are preferred in seven product cases and comparison ads preferred in five cases. The cleanest finding may be that sex appears to have an overall effect (women preferred all ads more than men), but did not discriminate preference for comparison and noncomparison ads. The implications or explanations of this phenomenon appear, however, not to be related to the principal motivations for the study.

Several other types of questions can be raised. Is the trichotomy of convenience, shopping, and specialty goods really useful here? Should not other criterion variables such as claim believability, comprehension, and so forth have been included given the nature of the controversy over comparison advertising? The preference measure chosen is only one of numerous possibilities, and probably not the best one. Summing items over a semantic scale assumes each item is equally important, and this may not be the case. One or two other methods to get at preference and to test construct validity could well be included. Effectiveness is a nebulous concept in advertising. Much depends on the advertiser's objectives. Neither of the major conclusions and recommendations--that firms marketing convenience and specialty goods should avoid comparison advertising, and that there is no support for the FTC's claim that comparison advertising provides consumers with more information--is a truly ironclad generalization which can be made from these results.

IMPLICATIONS AND DIRECTIONS FOR FUTURE RESEARCH
Possible extensions to each of the three studies were given implicitly or explicitly in the preceding sections. The most basic issue concerning research directions is the nature of the kind of research that needs to be encouraged. Marketing scholars too often appear faced with the dilemma of doing "great research on a trivial problem" versus doing "trivial research on a great problem" when the "problem" is specified in terms meaningful to the decision-maker or manager. Basic research is often equated with the first option and problem-oriented research with the second. Although the Commission study referred to earlier concluded that all forms of research (basic, problem-oriented, and problem-solving) were vital and important to the development of the field of marketing, there was general agreement that the problem-oriented category is the kind that needs encouraging. It is not clear that this is good advice if the consequence is essentially a fostering of "trivial research," nor that a desired synthesis of "great research - great problem" perspectives is an attainable goal. There are the seeds here for some interesting debate on these issues.