Advances in Consumer Research Volume 3, 1976      Pages 1-11

ACR PRESIDENTIAL ADDRESS

CONSUMER RESEARCH: TELLING IT LIKE IT IS

Jacob Jacoby, Purdue University

[This address is dedicated to my students, past and present, the people who taught me much of what I know.]

[Apologia. To the captive audience that witnessed my frustrated and somewhat less than adept attempt to compress this paper into a 25-minute address, my apologies. As has been pointed out to me by others, those who study information overload are often the least likely to recognize it in practice.]

It was just last week that Cincinnati won the World Series. Among other things, this reminded me of the story about the three umpires who were standing around discussing how they determined whether the pitcher's throw was a ball or a strike. The first umpire said: "Some is balls and some is strikes, and I calls 'em the way they is." The second, perhaps a bit wiser for having had an introductory psychology course, said: "Some is balls and some is strikes, and I calls 'em the way I sees them." The third umpire, possibly an unemployed philosophy professor, countered with: "Some is balls and some is strikes -- but they ain't nothin' till I calls 'em." A little bit afraid to tread in the area of philosophy, recognizing that this is primarily "the way I sees it," but knowing that there are others here who also see what I see, I've taken the liberty of entitling this talk: "Consumer Research: Telling It Like It Is." [There have been five previous occasions for outgoing ACR officers to deliver a farewell address. In the days before we had an elected president, Jim Engel and Bob Perloff, the first and second Chairmen of the Advisory Council (then, the governing body of ACR), both spoke about the developing organization and its future. Joel Cohen (1973), our first President, focused on the potential of ACR to serve as a facilitator of research and, in an all-too-brief section, commented on the trivial nature and poor quality of consumer research, circa 1972. Bob Pratt (1974), our second President, presented a thorough accounting of where this organization had been, what progress had been made, and what progress remained to be made. My immediate predecessor, Bill Wells, chose not to give any address at all. This author confesses that the full wisdom of Wells' decision did not sink in until he started to prepare this address two weeks before it had to be delivered.]

DEFINING CONSUMER BEHAVIOR

[This section is based upon Jacoby (1975).]

Let me begin by defining what I mean by the terms "consumer research" and "consumer behavior." Several writers -- including some who are members of this Association -- have asserted that consumer behavior is nothing more than a sub-domain of marketing (e.g., Arndt, 1968, p. S). Others contend that it "is an applied discipline...[which] attempt[s] to solve practical marketing problems" (Engel, Kollat, and Blackwell, 1973, p. 8). Conceivably, authors in other disciplines may claim consumer behavior as their own.

I prefer to view consumer behavior as being independent of any disciplinary orientation. To me, it is a fundamental form of human behavior; it simply exists. Moreover, it will continue to exist regardless of whether or not any discipline makes it the subject of formal inquiry. As I define it (cf. Jacoby, 1975, 1976a, 1976b), consumer behavior is the acquisition, consumption, and disposition of goods, services, time, and ideas by decision making units (e.g., individuals, families, firms, etc.). ["Behavior" is used here in a general sense to include both overt behavior and related cognitive behavior which may occur prior to, during, and subsequent to overt behavioral acts.] As such, it represents three broad classes of behaviors (namely, acquisition, consumption, and disposition) directed toward aspects of the environment.

Consumer behavior encompasses much more than just buying products and/or services. For example, acquisition can occur in a wide variety of ways, not all of which involve purchasing and the exchange of money. To illustrate, the sensuous young lady dispensing favors to her 50-year-old "uncle" in return for a mink coat, a luxury apartment, and a vacation on the Riviera is, among other things, engaging in a very old form of consumer behavior.

Borrowing the neighbor's rake -- a form of temporary acquisition which does not involve the exchange of money -- can also be considered a form of consumer behavior. One could even argue that the chimpanzee who works for tokens and then exchanges them either for a banana or an opportunity to look through a window into another room is also engaging in consumer behavior (cf. Cowles, 1937; Wolfe, 1936). If you think this is a bit farfetched, you have only to look into last year's ACR Proceedings to find an empirical investigation which used laboratory animals -- and I don't mean students -- to study economic behavior (Battalio and Kagel, 1975).

Quite clearly, many would end their definition of consumer behavior at this point. The most often cited models of consumer behavior (cf. Engel, Kollat, and Blackwell, 1968, 1973; Howard and Sheth, 1969; Nicosia, 1966; Hansen, 1972) focus predominantly on pre-purchase acquisition; they barely mention or discuss actual consumption (i.e., the decisions and behaviors involved in actually using or consuming the product) and completely ignore disposition. However, consumer behavior also includes these other basic categories of human behavior. Like acquisition, consumption and disposition are both complex decision processes having many facets. Among other things, consumption may be immediate, delayed, or extended through time, and the object of consumption may be entirely consumed (e.g., a cookie) or may remain in complete or partial form after consumption has ceased (e.g., a candy bar wrapper, an old shirt, an auto which is beyond repair). In the latter event, the consumer eventually becomes involved in a decision making and behavioral process regarding whether to throw the object away, give it away, sell it, rent it, convert it to another purpose, etc. Often, the acquisition, consumption, or disposition of one item requires the acquisition, consumption, or disposition of another item (e.g., buying a car usually requires that we purchase auto insurance; using a car requires that we purchase gasoline; selling a car usually requires that a new vehicle be acquired, or a new mode of transportation be employed). Thus, consumer behavior often assumes complex overlays of multiple decision making and choice behavior regarding acquisition, consumption, and disposition.

Consumer research, then, is simply research addressed to studying any aspect of consumer behavior. It is not necessarily applied, although it could be and often is. However, it is important to note the growing tendency to consider the study of consumer behavior as a worthy endeavor in its own right (cf. Jacoby, 1969a; Sheth, 1972). In other words, there are "basic" as well as "applied" consumer researchers. Having now made a distinction between these traditional orientations to research, let me say that the distinction is more arbitrary and artificial than real. Where there are differences between the two, they are more in degree than in kind. Applied research almost invariably utilizes basic research concepts and is often concerned with being able to use the obtained information at later points in time (i.e., generalizing). Accordingly, I believe that the issues I am about to raise regarding consumer research are just as relevant for people who call themselves applied as for those who have a more basic orientation.

PROBLEMS IN CONSUMER RESEARCH

In the few passages of his outgoing Presidential Address that were devoted to the subject of consumer research, Cohen (1973, pp. 4-5) made the following general observations: "...too much of the research is trivial, both theoretically and for problem solution. Simply put, the quality of our research is not as high as it should be, regardless of purpose." In another paper delivered at that same conference, Kollat, Blackwell, and Engel (1972), after reviewing five years' worth of the published literature in order to update the Engel, Kollat, and Blackwell (1968, 1973) text, described several of these problems. In most instances, these were the same problems which prevailed when these authors prepared their first edition (Kollat, Engel, and Blackwell, 1970). Perhaps the most telling passage in the Kollat, Blackwell, and Engel (1972, p. 577) paper is the following:

"...the consumer behavior literature has doubled during the last five years. This constitutes a remarkable achievement by almost any standard. Unfortunately, however, it would not be surprising if 90% of the findings and lack of findings prove to be wrong..."

Having myself recently prepared a chapter on consumer psychology for the Annual Review of Psychology (Jacoby, 1976b) and considered nearly 1000 articles in the process, I have the impression that virtually no progress has been made in the intervening years on most of the problems that Kollat et al. identified.

I would like to discuss these problems and also raise several new ones for your consideration -- not because they make pleasant reading or listening, but because it is all too apparent that much too large a proportion of the contemporary consumer research literature is not worth the paper it is printed on or the time it takes to read it. Unless we begin to take corrective action soon, we will all drown in a mass of meaningless junk! Let me document this assertion by considering five broad categories of problems: our theories (and comprehensive models), our research methods, our research measures, our statistical techniques, and our subject matters. [Let me shout it at the outset: MEA CULPA! I have committed many of the sins that I am about to describe. No doubt, I will continue to commit at least some of them long after this address is published and forgotten. There is no one of us without guilt. However, we have to begin casting stones about and break our false idols lest our collective guilt suffocate the periodic airing of our sins and, in so doing, also suffocate the impetus to improve. I would also like to note at this point that naming names and citing specific articles as illustrations of the problems I am enumerating would probably serve few, if any, positive ends. The interested reader has only to examine the articles in our leading journals to find numerous examples of what I mean. On the other hand, and because they may serve a guidance function for some, I have named names and cited specific articles in order to illustrate positive examples addressed to the issue under consideration. It should be noted, however, that citing an article as being positive in one respect usually does not mean that it is void of other deficiencies.

Finally, some of the positively cited articles will be my own. I beg the reader's forbearance for the human tendency to be most familiar with and cite one's own work.]

Consumer Behavior Theories, Models, and Concepts

The past decade has witnessed an increasing amount of attention devoted to the development, presentation, and discussion of relatively comprehensive theories and models of consumer behavior (Andreasen, 1965; Nicosia, 1966; Engel, Kollat, and Blackwell, 1968, 1973; Howard and Sheth, 1969; Hansen, 1972; Markin, 1974). However, Kollat et al. (1972, p. 577) noted that: "These models have had little influence on consumer behavior research during the last five years. Indeed, it is rare to find a published study that has utilized, been based on, or even influenced by, any of the models identified above." Unfortunately, not much has changed since then.

Look Ma -- No Theory. Despite the availability of theory and the necessity for theory in any scientific endeavor seeking to extend understanding via empirical research, the impetus and rationale underlying much consumer behavior research seem to rest on little more than the availability of easy-to-use measuring instruments, the existence of more or less willing subject populations, the convenience of the computer, and/or the almost toy-like nature of sophisticated quantitative techniques. Little reliance is placed on theory either to suggest which variables and aspects of consumer behavior are of greatest importance and in need of research, or as a foundation around which to organize and integrate findings. It is still true that nothing is so practical as a good theory. However, while most of us talk a good game about the value and need for theory, it is clear that we would rather be caught dead than be caught using theory.

The Post Hoc, Atheoretic, Shotgun Approach to Conducting Consumer Research. A fundamental problem relating to the neglect of theory and theoretically derived concepts is that the researcher increases the likelihood that he will fail to understand his own data and/or be able to meaningfully interpret and integrate his findings with findings obtained by others. In a set of unpublished working papers now six years old (Jacoby, 1969a, 1969b; as well as in a subsequent empirical investigation, Jacoby, 1971), [Copies of these papers are still available on request.] I referred to the problem as "the atheoretical shotgun approach" and tried to illustrate its nature by considering empirical attempts to relate personality variables to consumer behavior. Reaching back into ancient history, the most frequently quoted and paraphrased passage from these papers is as follows:

Investigators usually take a general, broad coverage personality inventory and a list of brands, products, or product categories, and attempt to correlate subjects' responses on the inventory with statements of product use or preference. Careful examination reveals that, in most cases, the investigators have operated without the benefit of theory and with no a priori thought directed to how, or especially why, personality should or should not be related to that aspect of consumer behavior being studied. Statistical techniques, usually simple correlation or variants thereof, are applied and anything that turns up looking half-way interesting furnishes the basis for the Discussion section. Skill at post-diction and post hoc interpretation has been demonstrated, but little real understanding has resulted.

These papers went on to advocate and illustrate why it was necessary for consumer researchers to use theoretically derived hypotheses for specifying variables and relationships in advance. That is, they called on consumer researchers (1) to make predictions of differences and no differences, (2) to explain the reasons underlying these predictions, and (3) to do both prior to conducting their research. Look at it this way. You're sitting with a friend watching Pete Rose at bat in the World Series. Pete Rose hits a home run and your friend says: "I knew he was going to hit that home run." In fact, after this bit of post-diction, your friend even continues with a plausible explanation: "He always hits a home run off right-hand pitchers when he holds his feet at approximately a 70 degree angle to each other and his left foot pointing directly at the pitcher." Think of how much more confident you would have been that what your friend was saying was correct if he made this as a prediction just as Pete Rose was stepping into the batter's box. (Anticipating one of the issues I will raise below, namely replication, think of how much greater confidence you would have if your friend predicted Rose would hit home runs on two subsequent occasions just before Rose actually hit home runs, and also predicted Rose would not hit a home run on eight other instances where Rose did not hit a home run.)

Although considered in the context of relating personality variables to consumer behavior, these working papers also made it clear that almost every aspect of consumer research reflected the atheoretic shotgun approach, particularly when it came to utilizing concepts borrowed from the behavioral sciences. In a word, the problem was pandemic. Yet despite the fact that this passage was later liberally quoted and re-emphasized by such influential writers as Engel, Kollat, and Blackwell (1975, pp. 652-53; Kollat, Engel, and Blackwell, 1972, pp. 576-77) and Kassarjian (in his frequently cited review of personality research, 1971, p. 416), the impact of these calls for greater reliance on theory and less shotgunning in consumer research has been negligible. Most consumer researchers are still pulling shotgun triggers in the dark.

Concepts Misplaced, or Whoops! Did you Happen to See Where my Concept Went? Even in those instances where consumer researchers seem to be sincerely interested in conducting research based upon a firm conceptual foundation, they sometimes manage to misplace their concepts when it gets down to the nitty gritty. For example, the author of one recent article states: "...it is imperative that our definition of deception in advertising recognize the interaction of the advertisement with the accumulated beliefs and experience of the consumer." Two paragraphs later he provides a definition which ignores his imperative. He then goes on to propose plans for detecting deception which completely disregard the fact that deception may occur as a function of the prior beliefs of the consumer and not as a function of the ad (or ad campaign) in question.

Another equally frustrating example is provided by those who define brand loyalty as an hypothetical construct predicated upon the cognitive dynamics of the consumer -- and then proceed to base their measure of brand loyalty solely on the buyer's overt behavior. The consumer behavior literature contains an abundance of similar examples of our inability to have our measures of concepts correspond to these concepts.

The "Theory of the Month" Club. Interestingly, however, the failure to use and test existing theories and comprehensive models of consumer behavior has not discouraged some of us from proposing new theories and comprehensive models, thereby providing us with a different kind of problem. Several of our most respected colleagues seem to belong to a sort of "theory of the month" club which somehow requires that they burst forth with new theories periodically and rarely, if ever, bother to provide any original empirical data collected specifically in an attempt to support their theory. Perhaps those with a new theory or model should treat it like a new product: either stand behind it and give it the support it needs (i.e., test it and refine it as necessary) --or take the damn thing off the market!

Single Shot vs. Programmatic Research. Another theory-related problem evidenced in the contemporary consumer behavior literature is the widespread failure to engage in programmatic research. Judging from the literature published since the inception of ACR, there are fewer than a dozen individuals who have conducted five or more separate investigations in a systematic and sequential fashion which were addressed to providing incremental knowledge regarding the same broad issue. Instead, what we have is a tradition of single shot studies conducted by what one scholar has termed "Zeitgeisters-Shysters" (Denenberg, 1969).

Rarely, however, have single shot investigations answered all questions that need to be answered or made definitive contributions on any subject of importance. Yet many consumer researchers seem to be operating under the illusory and mistaken belief that such studies are capable of yielding payout of substance and duration. I am not advocating that we do only programmatic research. Having engaged in enough single shot studies myself (e.g., Kyner, Jacoby, and Chestnut, 1975), I full well appreciate the allure, excitement, and challenge often inherent in single shot studies and the potential that such studies sometimes have for providing resolution to an applied problem of immediate concern. I also recognize that it is difficult to caution someone in the depth of an infatuation not to be beguiled. However, if we are to deserve the label "serious researcher" and make contributions of substance, it is necessary that a greater proportion of our efforts be programmatic.

Although I consider theory and concepts to be the proper and best starting point for most consumer research, I recognize that some of us consider theory to be irrelevant. So let me now direct some much needed attention to our methods, our measures, our statistics, and our subject matter -- topics which all consumer researchers, whether so-called applied or basic, must share in common.

Consumer Research Methods

Verbal Report vs. Actual Behavior. By far, the most prevalent approach to gathering data in consumer research involves eliciting verbal reports from subjects either via an interview or through the use of a self-administered questionnaire. Typically, these verbal reports assess (1) recall of past events and behavior, (2) current psychological states (including attitudes, preferences, beliefs, statements of intentions to behave, and likely reactions to hypothetical events), and/or (3) socio-demographic data. Of the 44 empirical studies in the published Proceedings of last year's conference (Schlinger, 1975), 39 (or 87%) are based principally or entirely on verbal report data collected from respondents. Similarly, of the 56 empirical studies found in the first six issues of the Journal of Consumer Research, 48 (more than 85%) were based primarily or solely on verbal report data. Even if verbal reports were the best of possible methods, the following observation by Platt (1964, p. 251) would still remain true: "Beware the man of one method or one instrument...he tends to become method-oriented rather than problem-oriented." However, the verbal report is probably not the best of all possible methods. Given the numerous sources of bias in verbal reports and the known and all-too-often demonstrated discrepancy between what people say they do and what they actually do, it is nothing short of amazing that we persist in our slavish reliance on verbal reports as the mainstay of our research.

For the greater part, the problems inherent in the ubiquitous verbal report approach can be organized into one of three broad categories: interviewer error, respondent error, and instrument error. We will here disregard consideration of interviewer errors, since more than 75% of the verbal report studies (or two-thirds of our published empirical effort) are based upon self-administered questionnaires.

Respondent Error in Verbal Reports. It is exceedingly important to note that verbal report data are predicated upon many untested and, in some cases, invalid assumptions. Many of these are in regard to the respondent. As examples, consider the following assumptions inherent in attempts to elicit recall of factual information: (1) Prior learning (and rehearsal) of the information has actually taken place; that is, something exists in memory to be recalled. (2) Once information is stored in memory, it remains there in accurate and unmodified form. (3) Said information remains equally accessible through time. (4) There are no respondent differences in ability to recall which should be controlled or accounted for. (5) Soliciting a verbal report is a non-reactive act; that is, asking questions of respondents is unlikely to have any impact on them and on their responses.

Analogous assumptions exist with respect to assessing psychological states via verbal reports (e.g., regarding attitudes, preferences, intentions, etc.). For example, in a paper published eight years ago -- which I believe should be required reading for all consumer researchers -- Leo Bogart noted that the simple act of asking the respondent a question often "forces the crystallization and expression of opinions where [previously] there were no more than chaotic swirls of thought" (1967, p. 335). It should be noted that the assumptions underlying recall of factual material are few and simple relative to the assumptions underlying the use of verbal reports as indicants of psychological states. Perhaps the most effective way to summarize the state of affairs is to simply say that many of the fundamental assumptions which underlie the use of verbal reports are invalid. The reader is asked to perseverate regarding the ramifications of this fact.

Instrument Error in Verbal Reports. If these problems are sobering, consider the fact that our paper and pencil instruments (either self-administered questionnaires or interview schedules) often contribute as much or more error than do our interviewers or our respondents. In general, most of our questionnaires and interview schedules are terrible and tend to impair rather than assist us in our efforts to collect valid data. More often than not, we provide respondents with questionnaires which, from their perspective, are ambiguous, intimidating, confusing, and incomprehensible. But questions and questionnaires are easy to prepare, right? Wrong! Preparing a self-administered questionnaire is one of the most difficult steps in the entire research process. Unfortunately, it is commonly the most neglected step. Formulating questions and developing the questionnaire seems like such a simple thing to do that we are usually lulled into a false sense of security. Everyone is assumed to be an expert here. Yet many of us never become aware of the literally hundreds of details that should be attended to in constructing questionnaires (cf. Erdos, 1970; Payne, 1951; Kornhauser and Sheatsley, 1959; Selltiz, Jahoda, Deutsch, and Cook, 1959). We simply assume that because we know what we mean by our questions and we comprehend the lay-out and organization of our instrument, data collected using such an instrument are naturally valid. If the data are not valid, then the error is obviously a function of the respondent, not a function of our instrument. The result is that we are often left with what in computer parlance is referred to as GIGO, that is, garbage in-garbage out. In most instances, we ourselves are hardly even cognizant of the fact that this has occurred.

Please don't misinterpret what I am saying. I am NOT suggesting that we do away with verbal reports and self-administered questionnaires. This approach to gathering data is a valid and vital part of our methodological armamentarium. However, if we are to continue placing such great reliance on it, the least we ought to do is clean it up. Too many of us are caught up in the excitement and challenge of research and ignore the basics. One of the things I am most emphatically calling for is for us to get down to these basics, to learn how to formulate questions and structure questionnaires. I care not that a finding is significant, or that the ultimate in statistical analytical techniques has been applied, if the data collection instrument generated invalid data at the outset. Relative to other aspects of conducting research, more time must be devoted to developing and polishing our verbal report instruments. Perhaps if journal editors found it important to require publication of the instrument (or at least the critical questions used), it would stimulate improvement in this area.

Verbal Reports vs. Actual Behavior: Continued. But do we actually have to place slavish reliance on the verbal report? Certainly not! One alternative is to devote less time to studying what people say they do and spend more time examining what it is that they actually do do. In other words, we must begin to place greater emphasis on studying behavior, relative to the amount of effort we place on studying verbal reports regarding behavior. There have been several recent developments in this regard. Since a few of these were discussed at some length yesterday (cf. Jacoby, Chestnut, Weigl, and Fisher, 1975; Payne, 1975) and these remarks will also be available in our Proceedings, I'll not devote additional time to the subject here. Let me simply note that the verbal report and behavioral approaches each have their unique advantages and disadvantages. The optimal procedure would probably involve some combination of both (cf. Wright, 1974). Such an approach is most likely to provide us with a better fix on, and deeper understanding of our findings.

Consumer Behavior: A Dynamic Process Studied with Static Methods. In addition to the necessity of cleaning up our verbal reports and developing greater attention to alternative approaches, we also need to begin studying consumer behavior (which includes consumer decision making) in terms of the dynamic process that it is. Virtually all consumer researchers tend to consider consumer behavior as a dynamic, decision making, behavioral process. Yet probably 99-plus% of all consumer research conducted to date examines consumer decision making and behavior via static, post hoc methods. Instead of being captured and studied, the dynamic nature of consumer decision making and behavior is squelched and the richness of the process ignored. This is another issue which was treated in detail yesterday, and those interested will be able to pursue this subject in the Proceedings (cf. Jacoby, Chestnut, Weigl, and Fisher, 1975).

Roosters Cause the Sun to Rise. Another methodological issue I would briefly like to mention is the necessity for greater reliance on the experimental method, particularly in those instances where cause-effect assertions are made or alluded to. Examination of our literature reveals a surprising number of instances in which causation is implied or directly claimed on the basis of simple correlation. It bears repeating that no matter how highly correlated the rooster's crow is to the sun rising, the rooster does not cause the sun to rise.

More and Richer Dependent and Independent Variables. A final set of methodological issues I would like to raise at this point -- in part, because they are related to the issue of measurement (particularly validity) to which I will turn next -- concerns the need for research (1) which incorporates measures of a variety of dependent variables, (2) which explores the combined and perhaps interacting impact of a variety of independent variables, and (3) which moves away from using single measures of the same dependent variable. With respect to the first, it is often possible to measure a variety of different dependent variables at little additional cost (e.g., accuracy, decision time, and subjective states in Jacoby, Speller, and Berning, 1974). Unfortunately, opportunities for substantially enhancing understanding through the inclusion of a variety of dependent variables are generally ignored. Equally important, we live in a complex, multivariate world. Studying the impact of one or two variables in isolation would seem to be relatively artificial and inconsequential. In other words, we also need more research which examines the impact of a variety of factors impinging in concert.

It is also all too often true that conclusions are accepted on the basis of a single measure of our dependent variable. The costs involved in incorporating a second or third measure of that same variable are usually negligible, particularly when considered in terms of the increased confidence we could have in both our findings and concepts if we routinely used a variety of indices and found that all (or substantially all) provided the same pattern of results (e.g., Jacoby and Kyner, 1973). This second issue (namely, using multiple measures of the same variable) relates more to the validity of our measures than to our methods, and is elaborated upon below.

Consumer Research Measures and Indices

Our Bewildering Array of Definitions. Another problem which Kollat, Blackwell, and Engel (1972) referred to is the "bewildering array of definitions" that we have for many of our central constructs. As one example, at least 40 different and distinct measures of brand loyalty have been employed in the 500 studies comprising the brand loyalty literature (cf. Jacoby and Chestnut, 1975). Virtually no attempt has been made to weed out the poor measures and identify the good ones. Almost everyone has his own preferred measure and seems to blithely and naively assume that findings from one investigation can easily be compared and integrated with findings from investigations which use other definitions. The same horrendous state of affairs exists with respect to many of our other core concepts and constructs. There are at least four different categories of "innovator" definitions (cf. Kohn and Jacoby, 1975; Robertson, 1971) and three different categories of "opinion leadership" definitions (i.e., self-designating, sociometric, and key informant). Each of these categories can be, and usually is, broken out into several specific operationalizations. As examples, Rogers and Cartano (1962), King and Summers (1970), and Jacoby (1972) all provide different operationalizations of self-designating opinion leadership.

More incredible than the sheer number of our measures is the ease with which they are proposed and the uncritical manner in which they are accepted as meaningful indicants. In point of fact, most of our measures are only measures because someone says that they are, not because they have been shown to satisfy the standard measurement criteria of validity, reliability, and sensitivity. Stated somewhat differently, most of our measures are no more sophisticated than first asserting that the number of pebbles a person can count in a ten-minute period is a measure of that person's intelligence; next, conducting a study and finding that people who can count many pebbles in ten minutes also tend to eat more; and, finally, concluding from this that people with high intelligence tend to eat more.

Wanted, Desperately: Validity. A core problem in this regard is the issue of validity. Just how valid are our measures? Hardly anyone seems to be interested in finding out. Like our theories and comprehensive models, once proposed, our measures seem to take on an almost sacred and inviolate existence all their own. They are rarely, if ever, examined or questioned. Several basic types of validity exist, although often described with somewhat varying terminology (e.g., American Psychological Association, 1966; Angelmar, Zaltman, and Pinson, 1972; Cronbach, 1960; Heeler and Ray, 1972; Nunnally, 1973). The psychometrician Nunnally, in a highly readable and almost layman-like presentation of the subject, writes of three basic types of validity: content validity (which is generally irrelevant in consumer research), predictive validity, and construct validity. Face validity is a fourth, non-psychometric variety and refers to whether a measure looks like it is measuring what it is supposed to be measuring. Examination of the core [As considered from my biased perspective, i.e., "as I sees it."] consumer behavior journals (Journal of Consumer Research, Journal of Marketing Research, Journal of Marketing, Journal of Applied Psychology, Public Opinion Quarterly, Journal of Consumer Affairs, and Journal of Advertising Research) and conference proceedings (of the Association for Consumer Research, American Marketing Association, and the American Psychological Association's Division of Consumer Psychology) since 1970 -- a body of literature consisting of approximately 1000 published articles -- reveals the following with respect to validity.

Face Validity. First, there are numerous examples of face validity. The measures being used almost always look like they are measuring that which they are supposed to be measuring. However, the overwhelming majority of studies go no further, i.e., provide no empirical support. In other words, face validity is often used as a substitute for construct validity.

Predictive Validity. There are also a sizable number of studies which suggest the existence of predictive validity, that is, the measure in question seems to correlate with measures of other variables as predicted. Unfortunately, many investigators do not seem to recognize that predictive validity provides little, if any, understanding of the reasons for the relationship. One can have a predictive validity coefficient of .99 and still not know why or what it means -- other than the fact that the scores on one measure are highly predictive of scores on a second measure. Indeed, the relationship may even be meaningless. As one concrete example, Heeler and Ray (1972, p. 364) note that Kuehn (1963):

...improved the ability of the Edwards Personal Preference Schedule (EPPS) to predict car ownership. He did it with EPPS scores computed by subtracting "affiliation" scores from "dominance" scores. Such a difference really has no psychological or marketing significance; it is just a mathematical manipulation that happened to work in one situation.

Obviously, high predictive validity doesn't necessarily have to be meaningful.

However, there is one type of predictive validity which receives all too little attention, and that is cross-validity. "Whereas predictive validity is concerned with a single sample, cross validity requires that the effectiveness of the predictor composite be tested on a separate independent sample from the same population" (Raju, Bhagat, and Sheth, 1975, p. 407). It should be obvious that unless we can cross-validate our findings, we may really have no findings at all. Again, examination of the consumer behavior literature reveals few attempts at cross-validation (Kaplan, Szybillo, and Jacoby, 1974; Raju, Bhagat, and Sheth, 1975; Speller, 1973; Wilson, Mathews, and Harvey, 1975).
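The cross-validation procedure quoted above can be sketched as a small computation. This is purely an illustration with invented data, not anything from the studies cited: a predictor is fit on one sample, then its effectiveness is assessed on a separate, independent sample from the same population.

```python
import random

random.seed(1)

def make_sample(n):
    # Two invented samples drawn from the same hypothetical population,
    # in which y is truly related to x (plus noise).
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [2.0 * x + random.gauss(0, 1) for x in xs]
    return xs, ys

def fit_slope(xs, ys):
    # Ordinary least-squares slope: the "predictor composite" here.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def correlation(us, vs):
    mu, mv = sum(us) / len(us), sum(vs) / len(vs)
    num = sum((u - mu) * (v - mv) for u, v in zip(us, vs))
    du = sum((u - mu) ** 2 for u in us) ** 0.5
    dv = sum((v - mv) ** 2 for v in vs) ** 0.5
    return num / (du * dv)

# Derivation sample: build the predictor here.
x_dev, y_dev = make_sample(100)
slope = fit_slope(x_dev, y_dev)

# Cross-validation sample: score the SAME predictor on fresh cases.
x_cv, y_cv = make_sample(100)
preds = [slope * x for x in x_cv]
print(round(correlation(preds, y_cv), 2))  # the cross-validity coefficient
```

A predictor that survives this second, independent test is far more trustworthy than one validated only on the sample that produced it.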

Construct Validity: A Necessity for Science. From the perspective of science, the most necessary type of validity to establish is construct validity. Examination of the recent published literature indicates that less than 2% of our productivity has been directed toward determining construct validity. A large part of the problem lies in the fact that many researchers appear to naively believe that scientific research is a game played by creating measures and then applying them directly to reality. Although guided by some implicit conceptualization of what it is he is trying to measure, the consumer researcher rarely makes his implicit concepts sufficiently explicit or uses them as a basis for developing operational measures. Yet virtually all contemporary scholars of science generally agree that the concept must precede the measure (e.g., Massaro, 1975, p. 25; Plutchik, 1968, p. 45; Selltiz et al., 1959, pp. 146-47).

It is not my intention to get into a lengthy discussion of the nature of scientific research. [The interested reader is referred to Chapter 4 in Jacoby and Chestnut (1975) for an extended discussion of these issues.] I simply wish to point out that many of our measures are developed at the whim of a researcher with nary a thought given to whether or not they are meaningfully related to an explicit conceptual statement of the phenomenon or variable in question. In most instances, our concepts have no identity apart from the instrument or procedures used to measure them. As a result, it is actually impossible to evaluate our measures. "To be able to judge the relative value of measurements or of operations requires criteria beyond the operations themselves. If a concept is nothing but an operation, how can we talk about being mistaken or about making errors?" (Plutchik, 1968, p. 47). In other words, clearly articulated concepts (i.e., abstractions regarding reality) must intervene between reality and the measurement of reality.

Probably the most efficient means for establishing construct validity is the Campbell and Fiske (1959) multitrait-multimethod approach. Despite the fact that numerous articles refer to this approach as something that could or should be applied, considerably less than 1% of our published literature has actually employed this approach for systematically exploring construct validity (Davis, 1971; Jacoby, 1974; Silk, 1971). Yet if we cannot demonstrate that our concepts are valid, how can we continue to act as if the findings based upon measures of these concepts are valid? As Campbell and Fiske (1959, p. 100) note: "Before one can test the relationship between a specific trait and other traits, one must have confidence in one's measure of that trait."
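The Campbell and Fiske logic can be illustrated with a small simulation (all trait names, method names, and numbers here are invented for illustration): two traits are each measured by two methods; convergent validity requires the same-trait, different-method correlations to be high, while different-trait correlations should be low by comparison.

```python
import random

random.seed(2)
N = 300

# Two hypothetical underlying traits, measured on the same respondents.
trait_a = [random.gauss(0, 1) for _ in range(N)]  # e.g., opinion leadership
trait_b = [random.gauss(0, 1) for _ in range(N)]  # e.g., innovativeness

def measure(trait, error_sd):
    # Each method adds its own measurement error to the underlying trait.
    return [t + random.gauss(0, error_sd) for t in trait]

scores = {
    ("A", "m1"): measure(trait_a, 0.5),  # trait A, method 1 (e.g., self-report)
    ("A", "m2"): measure(trait_a, 0.5),  # trait A, method 2 (e.g., sociometric)
    ("B", "m1"): measure(trait_b, 0.5),
    ("B", "m2"): measure(trait_b, 0.5),
}

def corr(us, vs):
    mu, mv = sum(us) / len(us), sum(vs) / len(vs)
    num = sum((u - mu) * (v - mv) for u, v in zip(us, vs))
    du = sum((u - mu) ** 2 for u in us) ** 0.5
    dv = sum((v - mv) ** 2 for v in vs) ** 0.5
    return num / (du * dv)

# Convergent entry: same trait, different methods -- should be high.
convergent = corr(scores[("A", "m1")], scores[("A", "m2")])

# Discriminant entry: different traits -- should be low by comparison.
discriminant = corr(scores[("A", "m1")], scores[("B", "m2")])

print(round(convergent, 2), round(discriminant, 2))
```

In a full multitrait-multimethod matrix, every such pairing is examined at once; the point of the sketch is only that the comparison requires multiple traits and multiple methods measured on the same people.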

Convergent Validity. One basic and relatively easy to establish component of construct validity is convergent validity. This refers to the degree to which attempts to measure the same concept using two or more different measures yield the same results. Even if there are few full-scale construct validity investigations available, it seems reasonable to expect that we should find many studies to demonstrate convergent validity. After all, and as noted above, many of our core constructs are characterized by numerous and varied operationalizations. Surely, there have been many investigations which have used two or more measures of these constructs, thereby permitting us to examine convergent validity. Examination of the literature reveals that such is not the case. Somewhat incredibly, only two (out of 500) published studies exist which administered three or more brand loyalty measures concurrently to the same group of subjects, thereby permitting an examination of how these measures interrelated. Our other core constructs fare equally poorly. Data that are available often indicate that different measures of the same construct provide different results (e.g., Kohn and Jacoby, 1973). Given that we cannot demonstrate adequate convergent validity, it should be screamingly obvious that we have no basis for comparing findings from different studies and making generalizations using such a data base. What we urgently need is more widespread use of multiple measures so that we can begin the relatively simple job of assessing convergent validity. We are being strangled by our bad measures. Let's identify and get rid of them.

Reliability. Another fundamental problem with consumer behavior measures is that data regarding their reliability, particularly test-retest reliability, are rarely provided. As an illustration, only a single study appears in the entire 300-item brand loyalty literature which measures the test-retest reliability of a set of brand loyalty measures. A similar state of affairs exists with respect to indices of other core constructs. In particular, consider the case of the test-retest reliability of recall data. In the entire literature on the use of recall data in advertising -- and I suspect that this takes into account several thousand studies -- only two published articles can be found which provide data on the test-retest reliability of recall data (Clancy and Kweskin, 1971; Young, 1972). Alarmingly, one of these authors (Young, 1972, p. 7) notes that results obtained in ten retests were the same as those in the initial test in only 50% of the cases. Assuming we were ill and actually had a body temperature of 105° Fahrenheit, how many of us would feel comfortable using a thermometer if, with no actual change in our body temperature, this thermometer gave us readings of 97.0°, 100.6°, 98.6°, and 104.4°, all within the space of one 15-minute period? Yet we persistently employ indices of unknown reliability to study consumer purchase decisions and behavior. More sobering, we often develop expensive nationwide promotional strategies and wide-ranging public policies based upon findings derived from using such indices. Obviously, reliability should not only be a concern in Ph.D. dissertations and M.S. theses.

Open Publication Tradition. Let me digress for a moment -- because this seems to be as good a point as any -- to briefly touch upon my use of and stress upon the words "published literature." No doubt, work has been conducted by and for industry which addresses many of these fundamental issues. Much of this work is of high quality. Rarely, however, are the findings from these investigations permitted to enter the published literature. Although there are several reasons for this, a dominant reason is that industry is phobic. Firms are afraid that by permitting such data to be published, they will be giving up trade secrets and competitive advantages. I submit that, in the long run, industry probably has more to gain than lose by permitting this material to surface. No single firm has the resources necessary to make progress along all, or even a sizable proportion of the important research fronts. Contributing to the basic fund of knowledge would yield dividends to all.

Replication. There is a strong necessity for us to replicate our findings using different subject populations, test products, etc. Being able to predict that Johnny Bench will hit a home run on one occasion is not as impressive as being able to accurately predict on two or more occasions. The name of the game is confidence in our findings.

Measurement Based on House-of-Cards Assumption. Another problem which makes its appearance in the literature with alarming frequency is the tendency to have one's measures (or proposed measures) rest upon an intertwined series of untested and sometimes unverifiable assumptions so that the measures used are sometimes 5 or even 15 steps removed from the phenomenon of interest. The article on deceptive advertising noted earlier provides a good case in point. Interpreting data collected via such measures or measurement systems represents a form of specious logic. In such cases, if a single one of the many assumptions is rendered invalid, the entire measurement system must necessarily come cascading downward. However, perhaps there is a positive side to this problem in that it indicates consumer researchers are at least beginning to recognize that their measurements are predicated upon basic assumptions. Stating these assumptions in clear and explicit detail is a necessary and important step before meaningful progress can be made.

The Folly of Single Indicants. A final measurement problem I would like to note is perhaps most easily illustrated by posing the following question: "How many of us would feel comfortable having our intelligence assessed on the basis of our response to a single question?" Believe it or not, that's exactly the kind of thing we do in consumer research. As examples, brand loyalty is often measured by the response to a single question. The same is true with respect to virtually all of our other core constructs. Just a few months ago I came across an exceedingly expensive, large scale multinational study of consumer information seeking which assessed opinion leadership on the basis of each subject's response to a single question. Examination of our literature reveals hundreds of instances in which the response to a single question suffices to establish the person's level on the variable of interest and then serves as the basis for extensive analysis and entire articles.

Just as is true of such constructs as personality and intelligence, most core concepts in consumer research (e.g., opinion leadership, brand loyalty, innovation proneness, shopping proneness, etc.) are multifaceted and complex. Intelligence and personality are generally measured through the use of a battery of different test items and methods. Even single personality traits are typically assessed by 30 or 40 item inventories. Given the complexity of our subject matter, what makes us think that we can use responses to single items (or even to two or three items) as measures of these constructs, then relate these scores to a host of other variables, arrive at conclusions based upon such an investigation, and get away calling what we have done "science"?
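The advantage of a multi-item inventory over a single indicant can be made concrete. A small illustrative simulation (all numbers invented; Cronbach's coefficient alpha is the standard internal-consistency index, though the address itself does not name it): each item reflects one underlying construct plus item-specific error, and aggregating over items yields a far more dependable score than any one item alone.

```python
import random

random.seed(4)
N, K = 200, 10  # 200 hypothetical respondents, a 10-item inventory

# Each item = the underlying construct plus item-specific error -- the
# usual rationale for multi-item rather than single-item measurement.
construct = [random.gauss(0, 1) for _ in range(N)]
items = [[c + random.gauss(0, 1) for c in construct] for _ in range(K)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Each respondent's total score over the K items.
totals = [sum(item[p] for item in items) for p in range(N)]

# Cronbach's coefficient alpha: internal consistency of the K-item scale.
alpha = (K / (K - 1)) * (1 - sum(variance(item) for item in items) / variance(totals))
print(round(alpha, 2))
```

With the same items taken one at a time, each single indicant carries only about half reliable variance under these assumptions; the ten-item total is substantially more trustworthy.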

Statistics in Consumer Research

Let me now turn to a consideration of the manner in which we use statistics to analyze our data. In general, this is the area where we have the fewest problems and, in recent years, probably the greatest number of advances. However, as I sees it, we still do have three major problems which I will call "number crunching", "using calipers to measure melting marshmallows", and "static state statistics".

Number Crunching. I have finally reached the point where I am no longer automatically impressed by the use of high-powered and sophisticated statistics. Why? Because too often the use of these statistics appears not to be accompanied by the use of another high-powered and sophisticated tool, namely the brain. For example, what does it really mean when the fourteenth canonical root is highly significant and shows that a set of predictors including size of house, purchase frequency of cake mix, and number of times you brush your teeth per day is related to age of oldest child living at home, laundry detergent preference, and frequency of extra-marital relations? Given the penchant that some have for coming up with brilliant interpretations of such findings, let me hasten to add that my question was simply rhetorical. Of course, this particular mindless application of high-powered statistics is only a way-out example -- or is it? A critical examination of the recent consumer research literature will reveal many more instances of such mindless and mindblowing applications.

Multilayered Madness. In its most sophisticated (a word which, it should be remembered, derives from sophism) form, number crunching involves the multilayering of statistical techniques so that the output from one analysis provides the input for the next analysis. Sometimes, this statistical version of musical chairs involves five to ten different techniques used in series. Again, given the nature of the data collected in the first place, what does the final output actually mean?

Measuring Giant Icebergs in Millimeters and Using Calipers to Measure Melting Marshmallows. Perhaps what is most surprising about this number crunching is the fact that the data being crunched are usually exceedingly crude and coarse to begin with. As already noted, the large majority of our data are collected using the self-administered questionnaire. Yet many consumer researchers don't have the foggiest idea about what the basic do's and don'ts are when it comes to questionnaire construction. Consider also the fact that the reliability and validity of the data we collect are often assumed, not demonstrated. Finally, also consider the fact that trying to measure diffuse, complex, and dynamic variables such as personality, attitudes, motives, brand loyalty, information seeking, etc. may be like trying to measure melting marshmallows with vernier calipers.

In other words, what are we doing working three and four digits to the right of the decimal point? What kind of phenomena, measures, and data do we really have that we are being so precise in our statistical analyses? I submit that our statistical methods are already too sophisticated for the kinds of data we collect. What we need are substantial developments in both our methodology (particularly in regard to questionnaire construction) and in the psychometric quality of our measures (particularly in regard to validity and reliability) before use of the high-powered statistics can be justified in many of the instances where they are now being routinely applied.

Static State Statistics. There is one area, however, in which our statistics could use some improvement. By and large, most of our statistics are appropriate only for use with data which are collected using our traditional cross-sectional, static methodologies. However, just as we have a need for the further development of dynamic methodologies, we need the development of statistics for analyzing data collected using such methods. That is, we need statistics which do not force dynamic process data to be reduced to static state representations. To a certain extent, trend analysis and cross-lagged correlations can be and have been used in this manner. However, our repertoire of statistical techniques for handling dynamic data needs to be expanded, either by borrowing from disciplines accustomed to dealing with dynamic data, or through the creative efforts of statisticians working within the consumer research domain.
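Cross-lagged correlation is one of the few process-oriented tools mentioned above, and its logic fits in a few lines. A sketch with an invented two-wave panel (all relationships and numbers are hypothetical): if X at time 1 influences Y at time 2 but not the reverse, the correlation r(X1, Y2) should exceed r(Y1, X2), and that asymmetry is the diagnostic.

```python
import random

random.seed(5)
N = 500

# Hypothetical two-wave panel: X1 drives Y2; nothing drives X2 except X1.
x1 = [random.gauss(0, 1) for _ in range(N)]
y1 = [random.gauss(0, 1) for _ in range(N)]
x2 = [x + random.gauss(0, 0.5) for x in x1]  # X is simply stable over time
y2 = [0.6 * x + 0.5 * y + random.gauss(0, 0.5) for x, y in zip(x1, y1)]

def corr(us, vs):
    mu, mv = sum(us) / len(us), sum(vs) / len(vs)
    num = sum((u - mu) * (v - mv) for u, v in zip(us, vs))
    du = sum((u - mu) ** 2 for u in us) ** 0.5
    dv = sum((v - mv) ** 2 for v in vs) ** 0.5
    return num / (du * dv)

# The asymmetry between the two cross-lagged correlations is the diagnostic.
lag_xy = corr(x1, y2)  # X at wave 1 with Y at wave 2
lag_yx = corr(y1, x2)  # Y at wave 1 with X at wave 2
print(round(lag_xy, 2), round(lag_yx, 2))
```

Even this simple device requires data collected at two points in time, which is exactly what our static, one-shot methodologies fail to provide.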

Consumer Research Subject Matter

A final set of consumer research problems I would like to touch upon concerns our subject matter. Joel Cohen called much of it "trivial". In too many ways, he is still right.

Systematically Exploring the Varieties of Acquisition. To begin with, most definitions of consumer behavior tend to shackle us by confining our attention to the purchase of products and services. Aside from the fact that purchase can itself take a variety of forms (e.g., buying at list price, bargaining, bidding at auction), purchase is but one form of acquisition. There are many others. Receiving something as a gift, or in trade, or on a loan basis are three such examples. Each of these can have important economic, sociological, and psychological dynamics and consequences different from purchase. For example, on an aggregate level, if one million more Americans this year than last suddenly decided to borrow their neighbor's rake to handle their fall leaf problems, the impact on the rake industry could be enormous. For that matter, what are the dynamics underlying being a borrower or being a lender? What are the dynamics underlying giving or receiving a gift (cf. Hart, 1974; Weigl, 1975)? Hardly any published data exist regarding these facets of acquisition. Obviously, one thing we must do is systematically explore the realm of consumer acquisition decisions and behavior.

Putting Consumption Back into Consumer Behavior. Although considerable work has been done on consumption, particularly by the home economists, this fact is not adequately reflected in the predominant theories and textbooks of consumer behavior. The work dealing with consumption itself must be given greater salience and be more tightly integrated with the existing consumer behavior literature.

And What about Disposition? The third major facet of consumer behavior, namely disposition, appears to have been completely neglected. This unfortunate state of affairs should be rectified for at least four reasons. First, from a purely scholarly perspective, disposition decisions deserve to be studied in their own right. The scientific approach requires that we study all aspects of a phenomenon, not just part of it. This is particularly important in this instance, since many disposition decisions have significant economic consequences for both the individual and society. Some disposition decisions (e.g., when and how to properly dispose of unused or outdated prescription drugs) may even have important health and safety ramifications. Second, on more practical grounds, much consumer behavior seems to be cyclical and a variety of marketing implications would most likely be forthcoming from an understanding of the disposition subprocess. Third, we are entering an age of relative scarcity in which we can no longer afford the luxury of squandering our natural resources. Understanding consumer disposition decisions and behavior is a necessary (and perhaps even logically prerequisite) element in any conservationist orientation. Finally, the study of consumer disposition could conceivably provide us with new "unobtrusive" (cf. Webb, Campbell, Schwartz, and Sechrest, 1966) macro indicators -- both leading and trailing -- of economic trends and the state of consumer attitudes and expectations. An empirical and taxonomic start toward exploring consumer disposition has recently been made (Jacoby, Berning, and Dietvorst, 1975).

Consumption and Production. Not only does the definition of consumer behavior have to be expanded and its various facets studied, but the relationship between consumption and production should be explored. As implied by the "leaf rake" example above, consumption and production are integrally related. Studies are needed which examine this interrelationship by considering both domains simultaneously.

Addressing Important Social Issues. Much of our subject matter is obviously a function of the pressing social issues which confront us. Or is it? Probably the most significant and potentially overwhelming problem that we as a nation -- and, indeed, the entire world -- have ever confronted is the emerging energy crisis. This problem dwarfs the Vietnam war in its heyday, the Arab-Israeli situation, our economic stability, misleading advertising, nutrition labeling, and any other problem you can think of. These other problems are all pimples compared to the rogue elephant that is the emerging energy crisis. Yet the total contribution on this subject appearing in the consumer literature amounts to fewer than five papers, empirical and non-empirical combined. Even in those subject areas where we have supposedly been devoting attention, our record is not much better. Regardless of quality, how much empirical work, as opposed to rhetoric, has actually been addressed to the issues of consumer behavior and the elderly, product safety, deceptive and misleading advertising, nutritional labeling, etc.? In general, far fewer than ten published studies exist on each of these topics. As Cohen noted, we need to stop toying with the trivial and start addressing that which is significant.

EXHORTATION

This compendium (summarized for students in Table 1) is by no means an exhaustive enumeration of all of the problems in and confronting consumer research. Among others, I have not touched upon the widespread tendency to over-generalize from our results, our relative inattention to cross-cultural comparisons, and the numerous avoidable or controllable problems which crop up in regard to the use of experimental designs in our research (cf. Campbell and Stanley, 1963; Rosenthal and Rosnow, 1969). The compendium does, however, cover what I view to be the most frequently occurring and severe problems which confront us.

TABLE 1

AN INCOMPLETE COMPENDIUM OF MAJOR PROBLEMS IN CONSUMER RESEARCH

Most of these have been previously discussed in print by one or more of us within the consumer research community. The problems are serious and bear periodic repeating. Some are easier to attend to than others. Hopefully, sensitization will produce awareness which, in turn, will provide the impetus for change.

Quite clearly, I think it's important to know that we don't know -- important so that we don't delude ourselves and others about the quality of our research and validity of our findings as providing sound bases upon which to make decisions of consequence. It is also important to recognize that we are in the midst of a consumer research information explosion and unless we take corrective action soon, we stand to become immersed in a quagmire from which it is already becoming increasingly difficult to extricate ourselves. Perhaps one of the things we most need to learn is that we must stop letting our existing methods and tools dictate and shackle our research. They are no substitute for using our heads. The brain is still the most important tool we have and its use should precede more than succeed the collection of data.

Because I have chosen to focus on our problems, the tone of this address has been rather negative. However, I would like to conclude on what I believe is a very legitimate positive note. Almost every one of the problems noted provides us with numerous opportunities to make meaningful contributions. Simply establishing the validity of a single one of our core constructs and shucking off our poor measures of this construct will require a substantial effort. Consider, also, the need to develop a process technology (incorporating appropriate process methods and statistics) for examining consumer behavior in terms of the dynamic, ongoing phenomenon that it is. As another example, we have need for reviews which not only summarize, but also critically evaluate the empirical evidence bearing on the adequacy of our concepts and measures. Numerous other opportunities become apparent from a consideration of our problems.

It is important to periodically take stock of where we are. However, it is probably more important that we give more than just lip service to these issues; we must begin doing something about them. The time is already overdue.

Having started with a story, let me end with one. Having just met each other for the first time, a young man and young woman are standing together in quiet conversation at a cocktail party. Without any prior indication, the young woman propositions the young man. "Fine. My place or yours?" came his reply. "Well, if it's going to be such a hassle, let's forget about it," said she. The point: it's really not such a hassle to improve consumer research. So why don't we get it on? [This address has hopefully been written so that advanced undergraduates would be able to comprehend the problems being described. If they do, and if they then begin to ask incisive and critical questions of their professors (e.g., "What does this really mean?" and "Why?"), progress is likely to come all the more rapidly. Let's hope so.]

REFERENCES

American Psychological Association, Standards for Educational and Psychological Tests and Manuals (Washington: American Psychological Association, 1966).

Alan R. Andreasen, "Attitudes and Consumer Behavior: A Decision Model," in L. Preston, ed., New Research in Marketing (Berkeley, Calif.: Institute of Business and Economic Research, University of California, 1965), 1-16.

Reinhard Angelmar, Gerald Zaltman, and Christian Pinson, "An Examination of Concept Validity," in M. Venkatesan, ed., Proceedings of the Third Annual Conference of the Association for Consumer Research, (1972), 586-93.

Johan Arndt, ed., Insights into Consumer Behavior (Boston, Mass.: Allyn and Bacon, Inc., 1968).

Raymond C. Battalio and John H. Kagel, "Experimental Studies of Consumer Demand Behavior: Towards a Technology of Making the Slutsky-Hicks Theory Technologically Applicable to Individual Behavior," in M. J. Schlinger, ed., Advances in Consumer Research: Vol. 2, (Proceedings of the Association for Consumer Research, Chicago: University of Illinois, 1975), 657-70.

Leo Bogart, "No Opinion, Don't Know, and Maybe No Answer," Public Opinion Quarterly, 31(Fall, 1967), 331-45.

Donald T. Campbell and Donald W. Fiske, "Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix," Psychological Bulletin, 56(1959), 81-105.

Donald T. Campbell and Julian C. Stanley, Experimental and Quasi-Experimental Designs for Research (Chicago: Rand McNally, 1963).

Kevin J. Clancy and David N. Kweskin, "T.V. Commercial Recall Correlates," Journal of Advertising Research, 11(April, 1971), 18-20.

Joel Cohen, "Presidential Address," untitled, Association for Consumer Research Newsletter, 3(January, 1973), 3-5.

John T. Cowles, "Food Tokens as Incentives for Learning by Chimpanzees," Comparative Psychological Monographs, 4(5, 1957).

Lee J. Cronbach, Essentials of Psychological Testing, 2nd ed. (New York: Harper & Bros., 1960).

Harry L. Davis, "Measurement of Husband-Wife Influence in Consumer Purchase Decisions," Journal of Marketing Research, 8(August, 1971), 305-12.

Victor H. Denenberg, "Prolixities A. Zeitgeister, B.S., M.S., PHONY," Psychology Today, 3(June, 1969), 50.

James F. Engel, David T. Kollat, and Roger D. Blackwell, Consumer Behavior (New York: Holt, Rinehart & Winston, 1968).

James F. Engel, David T. Kollat, and Roger D. Blackwell, Consumer Behavior, 2nd ed. (New York: Holt, Rinehart & Winston, 1973).

Paul L. Erdos, Professional Mail Surveys (New York: McGraw-Hill, 1970).

Flemming Hansen, Consumer Choice Behavior: A Cognitive Theory (New York: Free Press, 1972).

Edward W. Hart, Jr., "Consumer Risk-Taking for Self and for Spouse," unpublished Ph.D. dissertation (Purdue University, 1974).

Roger M. Heeler and Michael L. Ray, "Measure Validation in Marketing," Journal of Marketing Research, 9(November, 1972), 361-70.

John A. Howard and Jagdish N. Sheth, The Theory of Buyer Behavior (New York: Wiley, 1969).

Jacob Jacoby, "Toward Defining Consumer Psychology: One Psychologist's Views," Purdue Papers in Consumer Psychology, No. 101, 1969. Paper presented at the 77th Annual Convention of the American Psychological Association, Washington, D.C., (1969a).

Jacob Jacoby, "Personality and Consumer Behavior: How NOT to Find Relationships," Purdue Papers in Consumer Psychology, No. 102, (1969b).

Jacob Jacoby, "Personality and Innovation Proneness," Journal of Marketing Research, 8(May, 1971), 244-47.

Jacob Jacoby, "Opinion Leadership and Innovativeness: Overlap and Validity," in M. Venkatesan, ed., Proceedings of the Third Annual Conference of the Association for Consumer Research, (1972), 632-49.

Jacob Jacoby, "The Construct Validity of Opinion Leadership," Public Opinion Quarterly, 38(Spring, 1974), 81-89.

Jacob Jacoby, "Consumer Psychology as a Social Psychological Sphere of Action," American Psychologist, 30 (October, 1975), 977-87.

Jacob Jacoby, "Consumer and Industrial Psychology: Prospects for Theory Corroboration and Mutual Contribution," in M. D. Dunnette, ed., The Handbook of Industrial and Organizational Psychology (Chicago: Rand McNally, 1976a).

Jacob Jacoby, "Consumer Psychology: An Octennium," in P. Mussen and M. Rosenzweig, eds., Annual Review of Psychology 27(1976b), 331-58.

Jacob Jacoby, Carol K. Berning, and Thomas Dietvorst, "What about Disposition?" Purdue Papers in Consumer Psychology, No. 152, (1975).

Jacob Jacoby and Robert W. Chestnut, "Brand Loyalty Measurement: A Critical Review," monograph submitted for publication, (1975).

Jacob Jacoby, Robert W. Chestnut, Karl Weigl, and William Fisher, "Pre-Purchase Information Acquisition: Description of a Process Methodology, Research Paradigm, and Pilot Investigation," in B. B. Anderson, ed., Advances in Consumer Research: Vol. 3, (Proceedings of the Sixth Annual Conference of the Association for Consumer Research, Cincinnati, Ohio, October 30-November 2, 1975).

Jacob Jacoby and David B. Kyner, "Brand Loyalty vs. Repeat Purchasing Behavior," Journal of Marketing Research, 10(February, 1973), 1-9.

Jacob Jacoby, Donald E. Speller, and Carol Kohn Berning, "Brand Choice Behavior as a Function of Information Load: Replication and Extension," Journal of Consumer Research, 1(June, 1974), 35-42.

Leon B. Kaplan, George J. Szybillo, and Jacob Jacoby, "Components of Perceived Risk in Product Purchase: A Cross-Validation," Journal of Applied Psychology, 59(June, 1974), 287-91.

Harold H. Kassarjian, "Personality and Consumer Behavior: A Review," Journal of Marketing Research, 8(November, 1971), 409-18.

Charles W. King and John O. Summers, "Overlap of Opinion Leadership Across Consumer Product Categories," Journal of Marketing Research, 7(February, 1970), 43-50.

Carol A. Kohn and Jacob Jacoby, "Operationally Defining the Consumer Innovator," Proceedings, 81st Annual Convention of the American Psychological Association, 8(2, 1973), 837-38.

David T. Kollat, Roger D. Blackwell, and James F. Engel, "The Current Status of Consumer Behavior Research: Development During the 1968-1972 Period," in M. Venkatesan, ed., Proceedings of the Third Annual Conference of the Association for Consumer Research, (1972), 576-85.

David T. Kollat, James F. Engel, and Roger D. Blackwell, "Current Problems in Consumer Behavior Research," Journal of Marketing Research, 7(August, 1970), 327-32.

Arthur Kornhauser and Paul B. Sheatsley, "Questionnaire Construction and Interview Procedure," in C. Selltiz, M. Jahoda, M. Deutsch, and S. W. Cook, eds., Research Methods in Social Relations (New York: Henry Holt & Co., 1959), 546-87.

Alfred A. Kuehn, "Demonstration of a Relationship between Psychological Factors and Brand Choice," Journal of Business, 36(April, 1963), 237-41.

David B. Kyner, Jacob Jacoby, and Robert W. Chestnut, "Dissonance Resolution by Grade School Consumers," in B. B. Anderson, ed., Advances in Consumer Research: Vol. 3, (Proceedings of the Sixth Annual Conference of the Association for Consumer Research, Cincinnati, Ohio, October 30-November 2, 1975).

Ron J. Markin, Consumer Behavior: A Cognitive Orientation (New York: Macmillan Publishing Co., 1974).

Dominic W. Massaro, Experimental Psychology and Information Processing (Chicago: Rand-McNally, 1975).

Francesco Nicosia, Consumer Decision Processes (Englewood Cliffs, N. J.: Prentice-Hall, 1966).

Jum C. Nunnally, Psychometric Theory (New York: McGraw-Hill, 1973).

John W. Payne, "Heuristic Search Processes in Decision Making," In B. B. Anderson, ed., Advances in Consumer Research: Vol. 3, (Proceedings of the Sixth Annual Conference of the Association for Consumer Research, Cincinnati, Ohio, October 30-November 2, 1975).

Stanley L. Payne, The Art of Asking Questions (Princeton, N. J.: Princeton University Press, 1951).

John R. Platt, "Strong Inference," Science, 146(1964), 347-53.

Robert Plutchik, Foundations of Experimental Research (New York: Harper & Row, 1968).

Robert W. Pratt, Jr., "ACR: A Perspective," in S. Ward and P. L. Wright, eds., Advances in Consumer Research: Vol. 1, (Urbana, Illinois: Association for Consumer Research, 1974), 1-8.

P. S. Raju, Rabi S. Bhagat, and Jagdish N. Sheth, "Predictive Validation and Cross-Validation of the Fishbein, Rosenberg, and Sheth Models of Attitudes," in M. J. Schlinger, ed., Advances in Consumer Research, Vol. 2, (Proceedings of the Fifth Annual Conference of the Association for Consumer Research, Chicago: University of Illinois, 1975), 405-25.

Thomas S. Robertson, Innovative Behavior and Communication (New York: Holt, Rinehart & Winston, 1971).

Everett M. Rogers and David G. Cartano, "Methods of Measuring Opinion Leadership," Public Opinion Quarterly, 26(Fall, 1962), 435-41.

Robert Rosenthal and Ralph L. Rosnow, eds., Artifact in Behavioral Research (New York: Academic Press, 1969).

Mary Jane Schlinger, ed., Advances in Consumer Research: Vol. 2, Proceedings of the Association for Consumer Research, (Chicago: University of Illinois, 1975).

Claire Selltiz, Marie Jahoda, Morton Deutsch, and Stuart W. Cook, Research Methods in Social Relations (New York: Henry Holt & Co., 1959).

Jagdish N. Sheth, "The Future of Buyer Behavior Theory," in M. Venkatesan, ed., Proceedings of the Third Annual Conference of the Association for Consumer Research, (1972), 562-75.

Alvin J. Silk, "Response Set and the Measurement of Self-Designated Opinion Leadership," Public Opinion Quarterly, 35(Fall, 1971), 383-97.

Donald E. Speller, "Attitudes and Intentions as Predictors of Purchase: A Cross-Validation," Proceedings, 81st Annual Convention of the American Psychological Association, 8(2, 1973), 825-26.

Eugene J. Webb, Donald T. Campbell, Richard D. Schwartz, and Lee Sechrest, Unobtrusive Measures: Non-Reactive Research in the Social Sciences (Chicago: Rand-McNally, 1966).

Karl Weigl, "Perceived Risk and Information Search in a Gift Buying Situation," unpublished M.S. thesis (Purdue University, 1975).

David T. Wilson, H. Lee Mathews, and James W. Harvey, "An Empirical Test of the Fishbein Behavioral Intention Model," Journal of Consumer Research, 1(March, 1975), 39-48.

John B. Wolfe, "Effectiveness of Token-Rewards for Chimpanzees," Comparative Psychology Monographs, 12(5, 1936).

Peter L. Wright, "Research Orientations for Analyzing Consumer Judgment Processes," in S. Ward and P. L. Wright, eds., Advances in Consumer Research: Vol. 1, (Urbana, Illinois: Association for Consumer Research, 1974), 268-79.

Shirley Young, "Copy Testing Without Magic Numbers," Journal of Advertising Research, 12(February, 1972), 3-12.

P.S. Because her name appears nowhere else in print in connection with this conference, on behalf of the Association, I would like to express our warm and sincere appreciation to Miss Deborah Guethlein (Jerry Kernan's secretary) for all she did to make the 1975 Conference the success that it was.
