Presearch As Giraffe: an Identity Crisis

Jane Templeton, Ted Bates, Inc.
Jane Templeton (1976), "Presearch As Giraffe: An Identity Crisis," in NA - Advances in Consumer Research Volume 03, eds. Beverlee B. Anderson, Cincinnati, OH: Association for Consumer Research, Pages: 442-446.

[My first job in advertising was an act of faith on the part of the research director. He coined a title for my function and assigned me to the creative group. When I asked what I would do, he said: "You've done group therapy, haven't you? You should be good at focus groups. They'll tell you what they want to know, and you'll interview a group of consumers and come back with the answers." Aside from instructions on recruiting protocol, this was the sum of my instruction. Soon after, at an intra-agency presentation, I asked one of the art directors to do a visual flip-chart for me: two giraffes, behind a zoo enclosure; one of the giraffes has bent his neck over to read the sign, and is saying: "We're Giraffes." Many years later, having watched and listened to many other researchers' conceptions of "focus groups", I find I've come full circle and am back to the Giraffes.]

For the past ten months -- give or take a month -- I have had on my desk a personnel form requesting me to produce a job description. I've gotten as far as name and title, but the twelve lines (I've counted them) provided for the actual descriptive meat remain blank. Given the usual personal vanity about compressing one's vital and complex function to a simple objective account, and allowing for an innate tendency to procrastinate, ten months is still a long time. Once, out of impatience with the delay, I decided that a flat statement of the operations that fill the time might satisfy, and started with "I conduct and interpret group interviews". That leaves out meetings and white papers, but does cover the bulk of time and motion. However, it dawned on me before the crossing of the first "t" that it was a cop-out and I reached for the O-Pake. To say that I conduct and interpret group interviews is like a tailor saying that he threads needles.

It would be equating, e.g., a panel recruited for casting purposes with a fishing trip for motivational dispositions in a new product area. In casting interviews, we want to know how the panelists feel about particular product-connected areas, and to elicit this information, if possible, without ever touching on those exact questions (so as to preserve some degree of freshness and spontaneity when the studio interviewer asks them). We must also be able to gauge whether, for any particular panelist, the vivacious and responsive lady or gentleman we see in the group interview will continue to scintillate under production stresses, or will go blank or become afflicted with "peanut-butter-mouth" in the studio situation.

When we are trawling for ideas and needs associated with a new product, we are similarly wary of introducing key topics -- but not because we don't want them discussed. We try to encourage the respondents to raise the relevant issues, so we can discover what they are and where they most naturally "fit" into what these respondents currently think about. We try out various broad general areas of discourse with some likely relationship to our product -- working from general to increasing particularity from the responsibilities of the homemaker, say, to nutritional accountability, to breakfast, to breakfast juices .... until somebody's switch is tripped. But once this happens, we pursue the exploration of opinions, attitudes, and feelings as deeply as the group will go (and group tolerances differ) to find the points at which fundamental internal states intersect with behavior.

And what about creative reconnaissance, which is another challenge altogether? Where to put new business sallies which -- with half an eye to the theatrical use of the videotape -- call for on-the-spot interpretations? Manifestly, naming the tool isn't going to get me off the hook. I could as usefully say that "I do marketing research".

What is it in general, then, that distinguishes the functions of Presearch from the in baskets of the account research people down the hall, with whom we frequently share projects?

* There are the apparent, easily observable distinctions that Presearch is not (as the account-research people are) affixed to any particular account group, but floats from account to account as need dictates. And our "data-gathering" differs in that most of it is done in the agency's "fishbowl", or in similar one-way mirror facilities outside the city. But these distinctions are really extraneous.

* Paradoxically, the essential difference between what "we" do and what "they" do occurs during the one stage in the project when we all appear just the same way: shuffling papers, frowning, and getting cranky about interruptions. We are all "interpreting" data, but Presearch deals in a different kind of interpretive responsibility than that assumed by our partners at the other end of the floor.

For example, if someone is not copied on a survey report, has left his copy on the New Haven train, or takes issue with the conclusions it propounds, he can always go back to the raw data to seek his (her) own conclusions. Our "data" are only data to us, and our clients, in or out of the agency, pretty well know it. They know it because:

* We've worked hard to help our internal "clients" in the creative and account divisions to recognize the complexity of group interview data, and many of the agency clients now understand this as well.

* Also because some of them have learned by experience that our interview materials do not lend themselves to literal interpretation. A typed transcript of a session, for instance, often has the same relationship to the interview guide we originally set down that a 36-inch snake has to a yardstick -- it's hard to line up and check off. The topics suggested in the guide are raised, but not sequentially, and topics tend to be interwoven and reappear, sometimes with very different implications, throughout the interview. Further, the written material may contradict the transcript, or both may be belied by the behavioral observations or by the projective data.

The focus of this paper is interpretation of group material, rather than details of procedure. But at this point, a general summary of our particular m.o. -- what we do do and what we don't -- may prevent confusion. Presearch group interviews:

* Employ a relatively formal setting: a conference table in a large office within the agency (or similar facilities in other cities).

* Employ a one-way mirror and/or videotape, and overhead microphones which are connected to an audio recording system and to a loudspeaker in the observation room behind the mirror.

* Are conducted with panels of, typically, 10 - 12 consumers, using the following format:

- After panelists enter and seat themselves, and are given refreshments, there is a short Warm-Up, during which everyone including the moderators introduces him(her)self to the rest of the group and "ground rules" for the interview are stated.

- This is followed by a Predisposition discussion, which concerns itself with the contexts in which the product (we are to explore) is bought, used, and thought about. This will include general reactions to advertising in the product area.

- We then introduce materials: concepts, rough or finished creative executions, products, etc., and ask panelists first to write, privately, their immediate reactions to each of the materials, and then to discuss it. This pattern of "write, then talk" is continued until all materials have been exposed.

- After all materials have been discussed individually, there is usually a collective and comparative discussion of everything exposed to the respondents.

- The discussion ends with the wrap-up: a summary statement of what panelists think the group as a whole has expressed during the interview.

- Before leaving, panelists complete a brief demographic questionnaire and a self-administered projective instrument (drawings and stories).

In view of the several papers delineating rules and standards for moderating group interviews, to elaborate further on our particular interview protocol would be redundant and, worse, presumptuous. What I am describing is in no way intended as a prescription for "how to do group interviews". It is only a statement of how we, in particular, do group interviews.

But a clear understanding of the nature of our interpretive grist does require more detailed description of how we conduct our sessions. One of the essentials of our manner of approach is the avoidance of direct questions in all parts of the interview. This type of interchange happens as near to never as conversation permits, which accounts in part for the non-sequential, interweaving flow of discussion. Obviously a direct question procedure would be simpler both to moderate and to interpret. But we feel that answers to direct questions are dangerous. They cannot give us some of the kinds of information (motivational, qualitative) that we are seeking, but more importantly, they tend to provide answers which can seriously mislead us. We have defined four reasons for avoiding the hard-edged frontal question:

* Partly, we eschew direct questioning because this kind of interaction is boring. It produces emotional disengagement from the topic, for everybody concerned, to the point of automatic, unsearching answers. Feelings would still be going on, because feelings do operate constantly in people. Some of the feelings might even be strong (like the itch to get away or to gag the moderator), but the feelings each of the panelists might be experiencing could have low, intermittent, or no relationship to the stated topic in a direct question-and-answer interview. A clever and funny moderator can, of course, make even this format entertaining, but who's been interviewed?

* Also, parallel questioning of individuals in the group is a very efficient anti-personnel weapon, in the sense of group dynamics. Respondents get disengaged not only from the topic, but also from the other people in the group, so that interpersonal provocation, influence, and drift are no longer discernible.

* Too, asking a question directly does not allow the issue to emerge spontaneously. This deprives the moderator and the behind-the-mirror viewers of the opportunity to see the issue's relative salience, or to weigh and consider the company which that issue keeps: the ideas immediately associated with it, the feelings that accompany it, the language used to express it, etc.

* Finally, there is a less obvious reason for avoiding direct questioning; less obvious, but central to the different kind of interpretive responsibility assumed by the group-interview researcher: we avoid direct questions because of difficulties that have nothing to do with willingness to respond. Not that the subjects are unwilling to answer direct questions. Rather, they are quite willing (providing the questioning hasn't put them to sleep). In fact, they are willing regardless of whether or not they know the answers. I'm not suggesting malicious uncooperativeness on the panelists' parts, nor am I falling into the trap of "insulting the intelligence" of our respondents.

Remember that the kinds of questions typically addressed by this type of research are uncommonly complex. We go into our groups committed to come as close as possible to answering a brain-buster like:

"If this storyboard is produced as a television commercial, what reactions would the people in this panel be likely to have to it, and what would they do about it?"

Questions like these are not only complex, but also require attitude projection and behavioral conjecture that I'm not sure anyone can manage accurately by introspection. Simple past, present, and future tenses are hard enough to introspect about, heaven knows, but our questions -- if directly asked of group panels -- would have to be set in some obscure tense like the pluperfect conditional.

"If such and such were to happen, in the following situational context, then would you...?"

Questions of this sort are asked by the people who request the project. And they are duly set down in the "Background and Purpose" section of the final report. They are also, more often than not, answered -- by the person responsible for the project -- with limitations and caveats reflecting the size and probable biases of the sample. But they are neither asked of nor answered by the panelists themselves, directly. Because if we asked, they would answer. And they not only don't know the correct answers, they don't know that they don't know.

That statement, and the claim which is implicit in it, bear some thinking about. Assuming that panel members want to be cooperative (and we usually assume that), and that they have had twenty or more years to get a good fix on themselves, it is a lot to say that we can learn things about them in a two-hour group interview that even the brightest respondent, with the best intentions imaginable, can't tell us. But we can -- and do.

For starters, the things we principally want to know about them are things that they rarely think about concentratedly: buying and brand behavior, usage, product attitudes, etc. -- things which are negligibly important to them as ordinary citizens in the real world ... and are the very essence of our real world. They have little motivation to search themselves for better understanding of this sphere of their lives. Our motivational stakes in understanding these things, on the other hand, are very high indeed. So we'll try harder.

Also, we bring to the interview situation two kinds of expertise which the panelists don't have. We use our expertise in human behavior and marketing strategies to figure out how internal events like feeling, attention, and memory combine in the ultimate sacrament of reaching a hand into a pocket to buy our product. So when we talk about "interpreting" consumer reactions to get the answers to specific questions or to make specific recommendations, we mean something different from the "interpretation" of a questionnaire survey.

In interpreting survey data, the respondents' actual statements are treated as factual, and interpretations are based on measurements and comparisons of these "facts". In the case of group interview interpretation, "what they say" may be amended or modified, or in some extreme cases, even totally contradicted by the interpretation.

I don't mean that the panel's reactions are ignored. On the contrary, all of the respondents' communications are taken into account, both in aggregate and minute-to-minute. As we perceive it, at least three communicative channels are open either all of the time or intermittently through the interview, providing us with three kinds -- or levels -- of information:

* The level of public affirmation: This is what panelists actually say. It is their interpretation of what they think and feel, interacting with the social role they are trying to maintain, in conjunction with the expressed views of other panel members. We haven't asked them to go "on record" with flat "yes"es and "no"es, so they are not greatly concerned with consistency (or can't keep track of it). Neither are we. We watch motivational drift closely because an about-face is as useful to us as a to-the-death stance. The language they use in discussing what we are there to talk about is also a part of public affirmation, and since they have usually introduced the topic, their language is relatively uncontaminated by our expectations of how they will talk about it.

* The level of private acknowledgement: This is what they write, on our open-end questionnaire, as soon as they have been shown a commercial, or storyboard, or concept, and before any discussion takes place. If your eyebrows go up about the assumption of some independence between these two levels, I have no hard-headed answer. We do ask everybody to turn the written forms face down before open talk commences, and we begin discussion in a different way from the questionnaire, so that there is no exact parallel. But we can't erase the written answers from their minds. Exactly how or why it works, I'm not sure, but that it works, I'm pretty confident about. Written reactions very often sound as if somebody else came in to write them, when compared to the group interchange, and they rarely track very closely the direction of open discussion. We think that the written material reflects what they think "in solitary", as opposed to feelings they subsequently "own" under social pressure. Or perhaps the spoken comments mirror more what they want to be heard saying. Fitting written statements into our interpretive scheme, we use content plus indications of intensity of emotion or opinion (underlining, exclamation marks, heavy pressure) and involvement (how much is written, signs of personal projection of product use, etc.).

* The level of personal revelation: This is first of all, what we learn from their non-verbal communications: vocal range and variation, postural changes, facial expressions, constrictive or expansive demeanor [If you were worried about the independence of written/verbal responses, you may be beginning to question what we use as a behavioral baseline against which interview behavior stands out and can be interpreted. We establish this informally, for the group as a whole and for individuals, during the warm-up.]. Three respondents can say the same thing and express quite different inner states. Consider the phrase: "Frankly, it leaves me cold." Assume that one respondent making this comment spits it between clenched teeth, leaning forward, hands gripping the edge of the table; that a second panel member says it in a low voice without inflective melody, suppressing a yawn, leaning back with her hands slack in her lap; and that a third panelist says it almost laughingly, sitting forward hugging her arms, maybe with her hand across her mouth, swinging her chair, and with her eyes sparkling. It's up to the moderator to be aware of such behavioral distinctions, not only in the person speaking at any one time, but in the group as a whole. About a group, we may note its speed of warm-up, whether -- and when -- they are autonomous or look to each other or the moderator for guidance, the intensity of controversy, the tendency to return to -- or to avoid -- particular topics.

The figure drawings and stories we regularly ask panels to produce are also sources of the "personal revelation" level of data, and are used to elaborate, underline, or reconcile sketchy, ambiguous, or contradictory impressions.

It will have occurred to you, of course, that when we shift our attention from one level of response to another, during the interview, we are confounding the distinction between moderating and interpreting. And of course, you are right. The line is a little fuzzy. Some clearly interpretive operations may go on -- even out loud -- while the interview is in progress. This is a decision based on moderator-judgment. Interpreting is almost sure to raise the feeling intensity of the group, and may generate a certain amount of anxiety. The balance is difficult to describe. Placidity, comfort, and consensus are by no means the only -- or even necessarily the most desirable -- forms or outcomes of a group-exploration. But on the other hand, the moderator assumes an obligation to keep feeling-intensity and interpersonal contention within tolerable bounds, and to hold rein on behavioral chaos. If the group is judged to be firmly-knit enough to contain stress, and the individual(s) in question is (are) self-searching enough not to become overanxious, then it is interpersonally defensible -- and can also be extremely productive -- to offer interpretive probes like:

"You say that it leaves you cold, but I'm getting a very different message from your voice and manner -- that you really dislike it very much (or that something about it amuses or delights you). Can you clarify those different communications for me?"

When the respondent can assimilate this degree of conflict and has the self-awareness to resolve it, or alternately, when the climate of the group is supportive enough so that other respondents will rush in to help one of its members to better self-understanding, these interpretive interchanges are not only helpful to the moderator (and viewers of the session), but also provide the respondents with the uniquely heady achievement of insight. This explains the apparent paradox that some interviews which are apparently charged with ambivalent or unpleasant feelings and interpersonal strife are frequently perceived by respondents as joyous, uplifting experiences.

In the closing minutes, during the interview wrap-up, it is customary for us to invite interpretation from the panel. At this point, the moderator says something like: "You've all been sitting at the table just as I have. If you had the assignment of summarizing what this whole group felt about _________, what do you think you'd say?" Because our sessions tend to be up-tempo, interpersonally active, even sometimes tense and factional, respondents usually want a degree of closure, and will generally jump in -- often spotting things that neither moderator nor viewers have picked up.

The first formal, unadulterated "interpretive" act takes place during our "post-mortem": a debriefing session which is held immediately after the interview, and is attended by the moderators, any viewers who have hung on staunchly to the very end, and others who are concerned with the project and may or may not have been able to watch from behind the mirror. During this informal rehashing, we note the conspicuous themes of the discussion, mention spontaneous impressions of the behavioral flow, perhaps glance at the written responses and the drawings, and negotiate a general sense of the direction of the group, using the contributions of everyone who participates.

Now the major job of interpretation begins. This is the part of the job description that gives me the most trouble. It is also the part of the job that gives me the most trouble ... and the most personal "juice". Interpreting one or a series of group interviews places great demands on intuitive and organizational skill, and I never finish a report without the feeling that some small truth has been extracted from tons of pitchblende.

Obviously, the interpretation which is finally made will depend on the purpose for which the groups were scheduled and on the form in which results are to be communicated. But whatever the purpose and intended format of presentation, the act of interpreting our group interview data consists in the bringing together of disparate material (private, written reactions, interactive discussion, observed behavior, drawings and stories), weighing and sifting of all inputs, and organization of these multiple clues into an articulated set of premises and speculations.

To take one example, probably our most frequent assignments are aimed at assessing panelists' reactions to creative material. The way we collect data on the relative power or "goodness" of concepts or executions is very different from standard copy-testing procedures. We don't, for example, measure increments of interest and importance, nor do we construct scales. It follows that the kinds of answers we give to the questions asked of the project will be different kinds of answers, in meaning if not in labeling. For instance:

* Comprehension: We do include in reports some estimate of how well our panels seem to understand the message in a concept or execution, but with one important difference. We assume that in any communication the message that is received is as valid as the one transmitted. If most of the people we talk to "understand" copy to mean the same thing, and if they are positively affected by what they think it means, we say that this is good comprehension. We will say this, even if what panelists understood is not what the copy meant to say (of course noting in the report that there is a gap between what we think we said and what they think they picked up).

* Persuasiveness: We also milk the data for anything they can tell us about the extent to which panel members are convinced by our creative material that they should try the product. Precisely because we don't ask: "Would you buy?", we feel free to place some weight on spontaneous statements of buying intention, especially if they are supported by indications that the respondent has projected the buying or using of the product into his future expectations, e.g. by incorporating it into a larger plan: "I would buy it and use the money it would save me to go to the movies." We also watch -- and use -- things like switching from the conditional to the declarative mode: "I would buy it so that I will be able to save money and go to the movies with it." Facial expression and behavioral responses bear on the state of persuasion as well. Also, fanning contention and controversy allows us to observe how much respondents who are persuaded will argue with those who are not, or how stoutly they will resist the arguments of those definitely opposed.

* "Importance" and "Recall": We combine these ideas in a concept we have privately labeled "Embeddedness". This has to do with the extent to which other, subsequent life events which these particular respondents are likely to encounter will tend to evoke rather than to bury their recollection of the message (the "message" being what the respondent got out of it). If, e.g., the next time the respondent gets into a lather about rising prices, she'll probably remember our product, and if she's a type who padlocks her purse, and lathers often and intensely, then for this respondent, the message is highly "embedded".

"Embeddedness" also includes the quality of "identification" which panelists may feel with the idea, the situation, or the people depicted in the execution. Someone who sincerely gets a shock of recognition: "Hey, that's me" when he looks at a commercial is apt to be reminded of the execution every time the parallel situation occurs in his own life.

* Believability: This is something that shifts significance according to the product, the degree of belief or unbelief, and the reason for which it is believed or not. We report on it, when it seems to be important, but it has no permanently assigned evaluation. Clearly, a cosmetic product that is "too good to be true" may have created a very positive impression, while "some of your damned advertising doubletalk" in connection with bank services or a food's freedom from additives has not gone down too well.

* Liking: The romance which panelists have with the creative material is by far the untidiest of the creative responses we consider. Presumably, an ad or commercial that isn't sufficiently "liked" won't be allowed to deliver its message. On the other hand, we've all seen "adorable" campaigns that didn't move the product, and "outrageous" ones that did. We have to address "liking" in reports, because it is something that panels talk about, but there is no one standard rule for interpreting what "liking" a commercial or advertisement has to do with purchasing the product. We tend to think that strong feelings in either direction register more clearly and last longer than the most benign low level response. A comment like: "It's short and to the point and no-nonsense" -- whatever connotations are intended -- is nearly always a kiss of death.

As for how we go about combining our three layers of interview material into estimates of persuasiveness or embeddedness or whatever other judgments we have been asked to make, there are, again, no invariable rules. When written reactions are at odds with socially aired opinions or feelings, we can't assume a priori that one or the other is the more "true". We must take into account the type of product, the experience of using it, the probable impact of social pressure on the product category, and so forth. If the written comments are more positive than the beginning of the discussion, it may mean that respondents are drawn to the product, but must pay lip-service to consumer cynicism. If attitudes expressed in the group tend to become more positive as the discussion continues, we would probably take that growing acceptance at face value. On the other hand, initial private acceptance followed by public rejection may equally well show a quick disenchantment with advertising claims perceived as superficial or irrelevant, and in such a case, could indicate more intense net-aversion than when both written and verbal responses are moderately, uniformly negative.

There are no formulae. There are, alas, few precedents. Sometimes, long familiarity with a product or product group will give us a reassuring feeling of solidity, and some ready-made hypotheses for explaining contradictions. But even here, we must be alert for signs of change. There is also a kind of cumulative serendipity that permits us to recognize in one product category attitudes that are familiar from another, and to speculate about whether, e.g., a product that used to be almost purely cosmetic is beginning to shift to a medicinal image (since attitudinal patterns are suddenly similar to those habitually seen in drug product interviews).

By-and-large, however, once the interviews are done and the various interview products sifted through, we are alone with the data and whatever tools we have acquired to organize them into a final report. Partly, I am hindered in delivering an adequate job description because I feel I should append a resume. It would be difficult to find a vocation that challenged more completely the sum of knowledge and skills I can muster. My group interviews -- and the reports that summarize them -- are as they are because of my academic and clinical background, and have gotten better as my marketing background increased. Group interviews, generally, have astonishing flexibility, and can absorb whatever one brings to them.

Having begun with the premise that I was setting forth one way -- not the way -- to use group interview research, I find I do have something to say about how one "ought" to approach group interviews. One should approach them with as simple and clear an idea of the objectives as possible, and with an equally clear (though possibly less simple) inventory of one's own skills, blind spots, biases, and expectations. This will instruct the professional as to what he can do very well, where he must exercise caution, and -- when the data are before him -- help him to recognize a moderator-skew when he sees one.