Advances in Consumer Research Volume 19, 1992      Pages 570-578

THE ROLE OF POST-EXPERIENCE COMPARISON STANDARDS IN THE EVALUATION OF UNFAMILIAR SERVICES

Ann L. McGill, Northwestern University

Dawn Iacobucci, Northwestern University

ABSTRACT -

Previous research suggests that consumers evaluate service encounters by comparing them to prior expectations or other pre-experience comparison standards. The present research addresses the evaluation of services for consumers who lack the information or experience to construct detailed pre-experience comparison standards and so must rely on a more bottom-up approach to the evaluation. An exploratory study was conducted to examine the nature and content of consumers' expectations for unfamiliar services and their post-experience evaluations. Results of the exploratory study indicate that consumers may indeed evaluate novel or unfamiliar services on "post-experience comparison standards" suggested by characteristics of the service encounter itself. The article considers differences in evaluations for consumers who rely primarily on pre- versus post-experience comparison standards.

In a recent article on models of consumer satisfaction, Tse and Wilton (1988) note that "it is generally agreed that post-consumption consumer satisfaction/dissatisfaction (CS/D) can be defined as the consumer's response to the evaluation of the perceived discrepancy between prior expectations (or some other norm of performance) and the actual performance of the product as perceived after its consumption" (p.204; see also Day 1984). Models of CS/D have emphasized the role of a pre-experience comparison standard (Bearden & Teel 1983; Cardozo 1965; Day 1977; Liechty & Churchill 1979; Miller 1977; Oliver 1977, 1980; Woodruff, Cadotte & Jenkins 1983) and the extent to which this pre-experience standard is disconfirmed (Anderson 1973; Bearden & Teel 1983; Cadotte, Woodruff & Jenkins 1987; Day 1977; Howard & Sheth 1969; LaTour & Peat 1979; Maddox 1981; Oliver 1977, 1980; Olshavsky & Miller 1972; Swan & Combs 1976; Swan & Trawick 1981; Woodruff, Cadotte & Jenkins 1983). Thus, a large body of research on consumers' satisfaction or dissatisfaction with products may be described using the so-called "disconfirmation paradigm".

The post-consumption evaluation of service quality is frequently asserted to be more difficult than the evaluation of products, primarily because of the intangibility of services, the heterogeneity across service encounters and the inseparability of production and consumption (e.g., Gronroos 1982; Lehtinen & Lehtinen 1982; Lewis & Booms 1983; Parasuraman, Zeithaml & Berry 1985; Sasser, Olsen & Wyckoff 1978). Nevertheless, the evaluation of services, like the evaluation of products, is said to involve a comparison of expectation with performance:

Service quality is a measure of how well the service level delivered matches customer expectations. Delivering quality service means conforming to customer expectations on a consistent basis (p.100, Lewis & Booms 1983).

The utility of this view of the evaluation process is demonstrated by the work of Parasuraman et al. (1985), which extends the notion of disconfirmation to a more general model of service quality. This model has direct managerial implications and provides a structure for future research. Specifically, these authors propose five "gaps," including the gap between consumers' expectations and the actual service delivered, that determine service quality. For example, Parasuraman et al. posit that service quality may be influenced by the gap between management perceptions of consumers' expectations and the expectations themselves, between management perceptions of consumers' expectations and service quality specifications, between service quality specifications and the actual service delivered, and between service delivery and external communications to the consumer. Thus, the idea that consumers' evaluations are determined by the gap between consumers' expectations and their actual service experiences is intuitively appealing, provides a useful basis for additional research on the determinants of service quality, and carries with it the benefit of a large body of literature on CS/D that may be applied to understanding how people evaluate service quality.

A question that has not yet been addressed in the literature, however, and one that may become increasingly important as the size of the service sector continues to grow in the North American economy, concerns how the disconfirmation paradigm may be applied to novel or unfamiliar service encounters. In particular, it is not clear how the notion that people compare their experiences to a pre-experience comparison standard may be applied to the evaluation of services for which the consumer has little information or experience to generate a meaningful expectation--i.e., when so little is known by consumers about the class of services that their expectations are imprecise and without much detail. Examples of such services may include consumers' initial experiences with visiting an attorney, seeing a therapist, getting a fitness evaluation, hiring a caterer, taking a marketing class, or visiting a career counselor. In each of these cases, novice consumers may have a general idea of what they hope to get out of the service and a vague sense of what the experience might be like. However, it is likely that consumers' expectations are quite impoverished relative to the actual experience, containing far fewer attributes than consumers will in fact notice in experiencing the service.

Unfortunately, little information is provided in the marketing literature on the nature or content of consumers' expectations for unfamiliar services. Past studies have either manipulated subjects' expectations (e.g., Olshavsky & Miller 1972) or focused on relatively familiar services (e.g., Parasuraman et al. 1985; Cadotte, Woodruff, & Jenkins 1987). The question, then, is how do consumers evaluate novel service experiences? We draw on the marketing and psychology literatures, which suggest three possible answers to this question.

The first possibility is that consumers evaluate novel experiences on only those few attributes included in their expectations. For example, a novice consumer may expect attorneys to have nice offices, to dress in suits, and to use unfamiliar jargon. On visiting an attorney for the first time, the novice consumer may limit the evaluation to these three attributes. This view preserves the importance of the pre-experience standard of comparison as described in the literature on CS/D and carries with it the plausible assertion that novice consumers consider very few attributes in their evaluations because they simply don't know what to look for. This approach would keep the evaluation task simple for the novice.

A second possibility is that novice consumers cope with their inexperience by shifting to a higher level of abstraction. Research in marketing indicates that in choosing between so-called "noncomparable alternatives," which share few basic features, consumers shift to a higher level of abstraction to effect the comparison (Johnson 1984). Thus, in choosing between a stereo and a vacation, consumers may compare the alternatives on abstract features such as practicality, opportunity for self-improvement, or length of time over which the alternative will provide enjoyment. The application of this idea to the evaluation of unfamiliar services yields the suggestion that although novice consumers may not have any directly comparable experiences to help them construct an expectation for the service, they may have experiences that are comparable at a higher level of abstraction. For example, consumers who haven't been to an attorney might have trouble constructing expectations the way experienced people might--e.g., on detailed features specific to attorneys such as knowledge of relevant case law, education, and ability to predict the opposing attorney's strategy. Novice consumers may nevertheless construct expectations based on abstract features such as friendliness, self-confidence, and articulateness. It is possible also that this second approach based on abstract features may provide a way to reconcile Westbrook and Reilly's (1983) view of CS/D with the disconfirmation paradigm. Westbrook and Reilly suggest that people determine satisfaction not by comparing their experiences to a pre-experience comparison standard but instead by noting how well the alternative filled their needs or wants. At a very high level of abstraction, consumers may compare the outcome of their experience to the level of the utility or value that they had expected to receive.

A third possibility that we would like to propose is that people evaluate services by generating a comparison case after the fact. Novice consumers may be unable to generate detailed expectations in advance, but upon experiencing the service, they may see how it could have been otherwise. The service is evaluated as suggested by the disconfirmation paradigm--i.e., on the gaps between what was and what might have been. The difference is that the comparison standard is generated during or after the consumption experience, not before. Hence, this third view suggests a more bottom-up versus top-down approach to the evaluation of services.

Thus, we identify three possible approaches to the evaluation of unfamiliar services--i.e., based on a pre-experience comparison standard at a low level of abstraction, based on a pre-experience comparison standard at a high level of abstraction, or based on a post-experience comparison standard. Further, all three approaches may be present in a single evaluation. For example, consumers may evaluate the movie BATMAN on detailed features included in their expectations (e.g., "I expected Robin to be in it"), on abstract features derived from experiences with other action-adventure movies ("I expected it to be high-energy and fast-paced"), and on detailed features suggested after the fact ("It ruined it for me when they showed the Batmobile indestructible but the Batplane something you could shoot out of the sky with a handgun").

As this example suggests, evaluations based on post-experience comparison standards may be based on different types of attributes or features than evaluations based on pre-experience comparison standards. In particular, in contrast to the suggestion that consumers evaluate service encounters at a high level of abstraction, comparison to a post-experience standard suggests an evaluation based on specific details. An example may be student assessments of courses and instructors. Although frequently ingenuous in their expectations (e.g., "I hope to gain insights to make me a better manager"), student course evaluations often appear based on minute details (e.g., "use more subheadings in lecture outlines", "make more frequent eye contact with students"). Further, these details are not likely to be those included in an expectation generated in advance, but rather become available to the consumer only by virtue of having experienced the service. Thus, an important advantage of this third view, which proposes use of post-experience comparison standards, is its ability to explain how features that are not included in consumers' expectations influence the evaluation.

The purpose of the present research is to examine this third approach, based on post-experience comparison standards, in greater detail. An important first goal for this research was to gather evidence for the evaluation of services on features that were not included in consumers' expectations but which were suggested by the service encounter itself. To accomplish this goal, we conducted an exploratory study designed to examine the nature of consumers' expectations for unfamiliar services and the basis for consumers' post-experience evaluations. In keeping with the three alternatives proposed for the evaluation of novel or unfamiliar services, we were particularly interested in determining whether these expectations and evaluations were based on many or few attributes and whether these attributes were expressed at a high or low level of abstraction.

After presenting results for the exploratory study, we interpret differences in the evaluation of services based primarily on pre- versus post-experience comparison standards in light of recent research by Kahneman and Miller (1986). These authors provide a theoretical basis for understanding a) how people may construct post-experience comparison standards and b) how evaluations based on post-experience comparison standards may differ from those based on pre-experience comparison standards. We present formal propositions regarding the nature of these differences and suggest methods for their examination.

STUDY

Method

Subjects. Subjects were 21 members of the Northwestern University community who had enrolled in an introductory workshop on a computer spreadsheet package. The workshop was conducted by the computer services division of the university and was designed for individuals with little or no prior computing experience. Subjects were given $1.00 "as a small token of our appreciation" for agreeing to fill out the questionnaire.

Materials. Subjects answered a pre-experience and post-experience questionnaire. The pre-experience questionnaire asked subjects to describe the upcoming workshop in terms of what they expected to get out of the experience ("What are the main reasons you are planning to attend this workshop?") and in terms of how they expected to evaluate the workshop ("How will you evaluate this workshop? What factors will affect your satisfaction with this workshop?").

The post-experience questionnaire asked subjects to describe the workshop. Subjects were asked to "consider such things as what it was like to be in the workshop, how the instructor was, what you learned, and so forth" in describing the experience. Subjects were also asked to indicate, "What factors influenced your assessment of the workshop? What affected your satisfaction or dissatisfaction?" Finally, subjects were asked to evaluate perceived performance of the workshop ("...your objective assessment of the quality of the workshop, regardless of your personal level of satisfaction with the service..." on a 7-point scale ranging from "very poor" to "excellent" quality), subjective disconfirmation ("...how close the workshop came to satisfying your expectations for the service..." on a 7-point scale ranging from "very much poorer" to "very much better" than expected), and satisfaction ("...how satisfied are you with the workshop..." on a 7-point scale from "very dissatisfied" to "very satisfied"). These scales were adopted from Tse and Wilton (1988). Support for these scales can also be found in Churchill and Surprenant (1982) and Oliver (1980).

Procedure. Subjects were contacted before the workshop and asked if they would be willing to fill out a questionnaire on the upcoming experience. Subjects were told that the purpose of the questionnaire was "to help understand how people evaluate experiences such as workshops and to assist the computing center in improving their services." Written instructions for the questionnaire were as follows:

Thank you for participating in this study. Please accept the dollar as a small token of our appreciation.

On the following pages, we ask you to describe your expectations for various services including the workshop in which you are about to participate. Please describe your expectations in as much detail as you can, noting both major and minor features. Also, try to include even "obvious" features in your description. For example, you might expect an optometrist to have a chart of letters on the wall and to evaluate your vision by asking you to read these letters. Even though this sort of thing may seem almost too common to mention, we ask that you please include such features in your description.

Please note that there are no right or wrong answers to any of these questions. For coding purposes, please write down the last 4 digits of your social security number: .

The post-experience questionnaire, which subjects were not told to anticipate, was administered at the end of the workshop. Instructions for the post-experience questionnaire informed subjects that "on the following pages we ask you to describe and evaluate the workshop you attended." Subjects were again asked to provide the last four digits of their social security number for coding purposes. Subjects spent roughly 10 minutes filling out each of the pre-experience and post-experience questionnaires.

Results. Subjects' expectations and descriptions of the service in the pre-experience and post-experience questionnaires were coded as "abstract," meaning they could be applied to any workshop, class, or service, or as "specific," meaning they applied to the specific characteristics, outcomes, or content of the workshop on the particular spreadsheet program. Responses were further categorized as referring to a) the "outcome" of the service--i.e., what the person hopes to gain from the workshop--or b) the "process" by which the service was delivered--i.e., the materials in the course, the behavior of the instructor, and so forth. Responses that could not be classified into any of the above categories were coded as "other." Figure 1 provides an example of each category of response. Table 1 displays the mean number and mean proportion of responses in each category for the pre-experience and post-experience questionnaires.

FIGURE 1

EXAMPLES OF FEATURES PROVIDED ON THE PRE-EXPERIENCE AND POST-EXPERIENCE QUESTIONNAIRES BY CATEGORY
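To make the coding and tallying procedure concrete, the following is a minimal sketch in Python; the category labels and the coded responses below are invented placeholders for illustration, not the study's data. It shows how each subject's coded features could be tallied and then averaged across subjects into the mean numbers and mean proportions of the sort reported in Table 1.

from collections import Counter

# Categories used to code free-response features: abstractness x referent, plus "other".
CATEGORIES = ["abstract-outcome", "abstract-process",
              "specific-outcome", "specific-process", "other"]

# Hypothetical coded responses for three subjects on one question.
coded = {
    "subj01": ["specific-outcome", "specific-outcome", "abstract-process"],
    "subj02": ["abstract-outcome", "abstract-process", "other"],
    "subj03": ["specific-outcome", "abstract-outcome"],
}

def proportions(features):
    """Proportion of a subject's features falling into each category."""
    counts = Counter(features)
    return {cat: counts.get(cat, 0) / len(features) for cat in CATEGORIES}

per_subject = {subj: proportions(feats) for subj, feats in coded.items()}

# Mean number and mean proportion of responses per category across subjects.
n_subjects = len(coded)
for cat in CATEGORIES:
    mean_n = sum(feats.count(cat) for feats in coded.values()) / n_subjects
    mean_p = sum(p[cat] for p in per_subject.values()) / n_subjects
    print(f"{cat:17s} mean number = {mean_n:.2f}  mean proportion = {mean_p:.2f}")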

Results confirm that subjects evaluated the workshop on attributes that were not included in their expectations but which were suggested by the service encounter itself. The first indication of this pattern is that subjects provided relatively more detailed descriptions in the post-experience questionnaire than in the pre-experience questionnaire (t(19) = 3.77, p < .001, matched t-tests). This difference in detail may also be attributed in part to a more general tendency to provide more detailed accounts for past versus future events (Bavelas 1973).

Additional evidence for the generation of post-experience comparison standards comes from examination of the nature of subjects' expectations and post-experience evaluations. We consider first the pre-experience questionnaire. When asked to describe their reasons for attending the workshop (Q1--see Table 1), subjects overwhelmingly provided reasons related to the outcome of the service (t(20) = 14.06, p =.000, matched t-tests). Further, subjects tended to mention specific outcome features in greater proportion than abstract outcome features (t(20) = 2.74, p = .013). That is, subjects for the most part indicated their reasons were related to how they would use the specific course material, for example, "everyone else in my office uses [this spreadsheet package] and I need to be compatible."

When asked how they expected to evaluate the upcoming workshop (Q2), subjects provided a different mix of features. Subjects again mentioned outcome features in greater proportion than process features, but for this question the difference was only marginally significant (t(20) = 1.77, p = .092). The narrowing of this difference was due to relatively more frequent mention of abstract process features (t(20) = 3.15, p = .005) and relatively less frequent mention of specific outcome features (t(20) = 2.67, p = .015) compared to the preceding question on reasons for attending the workshop. The proportion of abstract outcome and specific process features did not change significantly (p's > .25). Thus, although subjects emphasized specific outcome features when asked how they would evaluate the workshop, they described their evaluation standard more frequently in terms of abstract features, particularly abstract process features. This outcome suggests that subjects intended to evaluate the workshop using general cues to quality that could be applied across service encounters of this sort--e.g., on the professional manner of the instructor or the general appearance of the handout material.

Whatever subjects' expectations or intended evaluation standards, the post-experience questionnaire indicated a wholly different response to the experience. In contrast to the pre-experience questionnaire, subjects described the workshop (Q3) primarily in terms of process features (t(20) = 6.17, p = .000), dividing their comments almost equally between abstract and specific process features (t(20) = .90, p = .377).

Subjects' listing of features that ultimately influenced their satisfaction with the experience (Q4) revealed a similar pattern. Process features were more commonly mentioned (t(19) = 5.78, p = .000), with specific process features (e.g., "the assistants were distracting when they walked behind me") mentioned in somewhat greater proportion than abstract process features (e.g., "there was plenty of time for questions"), although this difference was not significant (t(19) = 1.59, p = .13).

In contrast to what might have been expected from the literature on the disconfirmation paradigm, subjects' listing of features that affected their level of satisfaction in the post-experience questionnaire (Q4) was not entirely consistent with the listing of factors that they expected to affect their level of satisfaction in the pre-experience questionnaire (Q2). The proportion of abstract process features did not change significantly (t(19) = .32, p = .756). However, the proportion of abstract outcome features and specific outcome features dropped significantly (abstract outcome features, t(19) = 2.28, p = .034; specific outcome features, t(19) = 2.85, p = .01), while the proportion of specific process features increased significantly (t(19) = 6.61, p = .000). The increase in proportion of specific process features is particularly noteworthy; whereas the proportion of these features was not reliably greater than zero in the pre-experience questionnaire (Q2--t(21) = 1.00, p = .17), specific process features assumed a strict majority in the actual evaluation (Q4--53%). The frequency of mention of specific process features in the post-experience questionnaire indicates that the inexperienced consumers who participated in the present study were satisfied or dissatisfied with the workshop because of small details--details that they did not mention as part of their expectations for the service and which it would have been difficult to imagine them mentioning before experiencing the service, given the particular nature of these features.
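The comparisons just described are matched (paired) t-tests computed on per-subject proportions. A minimal sketch of such a test, assuming Python with scipy and using invented per-subject proportions rather than the study's data, is:

import numpy as np
from scipy import stats

# Hypothetical per-subject proportions of specific process features mentioned
# in the pre-experience (Q2) and post-experience (Q4) questionnaires.
q2 = np.array([0.00, 0.10, 0.00, 0.20, 0.00, 0.00, 0.25, 0.00])
q4 = np.array([0.50, 0.40, 0.60, 0.55, 0.33, 0.50, 0.45, 0.60])

# Matched t-test: did the proportion of specific process features change from Q2 to Q4?
t_stat, p_value = stats.ttest_rel(q4, q2)
print(f"t({len(q2) - 1}) = {t_stat:.2f}, p = {p_value:.3f}")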

TABLE 1

MEAN NUMBER AND MEAN PROPORTION OF RESPONSES BY CATEGORY

We also examined the relationships among the perceived performance, subjective disconfirmation, and satisfaction scales for evidence of the use of pre-experience and post-experience comparison standards. As the current CS/D literature would predict, subjects' ratings of their satisfaction with the workshop were highly correlated with their ratings of the extent to which their expectations were met by the workshop--i.e., their ratings of subjective disconfirmation (r = .599, p < .003). Furthermore, ratings of subjective disconfirmation were only partly related to ratings of perceived performance (r = .360, p < .064), a result that is sensible if disconfirmation is a function of both the objective assessment and subjects' expectations. However, we also note that ratings of perceived performance were an even better sole predictor of satisfaction (r = .696 vs. .599) than was the subjective disconfirmation rating. That is, although perceived performance and expectations together are theoretically expected to predict satisfaction better through the disconfirmation process, the perceived performance ratings alone predicted satisfaction more clearly, and expectations, as evidenced through the disconfirmation rating, added only noise to the prediction of satisfaction.
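The scale analyses described above are zero-order Pearson correlations among the three 7-point ratings. A minimal sketch, again assuming Python with scipy and invented ratings rather than the study's data, would be:

import numpy as np
from scipy import stats

# Hypothetical 7-point ratings (invented placeholders, not the study's data).
performance     = np.array([6, 5, 7, 4, 6, 5, 7, 3])  # perceived performance
disconfirmation = np.array([5, 5, 6, 3, 6, 4, 7, 3])  # worse/better than expected
satisfaction    = np.array([6, 4, 7, 3, 6, 5, 7, 2])

# Zero-order correlations paralleling those reported in the text.
pairs = {
    "disconfirmation vs. satisfaction": (disconfirmation, satisfaction),
    "performance vs. disconfirmation":  (performance, disconfirmation),
    "performance vs. satisfaction":     (performance, satisfaction),
}
for label, (x, y) in pairs.items():
    r, p = stats.pearsonr(x, y)
    print(f"{label}: r = {r:.3f}, p = {p:.3f}")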

Discussion. One of the most striking aspects of the exploratory study was the difference in features mentioned in the pre-experience and post-experience questionnaires. Subjects' post-experience evaluations appeared to be influenced primarily by specific process features, although these features were rarely mentioned in the pre-experience questionnaire. This pattern of results suggests use of a post-experience comparison standard in which consumers' satisfaction is determined by first experiencing the service and then imagining how it could have been otherwise. The implications of this finding are discussed in the following sections.

However, instead of viewing the large proportion of specific process features in the post-experience questionnaire as inconsistent with the pre-experience questionnaire, we also considered the possibility that specific features were cited merely as examples of the abstract. For example, we considered the possibility that subjects mentioned specific process features (e.g., "the instructor told stupid jokes") in the post-experience questionnaire as evidence for the abstract standard articulated in the pre-experience questionnaire (e.g., "professionalism of the instructor"). Future research is needed to evaluate this view further, but it is our belief that the specific process features affected the consumers' satisfaction directly and not as evidence for a more abstract standard. This belief is based on the previously stated theoretical concerns and on closer scrutiny of subjects' post-experience questionnaires wherein subjects appeared to describe their satisfaction as a direct consequence of the specific process features (e.g., "I was annoyed because the instructor told stupid jokes"). Subjects did not articulate any relationship between specific process features and an abstract standard, for example in comments of the form "the instructor told stupid jokes which just didn't seem professional."

Results of the exploratory study also suggest that consumers may experience considerable difficulty in translating their reasons for purchasing a service into an effective evaluation standard. Subjects in the present study articulated their reasons for attending the workshop primarily in terms of specific outcome features. However, subjects based their evaluations of the workshop more frequently on specific process features. Subjects may have adopted this approach because while process features were immediately available, outcome features may be more difficult to assess until some later date. This reasoning suggests that consumers may evaluate services on different classes of features at different points in time. Nevertheless, the scant proportion of outcome features in the post-experience questionnaire is surprising given the importance of these features in the pre-experience questionnaire. One might have expected relatively more frequent comments in the post-experience questionnaire of the form, "I liked (disliked) the workshop because I think I learned a lot (very little)." The particulars of the experience appear, however, to have dominated subjects' evaluations.

DIFFERENCES IN EVALUATION FOR PRE- VERSUS POST-EXPERIENCE STANDARDS

While previous research posits that consumers evaluate services by comparing their actual service experiences to pre-experience comparison standards, results of the exploratory study suggest that consumers may in some cases evaluate the actual service experience relative to a standard of comparison that is generated after the fact. In this section, we examine differences in the evaluation of services for consumers who compare the experience to a pre-experience comparison standard versus those who rely primarily on a post-experience comparison standard.

Attribute Importance

Differences in the evaluation may derive from how post-experience comparison standards are constructed, as compared with how expectations or other pre-experience comparison standards are constructed. Research by Kahneman and Miller (1986) suggests the likely characteristics of a post-experience comparison standard. These authors note that in constructing an alternative to a stimulus, people will hold some features constant while they let other features vary. Kahneman and Miller propose that "the mental representation of a state of affairs can always be modified in many ways, that some modifications are much more natural than others, and that some attributes are particularly resistant to change" (p.142-43).

Kahneman and Miller further suggest the sorts of features that are likely to be held constant ("immutable features") as opposed to those that are likely to be allowed to vary ("mutable features"): "a plausible hypothesis is that the essential features that define the identity of the stimulus are most likely to be maintained as immutable. This hypothesis has surprising consequences: it entails that judgments of a stimulus evaluated in isolation will tend to be dominated by features that are not its most central" (p.141). The implication of this proposal for the present research is that when a service encounter is evaluated in isolation, for example when the consumer has no previous experience or does not have the ability or inclination to produce a well-formed expectation, the consumer may construct a comparison standard post hoc. The post-experience comparison standard is proposed to resemble the service experience on its more central, more important attributes but to differ on its less central, less important attributes, thereby affording the less important attributes greater influence in the evaluation. By contrast, when the service encounter is evaluated in context, for example, when it is compared to an expectation, prior service encounter, or other pre-experience standard, evaluations should derive primarily from differences on the more central, more important attributes (LaTour & Peat, 1979; Tse & Wilton, 1988). The following proposition reflects this reasoning:

P1: Evaluations of a service relative to a post-experience comparison standard will tend to be influenced by less central, less important features as compared to evaluations relative to a pre-experience comparison standard.

Thus, in evaluating an attorney, for example, consumers who rely on a post-experience comparison standard may evaluate the attorney relatively more on how they were greeted at the office, which they imagine could have been more cordial, than on the attorney's level of experience, which they took for granted (and so treated as immutable). However, these same consumers may independently concede that the level of experience of an attorney is more central and more important to assessing overall quality than the attorney's manners. By contrast, consumers who approached the experience with a pre-experience comparison standard, although put off by the attorney's manners, may nevertheless evaluate the attorney to a greater extent according to his/her level of experience as prescribed by the relative importance of these attributes.

Attribute Type

Our second proposition concerns the types of features that are likely to influence the evaluation. As noted above, researchers have made the distinction between "process features," which concern how the service is delivered, and "outcome features," which concern the benefits for which the service is purchased (e.g., Parasuraman et al. 1985). For example, process features for dental service may concern the pleasantness of the interactions with the dentist and hygienist, the promptness and cleanliness of the office, and the comfort or discomfort of procedures. Outcome features might concern, for example, clean teeth, filled cavities, and a sense of well-being.

We propose that process features may be further decomposed into two types. "Dynamic process features" refer to the behavior of the service provider toward the consumer--e.g., courtesy, friendliness, and responsiveness. "Static process features" refer to fixed or semi-fixed characteristics of the service provider that do not vary from customer to customer--e.g., age, gender, education, level of knowledge of the dentist--and to the fixed characteristics of the production process itself--i.e., the formal steps or procedures involved in the delivery of the service. An example of the latter would be the use of a dental assistant to take x-rays and to perform the initial examination.

We propose that dynamic process features may be treated as relatively more mutable in constructing imagined alternatives to an experience than static process features. For example, in constructing a post-experience comparison standard for a visit to the dentist, novice patients (for example, those who haven't been to a dentist since, say, childhood) may be more likely to imagine the dentist being less gruff in response to their questions than to imagine the dentist, rather than the assistant, being responsible for the x-rays. Patients may be similarly disinclined to imagine the dentist with different traits, for example, younger or of a different gender. Thus, patients may base their evaluations more on the behavior of the dentist than on the specific traits of the dentist or on the structure of their visit:

P2: Evaluations of a service relative to a post-experience comparison standard will tend to be influenced more by the interpersonal behavior of the service provider than by traits of the service provider or characteristics of the service itself as compared to evaluations relative to a pre-experience comparison standard.

Number and Consistency of Attributes

Two other differences concern the consistency with which a standard of comparison is applied in the evaluation of a service and the number of attributes that are used across different evaluations. A pre-experience comparison standard implies, by definition, that the attributes used in the evaluation will not shift depending on the features of the particular service encounter. In addition, the attributes used in the evaluation should not shift markedly from service encounter to service encounter. By contrast, a post-experience comparison standard implies, by definition, a standard that is constructed in response to the features of a given service encounter. Thus, each service encounter creates its own standard of evaluation and the standard may shift from service encounter to service encounter. For example, students who rely on post-experience comparison standards may not evaluate instructors on a consistent set of attributes such as knowledge of the course material, organization, and preparedness, but instead may evaluate each instructor on a separate set of attributes depending on the alternatives that were available for each experience. Thus, consumers with less experience, who generate standards of comparison after having experienced the service, may be especially difficult to please or to predict because their evaluations are based on largely inconsistent and varied sets of attributes.

A similar view is offered by those concerned with the cognitive complexity of individuals. Research in this area suggests that when subjects are asked, for example, to make pairwise similarity judgments, more sophisticated processors may be better able to produce and use consistently the dimensions required to accommodate a set of objects (Scott, Osgood, & Peterson, 1979). Less sophisticated processors, by contrast, may not be able to generate the general dimension needed to accommodate the entire set and so instead make do with a series of fractionated similarity judgments on less general dimensions (Malhotra, Pinson, & Jain, 1988). Thus, evaluations by those who generate a pre-experience comparison standard may differ from evaluations by those who generate the comparison standard post hoc on the number and consistency of attributes:

P3: Variability in the attributes used to evaluate service encounters will tend to be greater among consumers who rely on a post-experience comparison standard compared to consumers who rely on a pre-experience comparison standard; and

P4: Consumers who evaluate services relative to a post-experience comparison standard will tend to base their evaluations on a larger set of attributes than consumers who evaluate services relative to a pre-experience comparison standard.

Quality of the Evaluation

Finally, differences in the quality of the evaluation are expected for consumers who rely on pre- versus post-experience comparison standards. Past research indicates that in constructing an alternative to a stimulus, people are more likely to change negative features to positive than positive to negative; the goal appears to be to contrast the actual stimulus with an ideal (Kahneman & Miller, 1986; Read, 1985). This method of constructing a post-experience standard of comparison implies a typically harsh evaluation: the stimulus must always be less desirable than the post-experience standard. By contrast, when the actual experience is compared to a pre-experience standard of comparison that is not an ideal, the evaluation may be positive, negative, or neutral, depending on the relative values of attributes of the experience and the standard (e.g., Woodruff, Cadotte & Jenkins, 1983). Hence, this reasoning implies that evaluations by consumers who construct the comparison standard after the fact will differ from those by consumers who construct the standard in advance:

P5: Evaluations of a service relative to a post-experience comparison standard will tend to be less favorable as compared to evaluations relative to a pre-experience comparison standard.

CONCLUSION

Results of the exploratory study suggest that in some cases, people may evaluate services in a more bottom-up ("data-driven") approach than has been suggested previously in the literature. The present research has focused on those who find it especially difficult to develop a detailed top-down strategy for evaluation because they lack the necessary information and experience. It seems plausible, however, that even knowledgeable customers would sometimes rely on bottom-up processing, for example, when they are pressed for time or when they are not motivated to consider the service much in advance because it is not perceived as very important. Although research in marketing is frequently sensitive to the use of bottom-up processing, for example, in discussing the effectiveness of advertising and promotional messages, there is little discussion of this sort of processing in post-purchase evaluations. The present research on the generation and use of post-experience comparison standards addresses this gap in the literature by identifying the phenomenon and by suggesting how so-called "data-driven" evaluations progress. Four of the propositions presented describe differences in the evaluation for top-down versus bottom-up processors and concern the number, importance, type, and stability of attributes that may be used in the evaluation. The fifth proposition concerns the nature of the evaluation and suggests that bottom-up processors may be more difficult to please in their evaluations.

Recent research in marketing has focused on the importance of managing what consumers learn from experience (e.g., Hoch & Deighton 1989). Consistent with this literature, the propositions detailed in the present article suggest that service providers may benefit by helping consumers to develop pre-experience comparison standards instead of allowing consumers to construct evaluations after the fact. Future research should test these propositions and develop additional propositions on the differences between top-down and bottom-up processors.

REFERENCES

Anderson, Ralph E. (1973), "Consumer Dissatisfaction: The Effect of Disconfirmed Expectancy on Perceived Product Performance," Journal of Marketing Research, 10 (February), 38-44.

Bavelas, Janet Beavin (1973), "Effects of the Temporal Context of Information," Psychological Reports, 32, 695-8.

Bearden, William O. and Jesse E. Teel (1983), "Selected Determinants of Consumer Satisfaction and Complaint Reports," Journal of Marketing Research, 20 (November), 21-8.

Cadotte, Ernest R., Robert B. Woodruff, and Roger L. Jenkins (1987), "Expectations and Norms in Models of Consumer Satisfaction," Journal of Marketing Research, 24 (August), 305-14.

Cardozo, Richard N. (1965), "An Experimental Study of Consumer Effort, Expectations and Satisfaction," Journal of Marketing Research, 2 (August), 244-9.

Churchill, Gilbert A., Jr. and Carol Surprenant (1982), "An Investigation into the Determinants of Consumer Satisfaction," Journal of Marketing Research, 19 (November), 491-504.

Day, Ralph L. (1977), "Modeling Choices Among Alternative Responses to Dissatisfaction," in Advances in Consumer Research, Vol. 11, Thomas C. Kinnear, ed. Ann Arbor, MI: Association for Consumer Research, 496-9.

Gronroos, Christian (1982), Strategic Management and Marketing in the Service Sector, Helsingors: Swedish School of Economics and Business Administration.

Hoch, Stephen J. and John Deighton (1989), "Managing What Consumers Learn from Experience," Journal of Marketing, 53 (April), 1-20.

Howard, John A. and Jagdish N. Sheth (1969), The Theory of Buyer Behavior. New York: Wiley Marketing Series.

Johnson, Michael D. (1984), "Consumer Choice Strategies for Comparing Noncomparable Alternatives," Journal of Consumer Research, 11 (December), 741-53.

Kahneman, Daniel and Dale T. Miller (1986), "Norm Theory: Comparing Reality to its Alternatives," Psychological Review, 93 (2), 136-53.

LaTour, Stephen A. and Nancy C. Peat (1979), "Conceptual and Methodological Issues in Consumer Satisfaction Research," in Advances in Consumer Research, Vol.6, William L. Wilkie, Ed. Ann Arbor, MI: Association for Consumer Research, 431-7.

Lehtinen, Uolevi and Jarmo R. Lehtinen (1982), "Service Quality: A Study of Quality Dimensions," unpublished working paper, Helsinki: Service Management Institute Finland OY.

Lewis, Robert C. and Bernard H. Booms (1983), "The Marketing Aspects of Service Quality," in Emerging Perspectives on Services Marketing, L. Berry, G. Shostack, and G. Upah, eds., Chicago: American Marketing, 99-107.

Liechty, M. and Gilbert A. Churchill, Jr. (1979), "Conceptual Insights into Consumer Satisfaction with Services," in Educators Conference Proceedings, Series 94, Neil Beckwith et al., eds. Chicago: American Marketing Association, 509-15.

Maddox, R. Neil (1981), "Two-Factor Theory and Consumer Satisfaction: Replication and Extension," Journal of Consumer Research, 8 (June), 97-102.

Malhotra, Naresh K., Christian Pinson, and Arun K. Jain (1988), "Consumer Cognitive Complexity and the Dimensionality of Multidimensional Scaling Configurations," unpublished manuscript.

Miller, John A. (1977), "Studying Satisfaction, Modifying Models, Eliciting Expectations, Posing Problems and Making Meaningful Measurements," in Conceptualization and Measurement of Consumer Satisfaction and Dissatisfaction, H. Keith Hunt, ed. Cambridge, MA: Marketing Science Institute, 72-91.

Oliver, Richard L. (1977), "A Theoretical Reinterpretation of Expectation and Disconfirmation Effects on Post-Exposure Product Evaluations: Experience in the Field," in Consumer Satisfaction, Dissatisfaction and Complaining Behavior, Ralph L. Day, ed. Bloomington: Indiana University, 2-9.

Oliver, Richard L. (1980), "A Cognitive Model of the Antecedents and Consequences of Satisfaction Decisions," Journal of Marketing Research, 17 (November), 460-9.

Olshavsky, Richard W. and John A. Miller (1972), "Consumer Expectation, Product Performance and Perceived Product Quality," Journal of Marketing Research, 9 (February), 19-21.

Parasuraman, A., Valerie A. Zeithaml, and Leonard L. Berry (1985), "A Conceptual Model of Service Quality and its Implications for Future Research," Journal of Marketing, 49 (Fall), 41-50.

Read, D. (1985), "Determinants of Relative Mutability," unpublished research, University of British Columbia, Vancouver, Canada.

Sasser, W. Earl, Jr., R. Paul Olsen, and D. Daryl Wyckoff (1978), Management of Service Operations: Text and Cases, Boston: Allyn & Bacon.

Scott, W.A., D.W. Osgood, and C. Peterson (1979), Cognitive Structure: Theory and Measurement of Individual Differences, New York: Halstead Press.

Sobel, Robert A. (1971) "Tests of Performance and Post-Performance of Satisfaction With Outcomes," Journal of Personality and Social Psychology, 19 (July), 213-21.

Swan, John E. and I. Frederick Trawick (1981), "Disconfirmation of Expectations and Satisfaction with a Retail Service," Journal of Retailing, 57 (Fall), 49-67.

Tse, David K. and Peter C. Wilton (1988), "Models of Consumer Satisfaction Formation: An Extension," Journal of Marketing Research, 25 (May), 204-12.

Westbrook, Robert A. and Michael D. Reilly (1983), "Value-Percept Disparity: An Alternative to the Disconfirmation of Expectations Theory of Consumer Satisfaction," in Advances in Consumer Research, Vol. 10, Richard P. Bagozzi and Alice M. Tybout, eds. Ann Arbor, MI: Association for Consumer Research, 256-61.

Woodruff, Robert B., Ernest R. Cadotte, and Roger L. Jenkins (1983), "Modeling Consumer Satisfaction Processes Using Experience-Based Norms," Journal of Marketing Research, 20 (August), 296-304.
