
Citation:

Ivan Ross and Richard L. Oliver (1984), "The Accuracy of Unsolicited Consumer Communications as Indicators of 'True' Consumer Satisfaction/Dissatisfaction," in NA - Advances in Consumer Research Volume 11, ed. Thomas C. Kinnear, Provo, UT: Association for Consumer Research, Pages: 504-508.

Advances in Consumer Research Volume 11, 1984, Pages 504-508

THE ACCURACY OF UNSOLICITED CONSUMER COMMUNICATIONS AS INDICATORS OF "TRUE" CONSUMER SATISFACTION/DISSATISFACTION

Ivan Ross, University of Minnesota

Richard L. Oliver, Washington University

ABSTRACT -

Consumer communications are received by firms through two primary channels. In the first, consumer inputs are solicited as part of an ongoing market research process. In contrast, a second form of communication flow results when consumers voluntarily contact the organization with messages of complaint, compliment, or information. Analysis of the two forms of information acquisition shows that they differ on a number of important dimensions critical to the strategic implications of the data obtained. It is also widely observed that these disparate firm-initiated and consumer-initiated information flows are handled by separate departments within the organization and are viewed as serving separate purposes. Moreover, the keepers of these pools of information, market research and consumer affairs respectively, rarely communicate or share data bases. To address these issues, we explore the theoretical basis for a solicited or unsolicited response and the meaning of the response to the firm, discuss the corporate interpretation attached to market research and consumer affairs data apart from its actual numeric content, explore the pragmatic implications of acting on the basis of data from either group, given that both sets of data have inherent imperfections, and suggest a research program which addresses the issues raised by this analysis.

INTRODUCTION

American companies receive millions of letters and phone calls annually from consumers who have complaints, compliments, or questions and suggestions regarding products and services. Although exact numbers are unknown, it is undoubtedly true that companies hear from more consumers through this communication channel than they do through formal consumer research activities (e.g. surveys, focus groups, etc.). Letters, phone calls and other modalities through which consumers may choose to contact organizations (including government agencies, the Better Business Bureau, etc.) are referred to in the literature as "volunteered" or "unsolicited," although perhaps a better term would be "consumer initiated" communications.

The distinction between "consumer initiated" and "firm initiated" consumer research is not entirely clear, although all would agree that the primary distinction is that, in consumer research, the firm selects a meaningful (usually probability) sample from a known universe (usually users or prospects) and obtains responses from a reasonable number of these target respondents. Depending upon survey modality, a response rate range of 20 to 50 percent could be expected. Consumer initiated communications, on the other hand, are by definition self-selected. Except for efforts by the firm to stimulate responses (e.g. by placing a postage-paid response card in the package or on the hotel room bureau, encouraging the use of an 800 number, etc.), it is essentially through the consumer's initiative that the company or organization receives product-related information.

In addition, the "response rate" from consumer-initiated communications (CIC data, henceforth) is lower than it is for firm-initiated communications (FIC data). A major 1979 study by Technical Assistance Research Programs, Inc. (TARP) reports that about four percent of dissatisfied consumers actually complain, although the range may run from one or two percent for minor problems to 15 to 20 percent for major ones. Estimates of the number of satisfied or "neutral" consumers who communicate with firms are scarce, but it is reasonable to assume that one would hear from fewer satisfied than dissatisfied consumers (as a proportion of their "true" numbers), and from few if any "neutrals." Most questions, comments, and suggestions not strictly classifiable as either complaints or compliments presumably come from consumers who are more or less satisfied, but in the main the available data probably under-represent these "middling" respondents.
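To see what these figures imply for the firm, consider the simple arithmetic of inference from complaint counts. The sketch below (in Python) merely inverts the TARP rates cited above; the complaint count of 400 is a hypothetical illustration, not a figure from any study.

```python
# A sketch inverting the TARP (1979) complaint rates cited above.
# The observed complaint count (400) is a hypothetical illustration.

def implied_dissatisfied(observed_complaints, complaint_rate):
    """Estimate the true number of dissatisfied consumers behind the
    complaints a firm actually receives, given an assumed complaint rate."""
    return observed_complaints / complaint_rate

for rate, label in [(0.04, "overall (~4%)"),
                    (0.015, "minor problems (1-2%)"),
                    (0.175, "major problems (15-20%)")]:
    estimate = implied_dissatisfied(400, rate)
    print(f"{label}: 400 complaints imply ~{estimate:,.0f} dissatisfied consumers")
```

Under the overall four percent rate, 400 recorded complaints would stand in for roughly 10,000 dissatisfied consumers; the uncertainty in the assumed rate alone swings that estimate by an order of magnitude.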

A Simple Respondent Motivation Model

The issues raised here may best be approached by asking why anyone would wish to provide feedback to a firm. If one uses any of a number of early models of behavior, the concepts of ability to respond and incentive (motivation) to respond (e.g., Vroom 1964) are frequently posited as behavioral determinants. Although the key to understanding complaining behavior is probably contained in the motivation concept, a short discussion of ability is in order.

It hardly needs to be stressed that those who are unable to complain without assistance do not complain. This includes those who are illiterate in the written mode and those who do not speak the necessary language in the verbal mode. It includes those who are not mobile as well as those whose travel-related activities continually remove them from complaint channels. It also includes individuals who do not know where these same complaint channels are located.

In contrast, FIC data collection can remedy some of these problems. In the cases of illiteracy, language, and immobility, bilingual personal and phone interviewers will encounter fewer problems. Mail surveys, of course, cannot overcome the illiteracy problem, but can be drafted in multiple languages if one can anticipate the need in specific localities. Thus, without further detail, it is clear that, based on ability to respond, market research data and consumer-initiated data operate from different sample frames and, as a result, provide different "response rates."

The motivation to respond concept is much more complex. At the outset, one must distinguish between internal or intrinsic and external or extrinsic motivation. The former exists when a psychological drive, perhaps emotional, causes the individual to correspond with a firm. These "drives" probably include (dis)satisfaction, inequity, and need for information. On the other hand, external motivation exists when something unrelated to the original purchase is used to prod a response. Incentives and the appeal of an interviewer are the most common examples.

It should be obvious that the contrasts between the two motivational states reflect the differing psychology of CIC vs. FIC data. When a firm solicits a response, it does so at its convenience and addresses issues of relevance to management. The consumer may be uninvolved with the product generally or at an uninvolved stage of product usage (perhaps off-season). Moreover, it is possible that a dissatisfied consumer may have been surveyed during a time when his own psychology has countered the negativism of a bad product experience with a more pleasant emotional tone, for reasons discussed in Oliver (1981). It is also possible that the survey response is more positive than would be warranted because of yea-saying or positive halo response sets. Thus, surveys may be rich in information that may not be relevant to the satisfaction/dissatisfaction issue.

Internal motivation is more difficult to discuss because one must work with its nature, its level, and its direction. The nature question is really one of causality. Why is one motivated to complain? While the literature has focused almost exclusively on dissatisfaction, other possibilities exist. A consumer could simply recognize a product deficit without emotional tone; he/she could feel inequitably treated in the exchange transaction between buyer and seller; this same consumer could perceive deception or "mislabeling"; or he/she could require product information beyond that provided. The point here is that each of these reasons behind customer-firm correspondence reflects a different emotional basis, and each is likely to have different probabilities of behavioral outcome and different "response rates."

The intensity or level of motivation is the second of the variables which vary the likelihood of a "response." An emotional threshold below which no complaining will occur probably exists on an individual basis. In the case of dissatisfaction, a consumer first must perceive product performance below expectations. In deciding to complain, the consumer must then subjectively estimate some probability, less than one, that this discrepancy will be redressed, as well as the total cost of complaining. If all three of these variables combine so that a threshold is exceeded, complaining may occur. This perspective is not new (see Day 1977; Landon 1977), but it does underscore why the probability of a complaint effort, given dissatisfaction, may be low indeed.
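This threshold logic can be made concrete. The following is a minimal sketch, assuming a multiplicative expected-value form and arbitrary parameter values of our own invention; neither Day (1977) nor Landon (1977) proposes this particular formulation.

```python
# A minimal sketch of the threshold logic described above. The
# multiplicative expected-value form and all parameter values are
# illustrative assumptions, not a model from the cited literature.

def will_complain(dissatisfaction, p_redress, cost, threshold=1.0):
    """Complain only if the expected value of complaining,
    intensity x P(redress) - cost, exceeds a personal threshold."""
    expected_value = dissatisfaction * p_redress - cost
    return expected_value > threshold

# Even strong dissatisfaction rarely clears the bar when redress
# seems unlikely or complaining is costly:
print(will_complain(dissatisfaction=8.0, p_redress=0.5, cost=2.0))  # True
print(will_complain(dissatisfaction=8.0, p_redress=0.1, cost=2.0))  # False
```

The second call shows the crux of the argument: the same internal state of dissatisfaction produces no observable complaint once the subjective probability of redress falls.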

The firm's odds of observing (receiving) this complaint are lower still. If a consumer's complaint threshold is exceeded, the firm will record this activity only if the complaint is directed toward the firm. Channel members may or may not forward these activities. Third party organizations or retailers may "absorb," greatly delay, or distort the nature and intensity of the complaint. Letters to the president, "quality control," or the legal division may get sidetracked and "disappear" from the total count. Some channel or corporate employees, particularly those charged with servicing the customer, may "bury" complaints because their performance evaluation hinges on low complaint activity. In sum, this direction problem in and of itself is a major cause for concern when tabulating complaint and satisfaction data.

This discussion illustrates the problems involved in relying on either market research or consumer initiated data. Market research presents the consumer with an external reason for responding. Thus, individuals with little to say or neutral feelings will respond in some unknown fashion; possibilities include halo and yea-saying response tendencies as noted. On the other hand, true complaint data are based on internal motivations. Unfortunately, a number of variables intervene to decrease the likelihood that an internal state of dissatisfaction will be registered by the firm. These include the actual nature of the negative emotion, the intensity of dissatisfaction, the perceived probability of redress, the cost of complaining, and the direction of the complaint response. Moreover, no mention has been made of multiple and fraudulent complaining, which serve to distort the figures further.

Statistical Concerns

We now approach the issue of the statistical meaningfulness of findings obtained from both FIC and CIC data. The basic concepts here include the sample frame (or population), the response rate, nonresponse bias, and data accuracy (validity). Here, again, the two methods of data acquisition provide extremely diverse interpretations of each of these parameters.

"Response rate" is not a good term to use in a comparison between FIC and CIC data because, in the one case reference is made to those who respond from a sample and in the other, a percentage comparing those we hear from (we can't call them "respondents") to the total population of users is calculated. It is possible that CIC data are reasonably thought of as resulting from a survey or perhaps a census in that all consumers have some (but an unknown amount of) chance to initiate communications with the organization. Indeed, some companies would appear to succeed in placing a copy of their "questionnaire" into the hands of essentially all consumers of their product or service through the use of in-package or in-location response forms. For this and other reasons, the distinction between CIC and FIC data is not clear.

Nevertheless, the consumer satisfaction/dissatisfaction literature has consistently found that consumers who volunteer or self-initiate complaints are probably not representative of the totality of dissatisfied consumers (Warland et al. 1975; Best and Andreasen 1977; Day and Landon 1976). Analogously, one can assume that they are not representative of satisfied or neutral consumers either. For example, among complainers (as compared with others who are equally dissatisfied but do not complain), we find differences in personality, attitudes and values, lifestyle, knowledge, and demographics. Generally, complainers are more assertive, more "liberal," more knowledgeable about products and complaint mechanisms, younger, higher in income, and better educated. Interestingly, among "complimenters," the picture is reversed; these individuals appear to be older and lower in income (see Robinson and Berl 1979).

However, these parameters do not shed light on whether what we hear from complainers and complimenters is "typical" or not, only that the characteristics of these consumers are not typical. In some cases one may reasonably assume that there is an interaction between the two. For example, an older person when compared to a younger one may have different things to complain about regarding a product both have consumed. But one can never be sure, since the question has never been directly addressed. Nonetheless, it would be important to know, since companies, in some cases, take action on the basis of the content of these CIC data as if they were in some way representative of a larger body of consumer opinion. If such actions are not justified, or are justified only under certain circumstances, it would be important to know what these circumstances are.

Conversely, since it has been observed that CIC data are not properly "survey" data and are received from "atypical" consumers, many companies may not take action on the content of these data even though research could demonstrate that under certain circumstances there is a parallelism between what "market research" would show and what CIC data analysis would reveal. If this were so, a recommendation for taking action based upon such data, where there had been a reticence to use them before, would be warranted.

Regarding the question of whether the content of what companies receive is representative even though the communicators are not, writers have assumed that such data can at least be viewed as nominally relevant. That is, a substantial flow of information concerning a particular negative product feature can be judged "non-zero" and at least relevant to that extent. This is similar to viewing such data as having "qualitative" value or to be relevant in an "exploratory" sense. As Fornell (1981) states:

"Even though complaint data have been found to be biased in favor of certain groups, they can probably provide valuable starting points for suggestions and hypotheses to guide the search for causes and solutions. If used properly, it seems likely that they can also help to amplify marketing research findings and as a means for validity and reliability assessment. For example, a food processing company's marketing research indicated that consumers did not want artificial colors, artificial flavors, flavor enhancers or preservatives in certain foods. Unsolicited consumer complaints supported these findings and suggested that many consumers wanted to return to the basics in food (p. 202)."

Thus, if such data can be useful in "amplifying" or "supporting" marketing research, these data must have a qualitative character between nominal and ordinal metric properties.

Hunt (1977) imputes ordinal character to CIC data when he states that, "The rate at which complaining letters come in to firms is not a correct indication of the amount of discontent, but it does give some indication of the severity, and true problems do show up even if not in the correct percentages (p. 4?9)." In contrast, Day and Bodur (1977) and Ash and Quelch (1979) argue that CIC data are unrepresentative of both the kinds of people and the kinds of problems being experienced. Thus, although the literature is mixed, writers continue to reflect a sense that CIC data do project something meaningful about the kinds of concerns or problems consumers experience.

How are CIC Data Being Used?

Although the authors intend to initiate research which will directly focus on this question, currently available data and opinion shed some light on the kinds of decisions being made with the aid of CIC data. For example, the Federal Government has directed its agencies to act as though CIC data had managerial relevance. Executive Order No. 12160 issued by President Carter on September 26, 1979 entitled "Providing for Enhancement and Coordination of Federal Consumer Programs," directs each agency to "...establish procedures for systematically logging in, investigating, and responding to consumer complaints, and for integrating analyses of complaints into the development of policy," and further requires that there be a "...statistical reporting of complaints according to topical categories, and analyses of the patterns of issues raised and their implications for agency policymaking (determined)...."

As an example of this policy in action, Jacoby and Jaccard (1981) note that the National Highway Traffic Safety Administration (NHTSA) apparently acted to recall the Firestone 500 steel belted radial tire on the basis of "the relatively high number of complaints it had received (p. 4)." Moreover, the same kind of attention to what in effect are no more than CIC data is reflected in decisions by the Federal Trade Commission, the Post Office, the Consumer Products Safety Commission, and certainly other agencies. These decisions clearly assume that, not only does the (most) squeaky wheel need to be oiled (first), but also that the relative magnitude and intensity of complaining represents what is felt by consumers (or voters, depending upon one's perspective) out there "in the real world." But does not this kind of logic raise concern among those of us who are "survey researchers?"

Unfortunately, our concerns do not stop with the government; this same practice is just as common in private industry. A recent Eastern Airlines campaign, for example, pictured President Frank Borman making reference to a C.A.B. report on complaints per 100,000 passengers boarded during 1980 to show that Eastern received the third fewest complaints among trunk airlines. This, despite the fact that the CAB properly notes a caveat in its report of such data that:

"These statistics reflect alleged problems with airline service as stated in complaint. No determination as to the validity of the complaint has been made nor should the report bc construed as a "rating" of one carrier's performance in relation to that of any other since each class of carrier and, to some extent. each carrier has problems unique to its operation.

At least in the Eastern Airlines example, the ad agency appropriately reported complaint data on the basis of passengers boarded, a courtesy not equally extended by others who use CIC data. Even though Day and Bodur (1977) and Day and Landon (1976), among others, have duly noted that, because different products and brands within products are used by different numbers of people, one must at least norm or index complaint rates as a percentage of units sold or transactions completed, many well-intentioned organizations appear never to have conceived of such a fundamental rule of data reporting.

Marketing News (1983), for example, recently reported a raging controversy between the Direct Marketing Association and the Better Business Bureaus concerning the latter's annual survey entitled "Inquiries and Complaints." This survey simply reports complaints received by product category without converting complaints to a percentage of the customer base. Direct mail complaints calculated in this fashion are misleadingly high. In 1982, 83,691 or 22 percent of all complaints received by BBB offices concerned direct mail. But the mail order category in 1982 amounted to $40 billion; hence the number of actual complaints received, when compared to the volume of transactions, is not a very significant number. (This ignores, incidentally, further aggregation errors committed by the Bureau. All mail order correspondence was lumped together whereas retail communications were not. Rather, they were broken out by category (e.g., department stores, auto repair, etc.), an apples-and-oranges situation which is grossly unfair to mail order concerns. This led Marketing News to speculate that, since BBB represents traditional retailers, the aggregation may not have been accidental.)
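The norming rule itself is trivial to apply, which makes its neglect all the more striking. In the sketch below, the direct-mail complaint count is the 1982 BBB figure cited above, while both transaction volumes are assumed for illustration; the point is that the per-transaction ranking can reverse the raw-count ranking.

```python
# Norming complaint counts by transaction volume, per Day and Bodur (1977).
# The direct-mail complaint count is the 1982 BBB figure cited in the text;
# both transaction volumes are assumed for illustration.

categories = {
    # category: (complaints received, transactions completed)
    "direct mail": (83_691, 500_000_000),  # transaction volume assumed
    "auto repair": (20_000, 40_000_000),   # entirely hypothetical
}

for name, (complaints, transactions) in categories.items():
    per_100k = complaints / transactions * 100_000
    print(f"{name}: {complaints:,} raw complaints = "
          f"{per_100k:.1f} per 100,000 transactions")
```

Under these assumed volumes, direct mail draws four times the raw complaints of auto repair yet only a third of its per-transaction rate; the raw tabulation and the normed tabulation tell opposite stories.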

We might also use the Eastern Airlines advertisement to identify some other interesting questions and hypotheses about the utility of CIC data as marketing research data and therefore as a potential basis for action by marketing management. First, although not reported in the advertisement, Eastern's complaint rate was lower than Pan American's and TWA's but higher than Delta's. Might this lead one to suspect that, contrary to Eastern's assertion, complaint data may be inversely rather than directly related to perceived quality or performance? This relates to the expectancy disconfirmation hypothesis (Oliver 1981) among other explanations. If one has high expectations, then one is more apt to perceive inadequacies in performance levels than if one has low expectations. Unquestionably Pan American and TWA fly a much higher proportion of first class passengers and long-distance travelers, probably those who would expect (and are probably paying for) a higher class of service than would be the case with the average Eastern flyer. Eastern may be "positioned" as a relatively shorter haul, "business person" airline with frequently scheduled but basically "no frills" comforts. If this is so, is the low complaint rate for Eastern reasonably interpreted as a testament to its quality as Mr. Borman suggests? We think not.

To the contrary, a paradoxical, counterintuitive hypothesis develops. Namely, the more complaints a firm receives, the higher consumers' expectations must be. The fewer the complaints, the lower the expectations. (The reverse would be true for compliments.) Since expectations derive substantially from past experiences with a product, it follows that if a firm produces a high quality product, it will get a disproportionate number of complaints. Conversely, if consumers expect poor performance, compliments will be received on a disproportionate basis. If this hypothesis is valid, its implications would certainly unsettle many who are relying on CIC data for quite the opposite interpretation.

A second issue concerns the relative weighting of favorable and unfavorable correspondence. If it were true that one airline received more complaints than any other per 100,000 passengers and also more compliments, should compliments trade off with complaints in representing the "quality" of a product or service? No basis whatsoever exists for averaging or weighting these communications because nothing can be said a priori about the reactive or emotional nature of the market being served. Some customer segments may be reactive to positive surprises, others to negative surprises, and still others to nothing much at all.

Third, we question why the C.A.B. would release such data knowing it might be used as a promotional vehicle. Even if its caveat were repeated in the ad (which it was not), would not consumers still perceive the claim as one of superior performance and quality? This is similar to the misreporting of media audience "polls" where viewers, listeners, or readers are asked to call in or are exposed to a questionnaire inserted in a magazine. Even though there is usually a caveat stating that, "of course, these results do not scientifically describe our audience," it is nevertheless obvious that the results are intended to be meaningful to the reader of the poll results, who makes the ultimate interpretation of the data. Research is needed on this question because it is reasonable to assume that the FTC would consider false or misleading a representation of quality or performance based upon a "bad" (or "non") survey.

There are many other ways in which CIC data have been used as though they were marketing research data. Discovering new product ideas is certainly one of these, although companies infrequently recognize this application for obvious legal and financial reasons. von Hippel (1982) discusses at length the importance of CIC data as a source of new product ideas, and the TARP study (1979) noted previously cites numerous other instances, including the development of Polaroid's SX-70 camera. The camera's designers supposedly relied upon CIC data for the idea of including a battery in the film pack as well as an automatic ejector for the film.

The use of CIC data in quality control is perhaps the most broadly assumed and accepted role for such data. It is a common practice in many consumer goods companies to route counts of consumer complaints of foreign matter in food products, defective packaging, "off colors" or "off tastes," and so on, to quality assurance functions. Although follow-up work might be done to determine the magnitude of the problem, it is generally assumed that the existence of the problem can be inferred from a sudden "bump" in complaints, especially if there is a concentration of complaints from the distribution region of one producing plant, for example. Again, this use of CIC data requires that some meaningful "base rate" or norm be established. On the other hand, one wonders how the establishment of even a base rate would assuage the sampling concerns of the statistical purist, as such data do not result from a careful probability sample from the production line. What is the sampling distribution against which a particular rate is to be compared? Might it not be conceivable that a sudden shift in complaints about a particular product attribute could still be "chance?" What is a significant change in rate?
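One way to give the statistical purist's question concrete form is to treat the complaint count per period as approximately Poisson around the established base rate and ask how improbable the observed "bump" is. The sketch below does this with wholly hypothetical figures; it illustrates the logic rather than prescribing a validated control procedure.

```python
# A sketch of one way to ask "is this bump chance?": model the weekly
# complaint count as Poisson around an established base rate and compute
# the tail probability of the observed count. All figures hypothetical.

from math import exp, factorial

def poisson_tail(observed, base_rate):
    """P(X >= observed) for X ~ Poisson(base_rate)."""
    return 1.0 - sum(exp(-base_rate) * base_rate**k / factorial(k)
                     for k in range(observed))

# Base rate of 5 complaints per week about a given defect; this week saw 12.
p = poisson_tail(12, 5.0)
print(f"P(>= 12 complaints | base rate 5/week) = {p:.4f}")  # ~0.005
```

A tail probability near 0.005 suggests the bump is unlikely to be chance alone, although, as argued above, the deeper objection remains: complaints are not a probability sample of production, so even a "significant" shift may reflect changes in complaining behavior rather than in the product.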

Analogously, some companies may reward (or punish) employees based on CIC data used as part of an evaluation of their performance. Although many companies intentionally sample consumer opinion (or "mystery shop" the personnel) for this purpose, it is also true that volunteered praise or criticism may result in personnel action.

Possible defects or ambiguities in labelling or other instructional material may also be discovered through CIC data. Although preparation and usage instructions are researched before being formally incorporated into the product package, companies many times discover that consumers are not preparing the product properly or are otherwise using it in a manner not likely to result in optimal satisfaction. Sometimes the problem can be traced to a misunderstanding of instructions. Since such errors may result in litigation (e.g., "How do I get the shells out after adding two eggs?"), it is particularly important that companies appreciate the value of CIC data in this case. It is also well to note that, in the area of product liability, the importance placed on a company's determination of how many consumers may be potentially injured through product misuse is secondary to the possibility that one may be injured (or was injured).

Other examples of the use of CIC data include the discovery of similar confusion regarding company policies or procedures (e.g., "I thought I wouldn't have to pay local telephone charges if I bought my own phone."), which may result in changes in the information flow to consumers. In addition, the discovery of new uses for products (e.g., "your baking soda is a terrific odor-eater") has frequently been noted as an application of CIC data. And by extension, advertising creative may be helped by consumer suggestions or comments. These are only some of what must be a large number of possible uses of CIC data. Many require that one understand the limitations (as well as the value) of such data. It is clear from these examples that attributing ordinal metric value to such data is commonplace. Unfortunately, we simply do not know whether such interpretations are legitimate.

Needed Research

It should be obvious to the reader that no body of existing data will provide insights as to the accuracy of unsolicited consumer communications as indicators of "true" consumer satisfaction/dissatisfaction. The answer to this question would require that parallel studies be run within a representative cross-section of companies so that comparisons could be made between CIC and FIC levels. Two types of research designs may provide more definitive answers to this issue.

The first is a captive market approach. In this technique, a small scale marketplace is constructed where the manufacturer or provider, retailer or servicer, and complaint channel locations are clearly defined and small in number. The investigator would then "plant" the product or service of interest in the marketplace and monitor the entire consumer population with satisfaction surveys, while simultaneously logging all complaints and post-purchase communications. Although it would at first appear that this approach could only be applied on a small scale and that the design would be necessarily artificial and transparent, successful earlier attempts, albeit with different research objectives, have been made.

Perhaps the study closest to what we have in mind was performed by Arndt (1967) in the context of product diffusion and word-of-mouth. He introduced a new consumer product in the commissary of a 495-unit married student housing complex and was able to monitor the adoption and word-of-mouth patterns among 90 percent of the residents. As applied to the study of satisfaction and complaining, one would need only to record communications to the commissary and those to the manufacturer from the complex. Obviously, a unique zip code would facilitate the process. Monitoring of complaints and compliments would have to be correlated with the results of regular consumer surveys over the usage period of the study. This, of course, is critical to the focus of the investigation.
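In data terms, the design amounts to joining each resident's periodic satisfaction scores against the complaint log, as in the hypothetical sketch below (all identifiers and scores are invented). The join is what makes the dissatisfied non-complainer, invisible in CIC data alone, directly observable.

```python
# A hypothetical sketch of the record matching the captive-market design
# requires: survey scores joined to the complaint log, person by person.
# All resident identifiers, scores, and log entries are invented.

from collections import Counter

surveys = {101: 2, 102: 6, 103: 3, 104: 7, 105: 2}  # id -> satisfaction (1-7)
complaint_log = [101, 103, 103]                     # ids of logged complaints

complaints = Counter(complaint_log)
for rid, score in surveys.items():
    print(f"resident {rid}: satisfaction={score}, "
          f"complaints={complaints.get(rid, 0)}")

# The group the design makes visible: dissatisfied residents (score <= 3)
# from whom no complaint was ever recorded.
silent = [rid for rid, s in surveys.items() if s <= 3 and rid not in complaints]
print("dissatisfied non-complainers:", silent)  # -> [105]
```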

Problems with this technique involve the limited generalizability of the findings. One would necessarily be restricted to a biased sample, a small number of product categories, and a short time frame lest fatigue set in. It is also possible that the respondents would soon discover the intent of the study through frequent references to the focal product and by word-of-mouth. This design, however, is intended to be definitive in nature, and the problems encountered are somewhat unavoidable.

A second approach, based on the mathematical relationship between satisfaction/dissatisfaction levels and complaint data, requires frequent surveys of representative samples of buyers or users regarding their satisfaction levels. A statistical or stochastic model of the dissatisfaction-complaint relationship would then be constructed to provide an index of dissatisfaction/complaint ratios. This would provide a numerical basis for the question of the accuracy of complaint data as an indicator of satisfaction/dissatisfaction. The model could be "fine tuned" with additional factors such as seasonality, if desired.
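A minimal version of such a model might estimate a single dissatisfaction-to-complaint ratio across survey waves and then invert it, as sketched below. All figures are hypothetical, and a serious model would, as noted, incorporate seasonality and other terms.

```python
# A sketch of the aggregate approach: estimate a stable complaint-per-
# dissatisfaction ratio from repeated survey waves, then use it to read
# "true" dissatisfaction off a new complaint count. Figures hypothetical.

# (surveyed proportion dissatisfied, complaints received) per quarter
waves = [(0.12, 480), (0.15, 610), (0.10, 390), (0.14, 560)]

# Least-squares ratio (regression through the origin) of complaints
# on the surveyed dissatisfaction level
ratio = sum(c * d for d, c in waves) / sum(d * d for d, c in waves)

# Invert: infer the dissatisfaction level implied by a new complaint count
new_complaints = 700
implied = new_complaints / ratio
print(f"complaints per unit dissatisfaction: {ratio:.0f}")
print(f"{new_complaints} complaints imply ~{implied:.1%} dissatisfied")
```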

Note that this latter approach is very similar to the captive sample design discussed earlier except that it is based on macro as opposed to micro data. Whereas the captive sample technique allows one to match satisfaction with complaining on an individual basis, the statistical time series requires only that aggregate levels of each be matched. In this sense, the captive sample approach provides certain advantages in that a richer mix of variables, including demographics, can be studied. It also allows one to survey non-complainers to study the ratio of satisfied to dissatisfied in this group. In fact, this latter strategy will be required if one is ever to answer the question of the accuracy of CIC data.

Other topics of interest include differences in the accuracy question across product categories, across stages in the life cycle, and across high and low expectation situations. While these are higher level goals, ultimately all must be answered before a complete understanding of the meaning of complaint data is provided.

We hope to soon undertake an investigation of this question and will be eager to report our findings to you as they develop.

REFERENCES

Arndt, Johan (1967), "Role of Product-Related Conversations in the Diffusion of a New Product," Journal of Marketing Research, 4 (August), 291-295.

Ash, Stephen B. and Quelch, John A. (1980), "Consumer Satisfaction, Dissatisfaction and Complaining Behavior: A Comprehensive Study of Rentals, Public Transportation and Utilities," in H. Keith Hunt and Ralph L. Day (Eds.), Refining Concepts and Measures of Consumer Satisfaction and Complaining Behavior, Bloomington: Indiana University School of Business, 120-130.

Best, Arthur and Andreasen, Alan R. (1977), "Consumer Response to Unsatisfactory Purchases: A Survey of Perceiving Defects, Voicing Complaints, and Obtaining Redress," Law and Society Review, 11 (Spring), 701-742.

Day, Ralph L. (1977), "Toward a Process Model of Consumer Satisfaction," in H. Keith Hunt (Ed.), Conceptualization and Measurement of Consumer Satisfaction and Dissatisfaction, Cambridge, MA: Marketing Science Institute, 151-183.

Day, Ralph L. and Bodur, Muzaffer (1977), "A Comprehensive Study of Satisfaction with Consumer Services," in Ralph L. Day (Ed.), Consumer Satisfaction, Dissatisfaction and Complaining Behavior, Bloomington: Indiana University School of Business, 64-74.

Day, Ralph L. and Landon, E. Laird (1976), "Collecting Comprehensive Consumer Complaint Data by Survey Research," in Beverlee B. Anderson (Ed.), Advances in Consumer Research, Vol. III, Ann Arbor, MI: Association for Consumer Research, 263-268.

Fornell, Claes (1981), "Increasing the Organizational Influence of Corporate Consumer Affairs Departments," Journal of Consumer Affairs, 15 (Winter), 191-213.

Higgins, Kevin (1983), "Mail Order Industry is Fighting the Old, Sleazy Image on Several Fronts," Marketing News, 17 (July 8), 1, 12.

Hunt, H. Keith (1977), "CS/D--Overview and Future Research Directions," in H. Keith Hunt (Ed.), Conceptualization and Measurement of Consumer Satisfaction and Dissatisfaction, Cambridge, MA: Marketing Science Institute, 455-488.

Jacoby, Jacob and Jaccard, James J. (1981), "The Sources, Meaning, and Validity of Consumer Complaint Behavior: A Psychological Analysis," Journal of Retailing, 57 (Fall), 4-24.

Landon, E. Laird Jr. (1977), "A Model of Consumer Complaint Behavior," in Ralph L. Day (Ed.), Consumer Satisfaction, Dissatisfaction and Complaining Behavior, Bloomington: Indiana University School of Business, 31-35.

Oliver, Richard L. (1981), "Measurement and Evaluation of Satisfaction Processes in Retail Settings," Journal of Retailing, 57 (Fall), 25-48.

Robinson, Larry M. and Berl, Robert L. (1979), "What About Compliments: A Follow-up Study on Customer Complaints and Compliments," in H. Keith Hunt and Ralph L. Day (Eds.), Refining Concepts and Measures of Consumer Satisfaction and Complaining Behavior, Bloomington: Indiana University School of Business, 144-148.

Technical Assistance Research Programs, Inc. (TARP) (1979), "Consumer Complaint Handling in America: Summary of Findings and Recommendations," Washington, D.C. (September).

von Hippel, Eric (1982), "Get New Products from Customers," Harvard Business Review, (March-April).

Vroom, Victor H. (1964), Work and Motivation, New York: Wiley.

Warland, Rex H., Herrmann, Robert O. and Willits, Jane (1975), "Dissatisfied Consumers: Who Gets Upset and Who Takes Action," Journal of Consumer Affairs, 9 (Winter), 148-163.
