Some Can, Some Can't, and Some Don't Know How They Did It: a Direct Test of the Utility Maximization Hypothesis

Peter R. Dickson, The Ohio State University
Joel E. Urbany, University of South Carolina
Paul W. Miniard, The Ohio State University
ABSTRACT - A direct test of whether relatively knowledgeable students can maximize utility in a very simple investment choice task was undertaken. Although this was a pretest for an ongoing, larger study, the results suggest that many can, particularly for the task that involved the simpler optimizing rule. Further manipulations and dependent measures are needed to determine whether the poor performance of some subjects is primarily an ability or a motivational problem; it appears to be an ability problem.
To cite: Peter R. Dickson, Joel E. Urbany, and Paul W. Miniard (1986), "Some Can, Some Can't, and Some Don't Know How They Did It: A Direct Test of the Utility Maximization Hypothesis," in Advances in Consumer Research, Vol. 13, ed. Richard J. Lutz, Provo, UT: Association for Consumer Research, 257-262.

Advances in Consumer Research Volume 13, 1986      Pages 257-262


Recent years have seen a growing interest in the developing interface between marketing and economics (see, for example, Mitchell 1978, Journal of Business special issues 1980, 1984). Economists have shown greater concern over how well their "laws" of the marketplace actually describe observed behavior (Gilad et al. 1984), while marketing researchers are increasingly interested in the application of economic theory to marketing problems (Nagle 1984). This paper attempts to contribute to the merger of the two fields by developing a direct test of the classic economic proposition that buyers maximize utility relative to price. We first describe the concept of utility maximization and the current discontent with the neoclassical theory in economics. We then present results from a preliminary study designed to directly test whether student consumers make simple investment decisions in a way consistent with the maximization hypothesis. A more extensive experiment is now in progress.

UTILITY MAXIMIZATION

The neoclassical economic model of demand proposes that consumers spread their disposable income across purchases by equating the marginal utility/price ratio for each category of goods (Waud 1980, Monroe 1979). According to the theory, a change in price for one good is compensated for by a shift in expenditures among all goods so that the ratios remain equal. When applied to a specific consumer purchase choice decision, the maximization hypothesis suggests that the buyer will select the alternative with the largest marginal utility per dollar cost.
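The choice rule implied by this theory can be sketched as follows. This is an illustrative sketch only, not material from the paper; the utility and price figures are hypothetical:

```python
# Hypothetical alternatives: the marginal utility and price figures are invented.
alternatives = [
    {"name": "A", "marginal_utility": 30.0, "price": 10.0},  # 3.0 utils per dollar
    {"name": "B", "marginal_utility": 50.0, "price": 20.0},  # 2.5 utils per dollar
    {"name": "C", "marginal_utility": 48.0, "price": 15.0},  # 3.2 utils per dollar
]

def best_by_mu_per_dollar(alts):
    """Pick the alternative with the largest marginal utility / price ratio."""
    return max(alts, key=lambda a: a["marginal_utility"] / a["price"])

print(best_by_mu_per_dollar(alternatives)["name"])  # C: most utility per dollar
```

Under these figures alternative C is chosen, even though B offers the largest absolute utility, because C delivers the most utility per dollar of cost.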

While some economists have long questioned the adequacy of the utility maximization hypothesis (UMH) (see Becker 1962 for a discussion of criticisms), the hypothesis has come under particularly severe criticism recently. Nobel laureate Herbert Simon, who long ago discarded the UMH in proposing his theory of "bounded rationality" (Simon 1957), laid the groundwork for a developing "behavioral" school of economic thought. More recently, Gilad, Kaish, and Loeb (1984) have described the "platforms" of a new school of thought, foremost among which is the notion that economic theory be consistent with observed fact. The primary basis for criticism of the UMH is that behavior observed in the laboratory and the marketplace has frequently differed from what we would expect if participants were maximizing utility by applying sound economic and statistical logic. For example, the experimental observations that subjects are not consistent in their preferences for gambles (preference reversal - Lichtenstein and Slovic 1971) and that the way decision alternatives are framed (phrased) affects preference (Kahneman and Tversky 1984) are inconsistent with the proposition of a "maximizing" decision maker. Kunreuther's (1978) study of consumer aversion to low-cost flood insurance and Arrow's (1982) observation that investors overweight current information (relative to baseline information) lead to the same conclusion. These studies do not directly test the utility maximization hypothesis, however, because they introduce the added complication of outcome uncertainty and the possibility that each decision maker has a unique conception of risk that changes with the decision context and the interpretation of different information cues.

While disenchantment with the UMH has grown in economics, there appears to be no direct evidence regarding whether buyers have the natural tools and instincts to maximize utility. Setting aside the important criticism that decision-makers are too limited in their information gathering and processing abilities to be maximizers (Simon 1978, 1979), more basic questions can be raised: given perfect information about simple decision alternative payoffs and costs (thus avoiding the complication of outcome uncertainty noted above) and a clear maximization objective, do buyers naturally apply, and can buyers apply, the rules necessary for utility maximization? Not only have these questions gone untested in the literature; the conventional wisdom in economics seems to be that they cannot be addressed empirically. Even when economists argue about the philosophy of science applied to economics, they agree that the maximization hypothesis is untestable (Boland 1981, Caldwell 1983). The problem appears to be that constructing a convincing case for supporting or refuting what amounts to a paradigm is difficult, as any indirect test based on inference can be shown to be suspect (Boland 1981). This research represents a first step in an attempt to directly test the utility maximization hypothesis. The fundamental issue is not whether the decision maker will maximize but whether he or she can maximize when rewarded for doing so.

MARKETING AND UTILITY

Understanding how buyers place utility or value on product/brand alternatives is central to research in marketing. Much research addresses how people derive their judgments of product utility or value (e.g., Cox 1967, Olson 1977). Two other major areas which deal explicitly with the measurement of product utility or value are multiattribute models (e.g., Wilkie and Pessemier 1973) and conjoint analysis (Green and Srinivasan 1978). Note that an underlying rationale for research in these areas is that marketers want to present products (and/or product cues) which provide maximum net value or utility to buyers. According to Sheluga, Jaccard, and Jacoby (1979), this is appropriate because "the best prediction of which product will be chosen... is the product alternative having the most positive overall evaluation" (p. 166). The logic of the UMH, then, is often reflected in the study of consumer choice behavior.

The same logic is also reflected in our pricing literature. Monroe (1979, 1984) has contended that buyers make purchase decisions by selecting the alternative which has the highest "perceived value (utility) for the money." The "value for the money" construct is commonly measured in pricing studies (see Zeithaml 1984). Note that this decision criterion is equivalent to the marginal return/price decision rule presented in the classic economic theory.

The marketing relevance of studying the UMH is not to call into question the work cited above, but to extend our understanding of buyers' abilities to "rationally" approach and solve decision problems. Whether decision-makers satisfice or optimize in consumption decisions has come under the category of "choice rule" research. A consistent finding from this literature is that consumers, when confronted with a complex decision, initially use a satisficing rule to "weed out" unwanted alternatives and then use a more thorough rule to evaluate the remaining alternatives (Lussier and Olshavsky 1979). The current research differs from such choice rule research in that (1) it uses an objective measure of decision-makers' ability to maximize and (2) it does not have an information acquisition confound because it provides subjects with all the information they need to make their very simple decisions. The following research involves a test of buyers' natural abilities to maximize their outcomes. It is relatively unencumbered by the problems of incomplete information, information overload, and optimizing ambiguity that have troubled other studies.

THE EXPERIMENT AND HYPOTHESES

The present study examined subjects' ability to maximize their decision outcomes under two different optimization rules. The decision context required subjects to choose, from three investment alternatives, the one that yielded the greatest return. To illustrate, consider the following example:

TABLE (example: three investment alternatives, with columns for cost, return, return minus cost, and return/cost)

Some subjects were required to invest $1000 in one of the three investment alternatives. For this decision, subjects should select the alternative with the largest "return/cost" ratio (hereafter referred to as the Ratio rule) in order to maximize their return. In the above example, the second alternative is the optimal choice. The remaining subjects were constrained to purchasing only one unit of the chosen investment. In this situation, subjects should choose the alternative with the largest "return minus cost" difference (referred to as the Diff rule). Thus, the third alternative is the optimal selection.
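The two optimizing rules can be sketched as follows. The cost and return figures below are hypothetical stand-ins (the paper's example table is not reproduced here), but they are chosen so that, as in the text, the second alternative is optimal under the Ratio rule and the third under the Diff rule:

```python
# Hypothetical (unit cost, unit return) for three investment alternatives.
alternatives = {"first": (10.0, 12.0), "second": (5.0, 6.5), "third": (20.0, 25.0)}

def best_single_unit(alts):
    """Diff rule: buying exactly one unit, maximize return minus cost."""
    return max(alts, key=lambda k: alts[k][1] - alts[k][0])

def best_fixed_budget(alts, budget=1000.0):
    """Ratio rule: investing a fixed budget, maximize (budget / cost) * return,
    which ranks alternatives identically to return / cost."""
    return max(alts, key=lambda k: (budget / alts[k][0]) * alts[k][1])

print(best_single_unit(alternatives))   # third: largest return - cost (5.0)
print(best_fixed_budget(alternatives))  # second: largest return / cost (1.3)
```

Note that the two rules can disagree, as they do here: the alternative with the biggest per-unit margin need not be the one with the best return per dollar.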

The first hypothesis tests whether the subjects come from a population where the mean score on the tasks is 18/20 (relaxed maximization assumption). This test is directional in that the alternative hypothesis is that the subjects' mean performance is less than 18/20. The null hypothesis is:

H1: The subjects come from a population with an average maximization performance of 18 correct out of 20 (90%).

We anticipated that subjects' ability to maximize their decisions would depend on the rule required for outcome optimization. Assuming that subjects find the mathematical operations required for ratio estimation appreciably more difficult than those involved in difference estimation, the Diff rule should be easier to implement than the Ratio rule. Therefore, the following hypothesis is offered:

H2: Subjects who purchase a single unit of their selected investment will more often make optimal investment decisions than will those who invest $1000 in their selected investment.

A second experimental factor involved the presence or absence of the information in the last two columns of the prior example. These two columns present the mathematical calculations relevant to the Diff and Ratio optimization rules, respectively. This information may affect decision accuracy in one of two ways. First, its presence may serve as a cue to subjects, helping them to recognize the appropriate decision rule (see Russo 1977). A third hypothesis, then, is:

H3: Subjects who are provided the return minus cost difference and return/cost ratio information will make better decisions than those not provided such information.

The presence of this information may also be beneficial by assisting subjects in their mathematical calculations. That is, it should help eliminate simple mathematical errors. However, because the mathematical calculation required by the Diff rule is relatively straightforward, we expected the facilitating effect of the information to emerge only for the relatively more complex Ratio rule. Accordingly, the following interaction is proposed:

H4: Providing subjects with the mathematical information should improve their accuracy only when making $1000 investment decisions.

Finally, we examined the possibility of a learning effect. Subjects were given a replication of the 20 tasks in the same order. It was expected that this practice would enable some subjects to discover and execute the correct optimizing rule. This learning effect is proposed in the following hypothesis:

H5: Subjects will more often make optimal decisions later in their investment tasks than earlier.

In summary, the major issues addressed are whether subjects (1) can achieve a 90% correct choice performance level, (2) are equally adept at making decisions in terms of the gain/cost ratio or "value for the money" rule compared to the "return minus cost" rule in the appropriate situations, (3) are affected by information about relevant mathematical calculations, and (4) improve their decision accuracy with practice.

METHOD

Procedure

This preliminary study involved a series of "investment" decisions made by 48 junior and senior marketing undergraduates at a major university. All had taken a required accounting or finance course. The subjects were told that the study was intended to evaluate the information presentation format of a new investment guide published by Standard and Poor's. The research took place in a personal computer laboratory and required subjects to make 40 investment decisions. These decisions were preceded by an introduction to the exercise, three practice investment decisions, and three "check-up" problems to make sure subjects understood the consequences of their decisions. After finishing the investment task, the subjects completed a paper-and-pencil questionnaire, were compensated for their participation, and left the lab.

Subject Compensation

Subjects' compensation was based directly on their investment "performance" in order to heighten task involvement and effort. After each decision, subjects received a return based on their investment choice. This return was then added to a "bank balance" appearing at the top of their computer screens. Upon completion of the experiment, subjects were paid a percentage of their bank balance. The payments ranged from $3.79 to $4.75. Subjects were paid with a check that had been stapled to their handout, on which each subject had identified himself or herself as "payee" at the beginning of the hour. This procedure was intended to heighten awareness of the subjects' rewards for making the best decisions. Their objective, clearly, was to maximize their ending bank balances. The average post-test agreement score of subjects with the statement "I tried to make the best decision for every decision I had to make" was 6.3, with the statement "The instructions were very clear" 6.4, and with the statement "The game was interesting" 5.4 (1 = strongly disagree, 7 = strongly agree).

Research Design

The experimental design was a 2 (Investment Task) by 2 (Information Set) design with replication of the 20 tasks within subjects. The different conditions of the Investment Task factor will be referred to as the "single unit purchase" and "$1000 investment" conditions. The Information Set conditions will be referred to as the "limited" and "full" information conditions. Both independent factors were described in the hypotheses section.

It should be noted that the second 20 problems given to subjects were an exact repeat of the first 20 problems. This allowed for a more precise analysis of the learning effect (H5). Informal discussion with some subjects after the experiment indicated that they did not recognize that the problem set had been repeated. This is understandable as the replications were separated by 19 tasks with very different cost and return profiles.

Dependent Variables

Overall choice accuracy was assessed by the number of times subjects selected the optimal alternative across the 20 decisions in the first and second problem sets. For each decision, subjects received a score of one for an optimal choice and a zero for a suboptimal choice. A set of post-task measures was included to identify the self-reported rule used during decision making, whether the rule changed during the exercise, self-reported understanding of economic maxims, and self-reported ability to do arithmetic, make investments, and play video games. A number of the measures checked the subjects' understanding of the task, level of effort, and beliefs about the purpose of the study.

It should be noted that the methodology used here was developed to maximize subjects' involvement in the research. Toward this end, subjects were provided with a clear objective in the research, a clear monetary reward which dominated their choice behavior during the experiment and a reward that was clearly tied to their performance and no one else's. Grether and Wilde (1984) have described a set of four conditions that research addressing microeconomic issues should meet. Their conditions appear to be met by the current methodology.

RESULTS

Post-Task Measures

We first consider subjects' responses to the 7-point agree (7) - disagree (1) scales involving their reactions to the experimental task. Subjects strongly agreed that the instructions were very clear (M = 6.4). They perceived the "game" as interesting (M = 5.35) and disagreed that the game was tedious and boring (M = 2.52). Subjects also disagreed (M = 2.48) that the choices were very difficult. Concerning self-reported effort, subjects strongly agreed (M = 6.3) that they tried to make the best decision for every choice, although subjects in the $1000 investment-limited information condition reported significantly ( p < .05) less agreement (M = 5.5) than subjects in the remaining conditions. Subjects disagreed (M = 2.3) that they exerted less effort in decision making later in the game. Interestingly, subjects differed in their feelings that they had maximized their ending bank balance. Subjects in the full information conditions agreed (M = 5.5) more strongly (p < .05) with the statement than those in the limited information conditions (M = 4.3).

Given our expectation that the $1000 investment task is more difficult than the single unit purchase because the former requires the more mathematically complex optimization rule, we asked subjects whether a calculator would have made their decision making easier and more accurate. Although subjects in the single unit condition disagreed (M = 2.3) that a calculator would have made the task easier, subjects in the $1000 investment condition had a different perception (M = 4.7, p < .01). Similarly, subjects in the limited information condition (M = 3.9) differed (p < .05) from those in the full information condition (M = 2.9). In addition to these main effects, a significant (p < .01) interaction also arose such that subjects in the $1000 investment - limited information condition reported the greatest support (M = 5.7) for the calculator's potential to make their choices easier. Although these same patterns emerged for subjects' perceptions about making more accurate choices with the calculator, only the difference between the $1000 investment (M = 4.3) and single unit purchase (M = 2.2) conditions attained statistical significance (p < .01).

A disturbing finding concerns the existence of significant differences on measures of individual characteristics that should not differ across experimental conditions. For example, subjects in the $1000 investment - full information and single unit - limited information conditions significantly (p < .05) differed from subjects in the remaining conditions on reported mental arithmetic and financial abilities, understanding of expected value, present value, and marginal utility concepts, and their experience and skill with computer games. These unexpected differences suggest that the assumptions underlying random assignment of subjects to experimental conditions may not hold, and thus represent a serious threat to the internal validity of this study. A correlation and factor analysis revealed that none of the post-task belief measures were strongly related to performance. The highest correlation was between agreement with the statement "I am good at playing computer games" and subjects' scores on the second replication (r = 0.29). Most of the correlations were below 0.10, with only a few above 0.2. But because of concern over our ability to control for extraneous between-subject effects through randomization (given our small cell sizes), we ran several covariance analyses adjusting for self-reported involvement and effort, self-reported arithmetic ability, and self-reported knowledge of economics and finance. Interestingly, only the first covariance analysis had a material effect on our findings.

Decision Accuracy

On the whole, subjects displayed considerable skill in making the optimal decision. Thirteen of the 48 subjects had a perfect score across the 40 decisions. A total of thirty subjects had an accuracy rate of 90% or more. Only seven subjects failed to make the optimal choice more than half the time. The average accuracy rate was 80 percent. A within-cell test of the first hypothesis led to its rejection only in the $1000 investment task condition (where the Ratio rule had to be applied) in which subjects were not provided with the difference and ratio calculations for each of the three choices (see Table 1 for the mean scores and standard deviations for each cell). This result should be qualified by the low statistical power afforded by the experiment.
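As a rough sketch of the kind of within-cell test H1 implies, a one-sample directional t-test of mean accuracy against 18 out of 20 can be computed as below. The scores are hypothetical, not the paper's data, and the critical value is the standard one-tailed t value for alpha = .05:

```python
import statistics as st

# Hypothetical accuracy scores out of 20 for one cell (not the paper's data).
scores = [20, 20, 19, 18, 16, 15, 20, 17, 14, 19, 18, 14]
n = len(scores)

# t statistic for H1 (population mean = 18) against the directional
# alternative that the mean is less than 18.
t = (st.mean(scores) - 18) / (st.stdev(scores) / n ** 0.5)

# One-tailed critical value for alpha = .05 with df = 11 is about -1.796;
# H1 is rejected only if t falls below it.
reject_h1 = t < -1.796
```

With these figures the sample mean (17.5) falls below 18 yet t is only about -0.76, so H1 is not rejected, illustrating how small cells can leave such a test underpowered.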

An interesting question is whether the poor performance was due to a lack of ability, a lack of motivation to optimize, or both. A comparison of subjects scoring 75% accuracy or better versus those less accurate on key post-task measures was therefore undertaken. Subjects high and low in accuracy equally agreed (p > .9) that they tried to make the best decision for each choice. Nor did they differ in their understanding (p = .8) of the task. While there was a tendency for the less accurate subjects to perceive that a calculator would have been useful, this difference was not statistically significant (p = .14). This latter result may in part stem from limited statistical power given that the less accurate group consisted of only 14 subjects.

Impact of Experimental Manipulations

Table 1 summarizes the cell means for decision accuracy. The pattern of these data is very consistent with the research hypotheses. There is a tendency for subjects to make better decisions (1) when the Diff rule is the appropriate optimization rule, (2) under full information conditions involving only the Ratio rule, and (3) with practice. The results of a 2 (Investment Task) X 2 (Information Set) X 2 (Replication) ANOVA consisting of both between-subjects (the first and second manipulations) and within-subjects (Replication) factors are presented in Table 2.

As can be seen, none of the experimental manipulations attained statistical significance, although both the Investment Task and Replication factors approached significance (p = .11). Thus, we are unable to support most of the research hypotheses. Taken at face value, this lack of support suggests that the research hypotheses are incorrect. However, we believe such a conclusion would be premature for several reasons.

First, as discussed above, the pattern of cell means, although statistically insignificant, is very consistent with the research hypotheses (as are many of the findings involving the post-task measures). Second, the power of the statistical tests may be unduly constrained by a relatively small sample size. In this regard, we should note that a sample with the same pattern of mean responses and variability as the current sample but triple the size would have produced statistically significant main effects for Investment Task and Replication and a significant Investment Task by Information Set interaction effect.

Third, the apparent "breakdown" of random assignment procedures may have introduced an unfortunate bias in the results. A set of six measures of task involvement and effort introduced as covariates was reduced to three that explained a statistically significant amount of performance variance. The resulting covariance analysis produced a significant effect of task that supported our second hypothesis (see Table 2). Table 1 reports the adjusted cell means. Finally, the decision sets may have been too easy, as reflected by the high degree of choice accuracy. Obviously, it will be quite difficult to demonstrate a learning effect when subjects are "perfect" initially. A separate analysis that included only the subjects who scored less than 20 on the first replication resulted in a significant directional replication effect (p < .05). This set of 31 subjects was significantly more accurate in the second effort (M1 = 13.4, M2 = 14.5), thus supporting H5.

Self-Reported Rule Usage

Subjects were asked to select from a series of statements (see Table 3) the one which described how they made their investment choices. None of the subjects selected the "random" decision rule, while six selected the "other" option. Examination of these six subjects' descriptions of their decision-making process revealed that they had in fact used a strategy consistent with one of those described in Table 3, but had simply failed to recognize this fact. We compared the recognized rule choice and self-described use of the different possible choice rules across the experimental conditions and then examined whether self-reported use of the optimal choice rule was related to performance. In the single unit purchase task condition, 76% reported using the correct Diff rule when presented with the options in Table 3. An analysis of the written (open-ended) descriptions revealed that 80% of the subjects confronted with this choice task used the correct Diff rule. The results were very different for the $1000 investment task. In the full information - $1000 investment task condition, 80% reported using the correct Ratio rule from the list presented in Table 3, but none of the subjects in the limited information - $1000 investment task condition reported using the appropriate rule.

In short, there was a significant task/information interaction effect on the self-reported use of the correct rule (p < .05). This suggests that a major reason why the subjects in the $1000 investment - limited information condition did not do as well was that they did not use the correct rule. However, examination of the self-described rules revealed that 5 of the 13 subjects in this condition did indeed use a ratio rule - they simply did not recognize it as such when presented with the choices in Table 3. For example, one of these subjects described the choice rule thus: "I took the cost of investment decision and divided it into $1000. For example, 1000/10 = 100 units. I then multiplied that amount (100 units) by the return. This was done for all three choices. I then selected the choice with the highest return." Such subjects were not able to recognize that they had, in effect, used the ratio rule more simply described in Table 3. In both task conditions, self-reported use of the correct rule (on either the recognition or the open-ended measure) was associated with significantly higher performance (p < .0001).
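The equivalence such subjects failed to recognize is simple algebra: investing $1000 at unit cost c and unit return r yields (1000/c) * r = 1000 * (r/c), so their procedure ranks alternatives exactly as the Ratio rule does. A quick check with hypothetical figures:

```python
# Hypothetical (cost, return) pairs; any positive figures give the same result.
choices = [(10.0, 12.0), (5.0, 6.5), (20.0, 25.0)]

subject_rule = [(1000.0 / c) * r for c, r in choices]  # units bought * unit return
ratio_rule = [r / c for c, r in choices]               # return per dollar of cost

# Both rules order the three alternatives identically, since the first is
# just the second scaled by the constant budget of 1000.
same_order = sorted(range(len(choices)), key=lambda i: subject_rule[i]) == \
             sorted(range(len(choices)), key=lambda i: ratio_rule[i])
print(same_order)
```

Because the budget is a constant multiplier, the two procedures can never disagree about which alternative is best.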

Of further interest is that the subjects who reported changing their rule did not improve their performance in the second replication. Four of the five subjects in the single unit purchase task who reported switching switched to the correct rule, but only one significantly improved performance. In the $1000 investment task condition, nine reported switching (6 of 13 in the limited information condition), with only four reporting a switch to the correct rule, and none of the four significantly improved their performance as a result. It seems that the learning effect we observed was not the result of rule switching but of improvement in the application or execution of the correct rule.

TABLE 1

EXPERIMENTAL CELL MEANS FOR CHOICE ACCURACY

TABLE 2

ANOVA RESULTS

TABLE 3

DECISION RULES

DISCUSSION

The present study indicates that while some subjects were unable or unwilling (or both) to optimize their decision making, the majority were in fact able to maximize their decision outcomes. Unfortunately, in this pretest we were unable to clearly discriminate between the motivational and ability explanations for suboptimizers, an issue clearly for future investigation. Even so, we were able to provide direct evidence that many subjects can and do make decisions which maximize utility. The results also suggest that for the task where subjects had to assess the return to cost ratio in an information condition where the computation was not done for them, they were not able to consistently optimize even when motivated to do so. If replicated in later studies, this is an important result, as this situation comes closest to the typical real-world circumstance where the decision maker has to allocate and optimize using marginal utility divided by price. The experimental task the subjects faced was much more straightforward than choice decisions in the real world: utility and costs were provided in the same units and with no uncertainty. Consequently, it can be expected that subjects will be even less capable of optimizing in the real world.

The lack of evidence supporting some of the experimental manipulations is, of course, disappointing. We believe, however, that refinements in the task setting (e.g., more difficult tasks) and increased sample size (for both power and random assignment reasons) will permit a more appropriate test of the research hypotheses. We are currently pursuing an experiment that studies differences in decision time as well as accuracy. Time spent making the choice, along with some other post-task measures, should help us tease out whether poor performance reflects a lack of motivation or a lack of ability. The latest experiment also examines the impact of changing the decision task, and hence the optimizing rule, between the replications. This will permit some insight into subjects' rigidity in their application of choice strategies (i.e., ability to adapt). In our follow-up study we are also examining the framing effects of providing only the net return computation or only the value for money calculation in both tasks.

REFERENCES

Arrow, Kenneth J. (1982), "Risk Perception in Psychology and Economics," Economic Inquiry, 20 (January), 1-9.

Becker, Gary (1962), "Irrational Behavior and Economic Theory," Journal of Political Economy, LXX (February), 1-13.

Boland, Lawrence A. (1981), "On the Futility of Criticizing the Neoclassical Maximization Hypothesis," American Economic Review, 71 (December), 1031-1036.

Caldwell, Bruce J. (1983), "The Neoclassical Maximization Hypothesis: Comment," The American Economic Review, 73.4, (September), 824-830.

Cox, Donald F. (1967), "The Sorting Rule Model of the Consumer Product Evaluation Process," in Donald F. Cox (ed.), Risk Taking and Information Handling in Consumer Behavior, Boston: Division of Research, Graduate School of Business Administration, Harvard University.

Gilad, Benjamin, Stanley Kaish, and Peter D. Loeb (1984), "From Economic Behavior to Behavioral Economics: The Behavioral Uprising in Economics," Journal of Behavioral Economics, 12 (Winter), 1-22.

Green, Paul E. and V. Srinivasan (1978), "Conjoint Analysis in Consumer Research: Issues and Outlook," Journal of Consumer Research, 5 (September), 103-122.

Grether, David and Louis Wilde (1984), "Experimental Economics and Consumer Research," in Thomas Kinnear (ed.), Advances in Consumer Research, Vol. 11, Provo, UT: Association for Consumer Research, 724-728.

Kahneman, Daniel and Amos Tversky (1984), "Choices, Values, and Frames," American Psychologist, 39 (April), 341-350.

Kunreuther, H., R. Ginsberg, and L. Miller (1978), Disaster Insurance Protection: Public Policy Lessons, New York: Wiley.

Lussier, Denis A. and Richard W. Olshavsky (1979), "Task Complexity and Contingent Processing in Brand Choice," Journal of Consumer Research, 6 (September), 154-165.

Mitchell, Andrew (1978), The Effect of Information on Consumer Market Behavior, Chicago: American Marketing Association.

Monroe, Kent B. (1979), Pricing: Making Profitable Decisions, New York: McGraw-Hill.

Monroe, Kent B. (1984), "Theoretical and Methodological Developments in Pricing," in Thomas Kinnear (ed.), Advances in Consumer Research, Vol. 11, Provo, UT: Association for Consumer Research, 636-637.

Nagle, Thomas (1984), "Economic Foundations for Pricing," Journal of Business, 57 (1), S3-S26.

Olson, Jerry (1977), "Price as an Informational Cue: Effects on Product Evaluations," in Consumer and Industrial Buying Behavior, (eds.) Arch G. Woodside, Jagdish N. Sheth, and Peter D. Bennet, New York: North Holland.

Russo, J. Edward (1977), "The Value of Unit Price Information," Journal of Marketing Research, 14 (May), 193-201.

Sheluga, David A., James Jaccard, and Jacob Jacoby (1979), "Preferences, Search, and Choice: An Integrative Approach," Journal of Consumer Research, 6 (September), 166-176.

Simon, Herbert A. (1957), Models of Man, New York: Wiley.

Simon, Herbert A. (1978), "Rationality as Process and as Product of Thought," American Economic Review, 68 (May), 1-16.

Simon, Herbert A. (1979), "Rational Decision-Making in Business Organizations," American Economic Review, 69 (September), 493-513.

Slovic, Paul and Sarah Lichtenstein (1983), "Preference Reversals: A Broader Perspective," American Economic Review, 73 (September), 596-605.

Waud, Roger N. (1980), Economics, New York: Harper and Row.

Wilkie, William L. and Edgar A. Pessemier (1973), "Issues in Marketing's Use of Multi-Attribute Attitude Models," Journal of Marketing Research, 10, 428-441.

Zeithaml, Valerie A. (1984), "Issues in Conceptualizing and Measuring Consumer Response to Price," in Thomas Kinnear (ed.), Advances in Consumer Research, Vol. 11, Provo, UT: Association for Consumer Research, 612-616.
